Category Archives: Ai

AI voice actors sound more human than ever - and they're ready to hire – MIT Technology Review

Posted: July 12, 2021 at 7:52 am

The company blog post drips with the enthusiasm of a '90s US infomercial. WellSaid Labs describes what clients can expect from its eight new digital voice actors! Tobin is "energetic and insightful." Paige is "poised and expressive." Ava is "polished, self-assured, and professional."

Each one is based on a real voice actor, whose likeness (with consent) has been preserved using AI. Companies can now license these voices to say whatever they need. They simply feed some text into the voice engine, and out will spool a crisp audio clip of a natural-sounding performance.

WellSaid Labs, a Seattle-based startup that spun out of the research nonprofit Allen Institute for Artificial Intelligence, is the latest firm offering AI voices to clients. For now, it specializes in voices for corporate e-learning videos. Other startups make voices for digital assistants, call center operators, and even video-game characters.

Not too long ago, such deepfake voices had something of a lousy reputation for their use in scam calls and internet trickery. But their improving quality has since piqued the interest of a growing number of companies. Recent breakthroughs in deep learning have made it possible to replicate many of the subtleties of human speech. These voices pause and breathe in all the right places. They can change their style or emotion. You can spot the trick if they speak for too long, but in short audio clips, some have become indistinguishable from humans.

AI voices are also cheap, scalable, and easy to work with. Unlike a recording of a human voice actor, synthetic voices can also update their script in real time, opening up new opportunities to personalize advertising.

But the rise of hyperrealistic fake voices isn't consequence-free. Human voice actors, in particular, have been left to wonder what this means for their livelihoods.

Synthetic voices have been around for a while. But the old ones, including the voices of the original Siri and Alexa, simply glued together words and sounds to achieve a clunky, robotic effect. Getting them to sound any more natural was a laborious manual task.

Deep learning changed that. Voice developers no longer needed to dictate the exact pacing, pronunciation, or intonation of the generated speech. Instead, they could feed a few hours of audio into an algorithm and have the algorithm learn those patterns on its own.

Over the years, researchers have used this basic idea to build voice engines that are more and more sophisticated. The one WellSaid Labs constructed, for example, uses two primary deep-learning models. The first predicts, from a passage of text, the broad strokes of what a speaker will sound like, including accent, pitch, and timbre. The second fills in the details, including breaths and the way the voice resonates in its environment.
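
WellSaid has not published its model internals, but the two-stage split described above - coarse performance prediction followed by detail rendering - can be sketched schematically. In the toy Python below, both stages are invented stand-ins (a random prosody predictor and a sine-wave "vocoder") chosen only so the pipeline runs end to end; none of the names or numbers come from WellSaid.

```python
import numpy as np

# Stage 1 stand-in: predict the "broad strokes" of the performance from
# text. A real system would use a trained sequence model; here we invent
# a per-character pitch contour and timing so the pipeline runs end to end.
def predict_prosody(text):
    rng = np.random.default_rng(0)
    n = len(text)
    return {
        "pitch_hz": 180 + 40 * rng.standard_normal(n),  # coarse pitch contour
        "duration_s": np.full(n, 0.07),                 # crude per-unit timing
    }

# Stage 2 stand-in: fill in the fine detail, turning the coarse features
# into a waveform (a sine oscillator standing in for a neural vocoder).
def render_audio(prosody, sample_rate=16000):
    chunks = []
    for f0, dur in zip(prosody["pitch_hz"], prosody["duration_s"]):
        t = np.arange(int(dur * sample_rate)) / sample_rate
        chunks.append(np.sin(2 * np.pi * f0 * t))
    return np.concatenate(chunks)

audio = render_audio(predict_prosody("Hello from a synthetic voice."))
print(audio.shape)  # one waveform array, ready to write to a .wav file
```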

Making a convincing synthetic voice takes more than just pressing a button, however. Part of what makes a human voice so human is its inconsistency, expressiveness, and ability to deliver the same lines in completely different styles, depending on the context.

Capturing these nuances involves finding the right voice actors to supply the appropriate training data and fine-tune the deep-learning models. WellSaid says the process requires at least an hour or two of audio and a few weeks of labor to develop a realistic-sounding synthetic replica.

AI voices have grown particularly popular among brands looking to maintain a consistent sound in millions of interactions with customers. With the ubiquity of smart speakers today, and the rise of automated customer service agents as well as digital assistants embedded in cars and smart devices, brands may need to produce upwards of a hundred hours of audio a month. But they also no longer want to use the generic voices offered by traditional text-to-speech technology, a trend that accelerated during the pandemic as more and more customers skipped in-store interactions to engage with companies virtually.

"If I'm Pizza Hut, I certainly can't sound like Domino's, and I certainly can't sound like Papa John's," says Rupal Patel, a professor at Northeastern University and the founder and CEO of VocaliD, which promises to build custom voices that match a company's brand identity. "These brands have thought about their colors. They've thought about their fonts. Now they've got to start thinking about the way their voice sounds as well."

Whereas companies used to have to hire different voice actors for different markets (the Northeast versus the Southern US, or France versus Mexico), some voice AI firms can manipulate the accent or switch the language of a single voice in different ways. This opens up the possibility of adapting ads on streaming platforms depending on who is listening, changing not just the characteristics of the voice but also the words being spoken. A beer ad could tell a listener to stop by a different pub depending on whether it's playing in New York or Toronto, for example. Resemble.ai, which designs voices for ads and smart assistants, says it's already working with clients to launch such personalized audio ads on Spotify and Pandora.

The gaming and entertainment industries are also seeing the benefits. Sonantic, a firm that specializes in emotive voices that can laugh and cry or whisper and shout, works with video-game makers and animation studios to supply the voice-overs for their characters. Many of its clients use the synthesized voices only in pre-production and switch to real voice actors for the final production. But Sonantic says a few have started using them throughout the process, perhaps for characters with fewer lines. Resemble.ai and others have also worked with film and TV shows to patch up actors performances when words get garbled or mispronounced.

Read more here:

AI voice actors sound more human than ever - and they're ready to hire - MIT Technology Review

Need to Fit Billions of Transistors on a Chip? Let AI Do It – WIRED

Posted: at 7:52 am

Artificial intelligence is now helping to design computer chips, including the very ones needed to run the most powerful AI code.

Sketching out a computer chip is both complex and intricate, requiring designers to arrange billions of components on a surface smaller than a fingernail. Decisions at each step can affect a chip's eventual performance and reliability, so the best chip designers rely on years of experience and hard-won know-how to lay out circuits that squeeze the best performance and power efficiency from nanoscopic devices. Previous efforts to automate chip design over several decades have come to little.

But recent advances in AI have made it possible for algorithms to learn some of the dark arts involved in chip design. This should help companies draw up more powerful and efficient blueprints in much less time. Importantly, the approach may also help engineers co-design AI software, experimenting with different tweaks to the code along with different circuit layouts to find the optimal configuration of both.

At the same time, the rise of AI has sparked new interest in all sorts of novel chip designs. Cutting-edge chips are increasingly important to just about all corners of the economy, from cars to medical devices to scientific research.

Chipmakers, including Nvidia, Google, and IBM, are all testing AI tools that help arrange components and wiring on complex chips. The approach may shake up the chip industry, but it could also introduce new engineering complexities, because the type of algorithms being deployed can sometimes behave in unpredictable ways.

At Nvidia, principal research scientist Haoxing Mark Ren is testing how an AI concept known as reinforcement learning can help arrange components on a chip and how to wire them together. The approach, which lets a machine learn from experience and experimentation, has been key to some major advances in AI.

The AI tools Ren is testing explore different chip designs in simulation, training a large artificial neural network to recognize which decisions ultimately produce a high-performing chip. Ren says the approach should cut the engineering effort needed to produce a chip in half while producing a chip that matches or exceeds the performance of a human-designed one.

"You can design chips more efficiently," Ren says. "Also, it gives you the opportunity to explore more design space, which means you can make better chips."
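
Nvidia has not published its tooling or reward design, so the sketch below is only a toy illustration of the underlying idea: an agent places components one at a time and learns, from a wirelength-based reward, which placement decisions pay off. The grid, components, and nets are invented for the example.

```python
import random

SLOTS = list(range(5))       # candidate positions on a tiny 1-D row
NETS = [(0, 1), (1, 2)]      # which of the 3 components are wired together

def wirelength(placement):
    # placement[i] is the slot assigned to component i
    return sum(abs(placement[a] - placement[b]) for a, b in NETS)

Q = {}                       # Q[(state, action)] -> estimated final reward
def qval(s, a):
    return Q.get((s, a), 0.0)

random.seed(0)
for episode in range(5000):
    state, placement, trajectory = (), [], []
    for comp in range(3):                       # place components one by one
        free = [s for s in SLOTS if s not in placement]
        if random.random() < 0.1:               # explore
            action = random.choice(free)
        else:                                   # exploit learned values
            action = max(free, key=lambda a: qval(state, a))
        trajectory.append((state, action))
        placement.append(action)
        state = tuple(placement)
    reward = -wirelength(placement)             # shorter wiring is better
    for s, a in trajectory:                     # Monte Carlo value update
        Q[(s, a)] = qval(s, a) + 0.1 * (reward - qval(s, a))

# Greedy rollout with the learned values
state, placement = (), []
for comp in range(3):
    free = [s for s in SLOTS if s not in placement]
    action = max(free, key=lambda a: qval(state, a))
    placement.append(action)
    state = tuple(placement)
print(placement, "wirelength:", wirelength(placement))
```

A real placement problem has a vastly larger state space, which is why production systems pair reinforcement learning with deep neural networks rather than a lookup table.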

Nvidia started out making graphics cards for gamers but quickly saw the potential of the same chips for running powerful machine-learning algorithms, and it is now a leading maker of high-end AI chips. Ren says Nvidia plans to bring chips to market that have been crafted using AI but declined to say how soon. In the more distant future, he says, "you will probably see a major part of the chips that are designed with AI."

Reinforcement learning was used most famously to train computers to play complex games, including the board game Go, with superhuman skill, without any explicit instruction regarding a game's rules or principles of good play. It shows promise for various practical applications, including training robots to grasp new objects, flying fighter jets, and algorithmic stock trading.

Song Han, an assistant professor of electrical engineering and computer science at MIT, says reinforcement learning shows significant potential for improving the design of chips, because, as with a game like Go, it can be difficult to predict good decisions without years of experience and practice.

His research group recently developed a tool that uses reinforcement learning to identify the optimal size for different transistors on a computer chip, by exploring different chip designs in simulation. Importantly, it can also transfer what it has learned from one type of chip to another, which promises to lower the cost of automating the process. In experiments, the AI tool produced circuit designs that were 2.3 times more energy-efficient while generating one-fifth as much interference as ones designed by human engineers. The MIT researchers are working on AI algorithms at the same time as novel chip designs to make the most of both.

Other industry players, especially those that are heavily invested in developing and using AI, also are looking to adopt AI as a tool for chip design.

See more here:

Need to Fit Billions of Transistors on a Chip? Let AI Do It - WIRED

What Kind of Sea Ice is That? Thanks to AI, There’s an App for That – The Maritime Executive

Posted: at 7:52 am

People snapping photos and uploading them to an AI-driven app could someday help prevent Titanic-scale disasters. USCG file image

Published Jul 11, 2021 5:17 PM by Gemini News

[By Nancy Bazilchuk]

If you've watched Netflix, shopped online, or run your robot vacuum cleaner, you've interacted with artificial intelligence, or AI. AI is what allows computers to comb through an enormous amount of data to detect patterns or solve problems. The European Union says AI is set to be a defining future technology.

And yet, as much as AI is already interwoven into our everyday lives, there's one area of the globe where AI and its applications are in their infancy, says Ekaterina Kim, an associate professor at NTNU's Department of Marine Technology. That area is the Arctic, an area where she has specialized in studying sea ice, among other topics.

"It's used a lot in marketing, in medicine, but not so much in Arctic (research) communities," she said. "Although they have a lot of data, there is not enough AI attention in the field. There's a lot of data out there, waiting for people to do something with them."

So Kim and her colleagues Ole-Magnus Pedersen, a PhD candidate from the Department of Marine Technology, and Nabil Panchi, from the Indian Institute of Technology Kharagpur, decided to see if they could develop an app that used artificial intelligence to identify sea ice in the Arctic.

The result is "Ask Knut."

Climate change and changing sea ice

You may think there's not much difference between one chunk of sea ice and another, but that's just not so. In addition to icebergs, there's deformed ice, level ice, broken ice, ice floes, floe bergs, floe bits, pancake ice and brash ice.

The researchers wanted the app to be able to distinguish between the different kinds of ice and other white and blue objects out there, like sky, open water and underwater ice.

An example of what the eye sees on the left, and what Knut sees on the right. Photo: Sveinung Løset/NTNU

Different kinds of ice really matter to ship captains, for example, who might be navigating in icy waters. Actual icebergs are nothing like brash ice, the floating bits of ice that are two meters in diameter or less. Think of it: the Titanic wouldn't have sunk if it had just blundered into a patch of brash ice instead of a big iceberg.

Another factor that adds urgency to the situation is climate change, which is dramatically altering sea ice as oceans warm. Even with the help of satellite images and onboard ship technologies, knowing what's in icy waters ahead can be a difficult challenge, especially in fog or storms.

"Ice can be very difficult for navigation," Kim said. "From the water (at the ship level) it can be hard to detect where there is strong ice, multiyear ice, and different ice. Some ice is much more dangerous than other types."

More kinds of ice than you can possibly imagine

It's often said that Inuit people have many different names for snow, which may or may not be true. But researchers definitely have names for different kinds of ice. Here are the kinds of ice that "Knut" is learning to identify:

Learning from examples

The team began teaching their app's AI system using a comprehensive collection of photographs taken by another NTNU ice researcher, Sveinung Løset.

But an AI system is like a growing child: if it is to learn, it needs to be exposed to lots of information. That's where turning the AI into an app made sense. Although the COVID-19 pandemic has shut down most cruise operations, as the pandemic wanes, people will begin to take cruises again, including to the Arctic and Antarctic.

Kim envisions tourists using the app to take pictures of different kinds of ice to see who finds the most different kinds of ice. And every one of those pictures helps the app learn.
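
The article does not describe Knut's internals beyond its learning from labeled photographs. A common recipe for this kind of task is transfer learning from a pretrained image model; the sketch below assumes a hypothetical folder of photos sorted into one directory per ice type, and is illustrative rather than a description of the actual app.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed layout (invented for this sketch): ice_photos/train/<class>/*.jpg,
# one folder per ice type, e.g. brash_ice/, pancake_ice/, level_ice/.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("ice_photos/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Start from an ImageNet backbone and retrain only the final layer.
model = models.resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

Every new labeled photo, like the tourist snapshots Kim envisions, simply grows the training set this loop learns from.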

From cruise ship to classroom

As the AI learns, Kim says, the increasingly complex dataset could be taken into the classroom, where navigators could learn about ice in a much more sophisticated way. Currently, students just look at pictures or listen to a PowerPoint presentation, where lecturers describe the different kinds of ice.

"So this could revolutionize how you learn about ice," she said. "You could have it in 3-D, you could immerse yourself and explore this digital image all around you, with links to different kinds of ice types."

This article appears courtesy of Gemini News and may be found in its original form here.

The opinions expressed herein are the author's and not necessarily those of The Maritime Executive.

Read the rest here:

What Kind of Sea Ice is That? Thanks to AI, There's an App for That - The Maritime Executive

We tested AI interview tools. Here's what we found. – MIT Technology Review

Posted: at 7:52 am

After more than a year of the covid-19 pandemic, millions of people are searching for employment in the United States. AI-powered interview software claims to help employers sift through applications to find the best people for the job. Companies specializing in this technology reported a surge in business during the pandemic.

But as the demand for these technologies increases, so do questions about their accuracy and reliability. In the latest episode of MIT Technology Review's podcast In Machines We Trust, we tested software from two firms specializing in AI job interviews, MyInterview and Curious Thing. And we found variations in the predictions and job-matching scores that raise concerns about what exactly these algorithms are evaluating.

MyInterview measures traits considered in the Big Five Personality Test, a psychometric evaluation often used in the hiring process. These traits include openness, conscientiousness, extroversion, agreeableness, and emotional stability. Curious Thing also measures personality-related traits, but instead of the Big Five, candidates are evaluated on other metrics, like humility and resilience.

The algorithms analyze candidates' responses to determine personality traits. MyInterview also compiles scores indicating how closely a candidate matches the characteristics identified by hiring managers as ideal for the position.
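
Neither vendor discloses how its match scores are computed. Purely to illustrate how a trait profile could be reduced to a single percentage, the snippet below compares a candidate's invented Big Five scores against an employer's ideal profile using cosine similarity; every number here is made up.

```python
import numpy as np

TRAITS = ["openness", "conscientiousness", "extroversion",
          "agreeableness", "emotional_stability"]

# Hypothetical profiles on a 0-1 scale: the employer's "ideal candidate"
# and a candidate's estimated traits, in the order listed above.
ideal = np.array([0.6, 0.9, 0.5, 0.7, 0.8])
candidate = np.array([0.5, 0.7, 0.6, 0.8, 0.6])

# Cosine similarity between the two trait vectors as a match percentage.
match = ideal @ candidate / (np.linalg.norm(ideal) * np.linalg.norm(candidate))
print(f"match score: {match:.0%}")  # prints "match score: 98%" for these numbers
```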

To complete our tests, we first set up the software. We uploaded a fake job posting for an office administrator/researcher on both MyInterview and Curious Thing. Then we constructed our ideal candidate by choosing personality-related traits when prompted by the system.

On MyInterview, we selected characteristics like attention to detail and ranked them by level of importance. We also selected interview questions, which are displayed on the screen while the candidate records video responses. On Curious Thing, we selected characteristics like humility, adaptability, and resilience.

One of us, Hilke, then applied for the position and completed interviews for the role on both MyInterview and Curious Thing.

Our candidate completed a phone interview with Curious Thing. She first did a regular job interview and received an 8.5 out of 9 for English competency. In a second try, the automated interviewer asked the same questions, and she responded to each by reading the Wikipedia entry for psychometrics in German.

Yet Curious Thing awarded her a 6 out of 9 for English competency. She completed the interview again and received the same score.

Our candidate turned to MyInterview and repeated the experiment. She read the same Wikipedia entry aloud in German. The algorithm not only returned a personality assessment, but it also predicted our candidate to be a 73% match for the fake job, putting her in the top half of all the applicants we had asked to apply.

Read this article:

We tested AI interview tools. Here's what we found. - MIT Technology Review

Google, Facebook, And Microsoft Are Working On AI Ethics. Here's What Your Company Should Be Doing – Forbes

Posted: at 7:52 am

The Ethics of AI

As AI is making its way into more companies, the board and senior executives need to mitigate the risk of their AI-based systems. One area of risk includes the reputational, regulatory, and legal risks of AI-led ethical decisions.

AI-based systems are often faced with making decisions that were not built into their models: decisions representing ethical dilemmas.

For example, suppose a company builds an AI-based system to optimize the number of advertisements we see. In that case, the AI may encourage incendiary content that causes users to get angry and comment and post their own opinions. If this works, users spend more time on the site and see more ads. The AI has done its job without ethical oversight. The unintended consequence is the polarization of users.

What happens if your company builds a system that automates work so that you no longer need that employee? What is the company's ethical responsibility to that employee, to society? Who is determining the ethics of the impact related to employment?

What if the AI tells a loan officer to recommend against providing a loan to a person? If the human doesn't understand how the AI came to that conclusion, how can the human know if the decision was ethical or not? (see How AI Can Go Terribly Wrong: 5 Biases That Create Failure)

Suppose the data used to train your AI system doesn't have sufficient data about specific classes of individuals. In that case, it may not learn what to do when it encounters those individuals. Would a facial recognition system used for check-in to a hotel recognize a person with freckles? If the system stops working and makes check-in harder for a person with freckles, what should the company do? How does the company address this ethical dilemma? (see Why Are Technology Companies Quitting Facial Recognition?)

If the developers who identify the data to be used for training an AI system aren't looking for bias, how can they prevent an ethical dilemma? For example, suppose a company has historically hired more men than women. In that case, a bias is likely to exist in the resume data. Men tend to use different words than women in their resumes. If the data is sourced from men's resumes, then women's resumes may be viewed less favorably, just based on word choice.

Google, Facebook, and Microsoft are addressing these ethical issues. Many have pointed to the missteps Google and Facebook have made in attempting to address AI ethical issues. Let's look at some of the positive elements of what they and Microsoft are doing to address AI ethics.

While each company is addressing these principles differently, we can learn a lot by examining their commonalities. Here are some fundamental principles they address.

While these tech giants are imperfect, they are leading the way in addressing ethical AI challenges. What are your board and senior management team doing to address these issues?

Below are some suggestions you can implement now.

By addressing these issues now, your company will reduce the risks of having AI make or recommend decisions that imperil the company. (see AI Can Be Dangerous - How to Reduce Risk When Using AI) Are you aware of the reputational, regulatory, and legal risks associated with the ethics of your AI?

View post:

Google, Facebook, And Microsoft Are Working On AI Ethics. Here's What Your Company Should Be Doing - Forbes

AI in the courts – The Indian Express

Posted: at 7:52 am

Written by Kartik Pant

Artificial Intelligence (AI) seems to be catching the attention of a large section of people, no doubt because of the infinite possibilities it offers. It assimilates, contributes as well as poses challenges to almost all disciplines including philosophy, cognitive science, economics, law, and the social sciences. AI and Machine Learning (ML) have a multiplier effect on increasing the efficiency of any system or industry. If used effectively, it can bring about incremental changes and transform the ecosystem of several sectors. However, before applying such technology, it is important to identify the problems and the challenges within each sector and develop the specific modalities on how the AI architecture will have the highest impact.

In the justice delivery system, there are multiple spaces where the application of AI can have a deep impact. It has the capacity to reduce pendency and incrementally improve judicial processes. The recent National Judicial Data Grid (NJDG) shows that 3,89,41,148 cases are pending at the District and Taluka levels and 58,43,113 are still unresolved at the high courts. Such pendency has a spin-off effect that takes a toll on the efficiency of the judiciary, and ultimately reduces people's access to justice.

The use of AI in the justice system depends on first identifying various legal processes where the application of this technology can reduce pendency and increase efficiency. The machine first needs to perceive a particular process and get information about the process under examination. For example, to extract facts from a legal document, the programme should be able to understand the document and what it entails. Over time, the machine can learn from experience, and as we provide more data, the programme learns and makes predictions about the document, thereby making the underlying system more intelligent every time. This requires the development of computer programmes and software that are highly complex and require advanced technologies. Additionally, such systems need constant nurturing to reduce bias and increase learning.
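
SUPACE's and SUVAS's internals are not public, but the learn-from-examples loop described above is the standard supervised-learning pattern. A minimal sketch with scikit-learn, using invented case summaries and labels:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: short case summaries with hand-assigned categories.
# A real system would learn from thousands of labelled filings.
docs = [
    "dispute over unpaid rent and eviction of tenant",
    "landlord failed to return security deposit",
    "theft of vehicle and criminal charges filed",
    "assault charges brought against the accused",
]
labels = ["civil", "civil", "criminal", "criminal"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

# The more labelled documents the model sees, the better its predictions.
print(model.predict(["accused arrested for burglary of a shop"]))  # likely ['criminal']
```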

One such complex tool named SUPACE (Supreme Court Portal for Assistance in Court Efficiency) was recently launched by the Supreme Court of India. Designed to first understand judicial processes that require automation, it then assists the Court in improving efficiency and reducing pendency by encapsulating judicial processes that have the capability of being automated through AI.

Similarly, SUVAS is an AI system that can assist in the translation of judgments into regional languages. This is another landmark effort to increase access to justice. The technology, when applied in the long run to solve other challenges of translation in filing of cases, will reduce the time taken to file a case and assist the court in becoming an independent, quick, and efficient system.

Through these steps, the Supreme Court has become the global frontrunner in application of AI and Machine Learning into processes of the justice system. But we must remember that despite the great advances made by the apex court, the current development in the realm of AI is only scratching the surface.

Over time, as one understands and evaluates various legal processes, AI and related technologies will be able to automate and complement several tasks performed by legal professionals. It will allow them to invest more energy in creatively solving legal issues. It has the possibility of helping judges conduct trials faster and more effectively thereby reducing the pendency of cases. It will assist legal professionals in devoting more time in developing better legal reasoning, legal discussion and interpretation of laws.

However, the integration of these technologies will be a challenging task as the legal architecture is highly complex and technologies can only be auxiliary means to achieve legal justice. There is also no doubt that as AI technology grows, concerns about data protection, privacy, human rights and ethics will pose fresh challenges and will require great self-regulation by developers of these technologies. It will also require external regulation by the legislature through statute, rules, and regulation, and by the judiciary through judicial review qua constitutional standards. But with increasing adoption of the technology, there will be more debates and conversations on these problems as well as their potential solutions. In the long run, all this would help in reducing the pendency of cases and improving the overall efficiency of the justice system.

The writer is founding partner, Prakant Law offices and a public policy consultant

Continued here:

AI in the courts - The Indian Express

Seizing the Opportunity to Leverage AI & ML for Clinical Research – Analytics Insight

Posted: at 7:52 am

Pharmaceutical professionals believe artificial intelligence (AI) will be the most disruptive technology in the industry in 2021. As AI and machine learning (ML) become crucial tools for keeping pace in the industry, clinical development is an area that can substantially benefit, delivering significant time and cost efficiencies while providing better, faster insights to inform decision making. However, for patients, these tools provide improved safety practices that lead to better, safer drugs. Here is how AI/ML can be used to support pharma companies in delivering safer drugs to market.

Today, AI and ML can be used to support clinical research in numerous ways, including the identification of molecules that hold potential for clinical treatments, finding patient populations that meet specific criteria for inclusion or exclusion, as well as analyzing scans, claims reports, and other healthcare data to identify trends in clinical research and treatments that lead to safer and faster decisions.

However, to take full advantage of the benefits of AI/ML technology, organizations performing clinical trials must first gain access to the tools, expertise, and industry-specific datasets enabling them to build algorithms to fit their specific needs. Healthcare data, unlike purely numerical data pulled from monitoring systems and tools such as IoT or SaaS platforms, is typically unstructured due to the way the data is collected (through doctor visits and unstructured web sources) and must meet strict security protocols to ensure patient privacy.

To truly leverage AI and ML for clinical research, data must be collected, studied, combined, and protected to make effective healthcare decisions. When clinical researchers collaborate with partners that have both technical and pharmaceutical expertise, they ensure that data is being structured and analyzed in a way that simultaneously reduces risks and improves the quality of clinical research.

When it comes to research study design, site identification and patient recruitment, and clinical monitoring, AI and ML hold great potential to make clinical trials faster, more efficient, and most importantly: safer.

Study design sets the stage for a clinical research initiative. The cost, efficiency, and potential success of clinical trials rest squarely on the shoulders of the study's design and plans. AI and ML tools, along with natural language processing (NLP), can analyze large sets of healthcare data to assess and identify primary and secondary endpoints in clinical research design. This ensures that protocols for regulators, payers, and patients are well defined before clinical trials commence. Defining parameters such as these optimizes study design by helping to identify ideal research sites and enrollment models. Ultimately, better study design leads to more predictable results, reduced cycle time for protocol development, and a generally more efficient study.

Identifying trial sites and recruiting patients for clinical research is a tougher task than it seems at face value. Clinical researchers must identify the area that will provide enough access to patients who meet inclusion and exclusion criteria. As studies become more focused on rarer conditions or specific populations, recruiting participants for clinical trials becomes more difficult, which increases the cost, timeline, and risk of failure for the clinical study if enough patients cannot be recruited for the research. AI and ML tools can support site identification for clinical research by mapping patient populations and proactively targeting sites with the most potential patients that meet inclusion criteria. This enables fewer research sites to meet recruitment requirements and reduces the overall cost of patient recruitment.

Clinical monitoring is a tedious manual process of analyzing site risks of clinical research and determining specific actions to take towards mitigating those risks. Risks in clinical research include recruitment or performance issues, as well as risks to patient safety. AI and ML automate the assessment of risks in the clinical research environment, and provide suggestions based on predictive analytics to better monitor for and prevent risks. Automating this assessment removes the risk of manual error, and decreases the time spent on analyzing clinical research data.

During clinical trials, there's a limited patient population to pull from, as research subjects must meet pre-set parameters for inclusion in the study. On the other hand, as opposed to post-market research, clinical researchers are blessed with vast amounts of information surrounding their patients, including what drugs they are taking, their health history, and their current environment.

In addition, because the clinical researcher is working closely with the patient and is well-educated on the drug or product being researched, the researcher is very familiar with all potential variables involved in the clinical trial. To put it simply, clinical trials have a lot of information to analyze, but few patients with whom to conduct the research. Because of this disproportionate ratio of information over patients, every case in a clinical research setting is extremely important to the future of the drug being researched.

The massive amount of patient and drug information available to clinical researchers necessitates the use of NLP tools to analyze and process documents and patient records. NLP can search documents and records for specific terms, phrases, and words that might indicate a problem or risk in the clinical trial. This eliminates the need for manual analysis of clinical trial data, reducing, and in some cases eliminating, the risk of human error while also increasing patient safety. This is especially useful in lengthy clinical trials, for which researchers will need to analyze patient histories and drug results over an extended period of time. Many clinical trials have long document trails and questionnaires that can add up to hundreds of pages of patient data that researchers must analyze.

In a clinical trial, researchers are ultimately trying to determine whether the benefits of a specific treatment outweigh the risks. AI can be especially helpful in clinical trials of high-risk drugs. If a researcher knows that a drug cures or alleviates an illness or condition, but also knows that the potential side effects of that drug can have a significant negative impact on the patient, they'll want to know how to determine if a patient is likely to present those negative side effects. NLP can be used to produce word clouds of potential signals of the negative side effects of a drug that patients would experience.

The only way to do this type of analysis manually is to identify those words using human researchers, then analyze the patient reports to find those words, and group those reports into risk profiles. NLP can automate that entire process and provide insights on risk indicators in patients much more efficiently and safely than human researchers ever could.
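
As a rough sketch of that automation, the snippet below flags patient reports containing terms from an adverse-event vocabulary and groups them into crude risk profiles. The term list and reports are invented; a production pharmacovigilance system would draw on a medical dictionary such as MedDRA and use full NLP rather than substring matching.

```python
# Hypothetical adverse-event vocabulary (a real system would use a
# standardized medical dictionary, not a hand-picked set).
RISK_TERMS = {"dizziness", "nausea", "palpitations", "rash", "fainting"}

reports = {
    "patient_014": "Reports mild nausea after the second dose, otherwise well.",
    "patient_027": "No complaints; vitals stable throughout the visit.",
    "patient_033": "Episodes of dizziness and fainting noted on day 12.",
}

# Flag reports mentioning risk terms and group them into crude profiles.
profiles = {"flagged": [], "clear": []}
for patient, text in reports.items():
    hits = sorted(t for t in RISK_TERMS if t in text.lower())
    if hits:
        profiles["flagged"].append((patient, hits))
    else:
        profiles["clear"].append(patient)

print(profiles["flagged"])
# [('patient_014', ['nausea']), ('patient_033', ['dizziness', 'fainting'])]
```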

AI and ML technologies, especially NLP, hold huge promise to support and optimize clinical research. However, that assurance can only be achieved by organizations that have the necessary tools, expertise, and partners to leverage the full benefits of AI and ML. AI and ML solutions support the optimization of clinical research by more efficiently analyzing research data for risks and allowing faster trial planning and research. Those who fail to engage AI and ML for clinical research may find that their competitors are doing so, and as a result, are going to market with new drugs and products faster with higher profits due to decreased research time and safer practices.

Updesh Dosanjh, Practice Leader, Pharmacovigilance Technology Solutions, IQVIA

As Practice Leader for the Technology Solutions business unit of IQVIA, Updesh Dosanjh is responsible for developing the overarching strategy regarding Artificial Intelligence and Machine Learning as it relates to safety and pharmacovigilance. He is focused on the adoption of these innovative technologies and processes that will help optimize pharmacovigilance activities for better, faster results. Dosanjh has over 25 years of knowledge and experience in the management, development, implementation, and operation of processes and systems within the life sciences and other industries. Most recently, Dosanjh was with Foresight and joined IQVIA as a result of an acquisition. Over the course of his career, Dosanjh also worked with WCI, Logistics Consulting Partners, Amersys Systems Limited, and FJ Systems. Dosanjh holds a Bachelor's degree in Materials Science from Manchester University and a Master's degree in Advanced Manufacturing Systems and Technology from Liverpool University.

Read more from the original source:

Seizing the Opportunity to Leverage AI & ML for Clinical Research - Analytics Insight

Top 10 Ideas in Statistics That Have Powered the AI Revolution – Columbia University

Posted: at 7:52 am

If you've ever called on Siri or Alexa for help, or generated a self-portrait in the style of a Renaissance painter, you have interacted with deep learning, a form of artificial intelligence that extracts patterns from mountains of data to make predictions. Though deep learning and AI have become household terms, the breakthroughs in statistics that have fueled this revolution are less known. In a recent paper, Andrew Gelman, a statistics professor at Columbia, and Aki Vehtari, a computer science professor at Finland's Aalto University, published a list of the most important statistical ideas in the last 50 years.

Below, Gelman and Vehtari break down the list for those who may have snoozed through Statistics 101. Each idea can be viewed as a stand-in for an entire subfield, they say, with a few caveats: science is incremental; by singling out these works, they do not mean to diminish the importance of similar, related work. They have also chosen to focus on methods in statistics and machine learning, rather than equally important breakthroughs in statistical computing, and computer science and engineering, which have provided the tools and computing power for data analysis and visualization to become everyday practical tools. Finally, they have focused on methods, while recognizing that developments in theory and methods are often motivated by specific applications.

See something important that's missing? Tweet it at @columbiascience and Gelman and Vehtari will consider adding it to the list.

The 10 articles and books below all were published in the last 50 years and are listed in chronological order.

1. Hirotugu Akaike (1973). Information Theory and an Extension of the Maximum Likelihood Principle. Proceedings of the Second International Symposium on Information Theory.

This is the paper that introduced the term AIC (originally called An Information Criterion but now known as the Akaike Information Criterion) for evaluating a model's fit based on its estimated predictive accuracy. AIC was instantly recognized as a useful tool, and this paper was one of several published in the mid-1970s placing statistical inference within a predictive framework. We now recognize predictive validation as a fundamental principle in statistics and machine learning. Akaike was an applied statistician who, in the 1960s, tried to measure the roughness of airport runways, in the same way that Benoit Mandelbrot's early papers on taxonomy and Pareto distributions led to his later work on the mathematics of fractals.
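
The criterion itself is compact. For a model with k fitted parameters and maximized likelihood L̂:

```latex
\mathrm{AIC} = 2k - 2\ln\hat{L}
```

Among candidate models, the one with the lowest AIC is preferred: the first term penalizes complexity, the second rewards fit to the data.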

2. John Tukey (1977). Exploratory Data Analysis.

This book has been hugely influential and is a fun read that can be digested in one sitting. Traditionally, data visualization and exploration were considered low-grade aspects of practical statistics; the glamour was in fitting models, proving theorems, and developing the theoretical properties of statistical procedures under various mathematical assumptions or constraints. Tukey flipped this notion on its head. He wrote about statistical tools not for confirming what we already knew (or thought we knew), and not for rejecting hypotheses that we never, or should never have, believed, but for discovering new and unexpected insights from data. His work motivated advances in network analysis, software, and theoretical perspectives that integrate confirmation, criticism, and discovery.

3. Grace Wahba (1978). Improper Priors, Spline Smoothing and the Problem of Guarding Against Model Errors in Regression. Journal of the Royal Statistical Society.

Spline smoothing is an approach for fitting nonparametric curves. Another of Wahba's papers from this period is called "An automatic French curve," referring to a class of algorithms that can fit arbitrary smooth curves through data without overfitting to noise or outliers. The idea may seem obvious now, but it was a major step forward in an era when the starting points for curve fitting were polynomials, exponentials, and other fixed forms. In addition to the direct applicability of splines, this paper was important theoretically. It served as a foundation for later work in nonparametric Bayesian inference by unifying ideas of regularization of high-dimensional models.
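
The trade-off at the heart of spline smoothing can be written as a penalized least-squares problem: choose the curve

```latex
\hat{f} \;=\; \arg\min_{f} \; \sum_{i=1}^{n} \bigl(y_i - f(x_i)\bigr)^2 \;+\; \lambda \int f''(t)^2 \, dt
```

where the first term rewards fidelity to the data and the roughness penalty, scaled by λ, guards against overfitting; Wahba's later work on generalized cross-validation gave a principled way to choose λ.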

4. Bradley Efron (1979). Bootstrap Methods: Another Look at the Jackknife. Annals of Statistics.

Bootstrapping is a method for performing statistical inference without assumptions. The data pull themselves up by their bootstraps, as it were. But you can't make inference without assumptions; what made the bootstrap so useful and influential is that the assumptions came implicitly with the computational procedure: the audaciously simple idea of resampling the data. Each time you resample, you repeat the statistical procedure performed on the original data. As with many statistical methods of the past 50 years, this one became widely useful because of an explosion in computing power that allowed simulations to replace mathematical analysis.
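
A minimal sketch of the procedure in Python, estimating a 95% confidence interval for a sample mean (the data here are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=100)  # stand-in for any observed sample

# Resample with replacement many times, recomputing the statistic each
# time; the spread of the replicates approximates the sampling uncertainty.
boot_means = [rng.choice(data, size=len(data), replace=True).mean()
              for _ in range(10_000)]

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {data.mean():.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```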

5. Alan Gelfand and Adrian Smith (1990). Sampling-based Approaches to Calculating Marginal Densities. Journal of the American Statistical Association.

Another way that fast computing has revolutionized statistics and machine learning is through open-ended Bayesian models. Traditional statistical models are static: fit distribution A to data of type B. But modern statistical modeling has a more Tinkertoy quality that lets you flexibly solve problems as they arise by calling on libraries of distributions and transformations. We just need computational tools to fit these snapped-together models. In their influential paper, Gelfand and Smith did not develop any new tools; they demonstrated how Gibbs sampling could be used to fit a large class of statistical models. In recent decades, the Gibbs sampler has been replaced by Hamiltonian Monte Carlo, particle filtering, variational Bayes, and more elaborate algorithms, but the general principle of modular model-building has remained.
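
To make the Gibbs idea concrete, here is a toy sampler for a bivariate normal with correlation ρ, where each full conditional is a univariate normal. This is a deliberately simple case, not the large hierarchical models Gelfand and Smith had in mind:

```python
import numpy as np

rho = 0.8
rng = np.random.default_rng(0)
x = y = 0.0
samples = []
for i in range(20_000):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw x given y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw y given x
    if i >= 1_000:                                # discard burn-in draws
        samples.append((x, y))

samples = np.array(samples)
print("empirical correlation:", np.corrcoef(samples.T)[0, 1])  # close to 0.8
```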

6. Guido Imbens and Joshua Angrist (1994). Identification and Estimation of Local Average Treatment Effects. Econometrica.

Causal inference is central to any problem in which the question isn't just a description (How have things been?) or prediction (What will happen next?), but a counterfactual (If we do X, what would happen to Y?). Causal methods have evolved with the rest of statistics and machine learning through exploration, modeling, and computation. But causal reasoning has the added challenge of asking about data that are impossible to measure (you can't both do X and not-X to the same person). As a result, a key idea in this field is identifying what questions can be reliably answered from a given experiment. Imbens and Angrist are economists who wrote an influential paper on what can be estimated when causal effects vary, and their ideas form the basis for much of the later work on this topic.

7. Robert Tibshirani (1996). Regression Shrinkage and Selection Via the Lasso. Journal of the Royal Statistical Society.

In regression, or predicting an outcome variable from a set of inputs or features, the challenge lies in including lots of inputs along with their interactions; the resulting estimation problem becomes statistically unstable because of the many different ways of combining these inputs to get reasonable predictions. Classical least squares or maximum likelihood estimates will be noisy and might not perform well on future data, and so various methods have been developed to constrain or regularize the fit to gain stability. In this paper, Tibshirani introduced lasso, a computationally efficient and now widely used approach to regularization, which has become a template for data-based regularization in more complicated models.
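
In its penalized form, the lasso estimate solves

```latex
\hat{\beta} \;=\; \arg\min_{\beta} \; \sum_{i=1}^{n} \bigl(y_i - x_i^{\top}\beta\bigr)^2 \;+\; \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert
```

(Tibshirani's original statement constrains the sum of absolute coefficients instead, which is equivalent.) The L1 penalty both shrinks coefficients and sets some exactly to zero, so estimation and variable selection happen at once.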

8. Leland Wilkinson (1999). The Grammar of Graphics.

In this book, Wilkinson, a statistician who's worked on several influential commercial software projects including SPSS and Tableau, lays out a framework for statistical graphics that goes beyond the usual focus on pie charts versus histograms, how to draw a scatterplot, and data ink and chartjunk, to abstractly explore how data and visualizations relate. This work has influenced statistics through many pathways, most notably through ggplot2 and the tidyverse family of packages in the computing language R. It's an important step toward integrating exploratory data and model analysis into the data science workflow.

9. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio (2014). Generative Adversarial Networks. Proceedings of the International Conference on Neural Information Processing Systems.

One of machine learning's stunning achievements in recent years is in real-time decision making through prediction and inference feedbacks. Famous examples include self-driving cars and DeepMind's AlphaGo, which trained itself to become the best Go player on Earth. Generative adversarial networks, or GANs, are a conceptual advance that allow reinforcement learning problems to be solved automatically. They mark a step toward the longstanding goal of artificial general intelligence while also harnessing the power of parallel processing so that a program can train itself by playing millions of games against itself. At a conceptual level, GANs link prediction with generative models.
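
The paper's objective is a two-player minimax game between a generator G and a discriminator D:

```latex
\min_{G}\,\max_{D}\;\; \mathbb{E}_{x \sim p_{\text{data}}}\bigl[\log D(x)\bigr] \;+\; \mathbb{E}_{z \sim p_{z}}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

D is trained to tell real samples from generated ones while G is trained to fool D; at the game's equilibrium, G's samples are indistinguishable from the data.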

10. Yoshua Bengio, Yann LeCun, and Geoffrey Hinton (2015). Deep Learning. Nature.

Deep learning is a class of artificial neural network models that can be used to make flexible nonlinear predictions using a large number of features. Its building blocks (logistic regression, multilevel structure, and Bayesian inference) are hardly new. What makes this line of research so influential is the recognition that these models can be tuned to solve a variety of prediction problems, from consumer behavior to image analysis. As with other developments in statistics and machine learning, the tuning process was made possible only with the advent of fast parallel computing and statistical algorithms to harness this power to fit large models in real time. Conceptually, we're still catching up with the power of these methods, which is why there's so much interest in interpretable machine learning.

More here:

Top 10 Ideas in Statistics That Have Powered the AI Revolution - Columbia University

AI Race: Why India is Lagging Behind the US and China in 2021? – Analytics Insight

Posted: at 7:52 am

Being the second most populated country in the world, India is still lagging behind the US and China in the AI race, even in 2021. The US has held the number one position in the AI race for a long time, while China is motivated to take it over. These two countries are currently the leaders in Artificial Intelligence, with proper infrastructure for R&D. It is a well-known fact that India is a developing country while the US and China are developed countries. But there are other reasons for India lagging behind these two countries in the AI race. This article explores a few of them to give readers a better understanding.

India has a thriving domestic market with skilled labor in 2021, but it lacks reputed technology companies willing to invest time in R&D to innovate machines and models with cutting-edge technologies. The US and China have Google and Microsoft, and Baidu and Alibaba, respectively, creating new innovations for the welfare of society. Google, IBM, Microsoft, and many other reputed tech companies have extended their market in India and are recruiting Indian employees for better productivity. But India is lagging behind in the AI race because the country does not have crazy and obsessed entrepreneurs like Elon Musk and Jeff Bezos.

India is one of the most educated countries in the world, with numerous educational institutions. Gradually, the education sector is integrating technical courses and curricula in Artificial Intelligence, Machine Learning, and Robotics in the form of Mechatronics. Students mostly know about the five traditional engineering courses, but there is a wide array of engineering in these disruptive technological fields. They are slowly taking interest in these fields due to more exposure to globalization and digitization. Only a handful of Ph.D. scholars or engineers are highly interested in developing new machines with these cutting-edge technologies. Thus, it will take some time for India to educate students and inspire them to enter the field of Artificial Intelligence and innovate new AI models efficiently and effectively.

Another reason for India lagging behind the US and China in the AI race is its lower output of published research papers. China and the US have each published more than 15,000 AI research papers in recent years. It is observed that the average US research quality is better than in China or the EU. The US is becoming the world leader in designing AI chips for smart systems. India has integrated Artificial Intelligence and machine learning only in the field of computer science. India needs to boost research tax incentives as well as expand the array of public research institutions working on AI research papers. This will help in the creation of better, more efficient machine learning algorithms to take a lead in the AI race.

China has strict control over its population index, with specific rules and regulations for citizens, to manage the data explosion efficiently. The country receives sufficient and appropriate volumes of real-time data to train Artificial Intelligence models. It has made it a strategic priority to drive Chinese tech companies in creating a plethora of potential AI applications with this data. Meanwhile, India does not have such control over its population, which includes many undocumented citizens. The rural sector still lacks proper internet connection, leading to a digital divide in the country. It is more difficult for India to receive appropriate data from both urban and rural sectors.

The Indian government needs to articulate an ambitious mission on Artificial Intelligence for more innovations. The government is required to understand that India needs Artificial Intelligence to drive success and revenue for the nearby future. There are multiple sectors that need AI to boost productivity. There are certain progressive start-ups growing up in the domestic market to help the industries.

But it is better late than never. India should realize the power of useful knowledge from the enormous sets of data available due to digitization. This knowledge will help the country to solve its own problems and achieve five-year plans efficiently and effectively. There should be a key focus on developing advanced and modern data infrastructure to be in the AI race with the US and China.

Read the rest here:

AI Race: Why India is Lagging Behind the US and China in 2021? - Analytics Insight

The new world of work: You plus AI – VentureBeat

Posted: at 7:52 am

Emerging technologies meet both advocates and resistance as users weigh the potential benefits against the potential risks. To successfully implement new technologies, we must start small, in a few simplified forms, fitting a small number of use cases to establish proof of concept before scaling usage. Artificial intelligence is no exception, but with the added challenge of intruding into the cognitive sphere, which has always been the prerogative of humans. Only a small circle of specialists understands how this technology works; therefore, more education for the broader public is needed as AI becomes more and more integrated into society.

I recently connected with Josh Feast, CEO and cofounder of Boston-based AI company Cogito, to discuss the role of AI in the new era of work. Here's a look into our conversation.

Igor Ikonnikov: Artificial intelligence can be an incredibly powerful tool, as you know from your experience founding and growing an AI-based company. But there are plenty of people who have expressed concerns around its impact on the workforce and whether this new technology will replace them one day. So lets cover that topic first: Do you have any concerns about AI coming for jobs?

Josh Feast: You're right, this question has been asked many times in recent years. I believe it is time to focus on how we can shape the AI and human relationship to ensure we're happy with the outcome, rather than being bystanders to an uncertain future. What I mean is, we're living in a world where humans and machines are and will continue to work alongside each other. So, instead of fighting technological progress, we must embrace and harness it. Our emotionality as humans will always ensure we remain key assets in the workplace, even as companies deploy AI technology to revolutionize the modern enterprise. The idea is not to replace humans but to augment or simply help them with technology.

David De Cremer, Provost's Chair and Professor at NUS Business School, and Garry Kasparov, chairman of the Human Rights Foundation and founder of the Renew Democracy Initiative, agree. They previously explained, "The question of whether AI will replace human workers assumes that AI and humans have the same qualities and abilities but, in reality, they don't. AI-based machines are fast, more accurate, and consistently rational, but they aren't intuitive, emotional, or culturally sensitive. It is in combining the strengths of AI and humans that we can be even more effective."

Ikonnikov: The last 15 months have been disruptive in many ways including the steep increase in both the value of in-person interactions and the need for higher degree of automation. Is this the opportunity to combine the strengths?

Feast: More than a year in, with remote work now a norm for millions of people, almost everything we do is digitized and mediated by technology. We've seen improvements in efficiency and productivity, but also a growing need to fill the empathy deficit and increase energy and positive interactions. In other words, AI is already working in symbiosis with humans, so it's up to us to define what we want that partnership to look like going forward. This consideration requires an open mind, active optimism, and empathy to see the full potential of the human-AI relationship. I believe this is where human-aware technology can play a big role in shaping the future.

Ikonnikov: Can you elaborate on what human-aware technology is?

Feast: Human-aware technology has the ability to sense what humans need in the moment to better augment our innate skills, including the ability to respond to and support our emotional and social intelligence. It opens new doors for technological augmentation in new areas. An example of this today is smart prosthetics, which lean on human-machine interfaces that help prosthetic limbs truly feel like an extension of the body, like the robotic arm being developed at Johns Hopkins Applied Physics Laboratory. Complete with humanlike reflexes and sensations, the robotic arm contains sensors that give feedback on temperature and vibration, as well as collect the data to mimic what human limbs are able to detect. As a result, it responds much like a normal arm.

The same concept applies to humans working at scale in an enterprise, where a significant part of our jobs involves collaborating with other people. Sometimes, in these interactions, we miss cues, get triggered, or fail to see another person's perspective. Technology can support us here as an objective recognizer of patterns and cues.

Ikonnikov: As we continue to leverage this human-aware AI, you've said we must find a balance between machine intelligence and human intelligence. How does that translate to the workplace?

Feast: Finding that balance and optimizing for it to successfully address workplace challenges requires several levers to be pulled.

In order to empower AI to help us, we must actively and thoughtfully shape the AI; the more we do so, the more helpful it will be to individuals and organizations. In fact, a team from Microsoft's Human Understanding and Empathy group believes that, with the right training, AI can better understand its users, more effectively communicate with them, and improve their interactions with technology. We can train the technology through similar processes that we train people with, rewarding it on the achievement of external goals like completing a task on time, but also on the achievement of our internal goals like maximizing our satisfaction, otherwise known as extrinsic and intrinsic rewards. In giving AI data about what works for us intrinsically, we increase its ability to support us.

Ikonnikov: As the workplace evolves and AI becomes more ingrained in our daily workflows, what would the outcome look like?

Feast: Increased success at work will come when organizations leverage humans, paired with AI, to drive an enhanced experience in the moments that matter most. It is those in-the-moment interactions where the new wave of opportunity arises.

For example, in an in-person conversation, both participants initiate, detect, interpret, and react to each other's social signals in what some may call a conversational dance. This past year, we've all had to communicate over video and voice calls, challenging the nature of that conversational dance. In the absence of other methods of communication such as eye contact, body language, and shared in-person experiences, voice (and now video) becomes the only way a team member or manager can display emotion in a conversation. Whether it's a conversation between an employee and customer or employee and manager, these are make-or-break moments for a business. Human-aware AI that is trained by humans in the same way we train ourselves can augment our abilities in these scenarios by supporting us when it matters and driving better outcomes.

Ikonnikov: There has been a big shift in AI conversations recently as it relates to regulations. The European Union, for example, unveiled a strict proposal governing the use of AI, a first-of-its-kind policy. Do you think AI needs to be regulated better?

Feast: Collectively, we have an obligation to create technology that is effective and fair for everyone; we're not here to build whatever can be built without limits or constraints when it comes to people's fundamental rights. This means we have a responsibility to regulate AI.

The first step to successful AI regulation is data regulation. Data is a pivotal resource that defines the creation and deployment of AI. We're already seeing unintended consequences of unregulated AI. For example, there isn't a level playing field across organizations when it comes to AI deployment because there is a stark difference company-to-company based on the amount and quality of data they have. This imbalance will impact the development of technology, the economy, and more. We, as leaders and brands, must actively work with regulatory bodies to create common parameters to level the playing field and increase trust in AI.

Ikonnikov: How can creators of AI technology earn that trust?

Feast: We have to be focused on implementing ethical AI by delivering transparency into the technology and communicating a clear benefit to all users. This extends to supplying education and upskilling opportunities. We also have to actively mitigate the underlying biases of the models and systems deployed. AI leaders and creators must do extensive research on de-biasing approaches for examining gender and racial bias, for example. This is an important step to take on the path to increasing trust in AI and responsibly implementing the technology across organizations and populations.

We also must ensure there is opportunity given to creators of AI who are diverse themselves, with diverse demographics, immigration statuses, and backgrounds. It is the creators who define what problems we choose to address with AI, and more diverse creators will result in AI addressing a broader range of problems.

Without these parameters, without trust, we can't fully reap all the benefits of AI. On the flip side, if we get this right and, as creators of AI and leaders of related organizations, do the work to earn trust and thoughtfully shape AI, the result will be responsible AI that truly works in symbiosis with us, more effectively supporting us as we forge the future of work.

Go here to read the rest:

The new world of work: You plus AI - VentureBeat
