
Category Archives: Ai

Even an AI thinks using AI to write your homework is a bad idea – PC Gamer

Posted: September 29, 2022 at 12:39 am

Kids on Reddit have been telling tales of using OpenAI's Playground to get straight A grades in their homework. It's no secret, but when someone asked the same AI its thoughts on how it was used in this schoolyard cheating scheme, it actually made some pretty good arguments against its own use.

This tweet from MIT's Computer Science & Artificial Intelligence Laboratory (CSAIL) shows OpenAI's answer to the following prompt: "Explain the moral and social issues with using AI to do your homework."

Spoiler: it's one of resounding negativity.

"They may not be learning the material as well as they could be," the AI writes. "This could lead to problems down the road when they are expected to know the material for exams or real-world applications.

"Additionally, using AI to do homework could lead to cheating."

No sh*t. Unless you've been assigned a project specifically about using AI for school, it's definitely cheating. Maybe not if you're only using it to help generate ideas, rather than writing entire essays.

I did find a couple of references to using AI for homework across Reddit, along with a few questions about how helpful it might be from prospective cheaters, but one Reddit post sticks out as the post MIT was potentially referencing.

Urdadgirl69's post, headlined "Artificial Intelligence allows me to get straight A's," talks about using the tool to write essays, answer questions about movies and books for school projects, and even make "$100 profit by 'doing' homework for other classmates," they say. "Now I am looked at as a genius."

OpenAI's Playground tool is an AI training platform that allows users to create and train AI models. It's often used by researchers and developers to experiment with new AI models and applications. Now that the kids have cottoned on, they're using it to turn a profit among classmates, it seems.
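
For context, a Playground session like the ones described here boils down to a single completion request. The sketch below shows the shape of such a request using the prompt CSAIL quoted; the sampling parameters and the commented-out client call are assumptions for illustration, not details from the article.

```python
# Sketch of a completion request to the Playground's text-davinci-002 model.
# The prompt is the one quoted in the article; max_tokens and temperature
# are assumed values, not ones reported by CSAIL.
payload = {
    "model": "text-davinci-002",
    "prompt": "Explain the moral and social issues with using AI to do your homework.",
    "max_tokens": 256,    # assumption: room for a short essay-style answer
    "temperature": 0.7,   # assumption: a typical Playground default
}

# With the legacy openai Python client, this would be sent roughly as:
#   import openai
#   response = openai.Completion.create(**payload)
#   print(response["choices"][0]["text"])
```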

It's a diamond hustle for sure, but as the AI explains, "It could create an unfair advantage over other students who are doing their homework on their own." Imagine half the kids in class suddenly shifting from Ds to As in a single week. A bit sus if you ask me, and you'd hope the teacher would realise something was up.

Here's what the Playground text-davinci-002 model AI gave us, as part of a request for an article about kids using AI to do their homework:

"The implications of using AI to do your homework are far-reaching. For one, it could lead to a future where AI is used to do all kinds of things that humans currently do, including things like homework.


"This could lead to a future where humans are no longer needed to do things like homework, and instead AI does it all. This could have a major impact on society, as it would likely lead to large-scale unemployment and a decrease in the quality of life for many people. It would also have a major impact on education, as it would likely lead to a decrease in the need for human teachers."

Then it took a surprisingly positive turn, and it wasn't as doom and gloom as when MIT asked it a similar question.

"It could free up a lot of time for people who currently spend a lot of time doing homework. It could also lead to better grades for people who use AI to help with their homework."

Although I'm sure teachers will have something to say about it, the AI isn't wrong. Still, it's imperative for us to have these conversations right now, as there's a very real fear arising about the idea that humans could end up supplanted by AI. Artificial intelligence has already been giving people unfair advantages in art competitions, and the moral implications are far-reaching.

But that's a discussion for another time.

Go here to read the rest:

Even an AI thinks using AI to write your homework is a bad idea - PC Gamer

Posted in Ai | Comments Off on Even an AI thinks using AI to write your homework is a bad idea – PC Gamer

Experts warn AI assistants are hurting the social development of children – Digital Trends

Posted: at 12:39 am

The likes of Google Assistant and Alexa have been at the receiving end of privacy-related concerns for a while now, yet they continue to make inroads into millions of homes. But it appears that they might also have a detrimental impact on children's psycho-social development and acquisition of core skills.

According to an analysis by experts from the University of Cambridge's School of Clinical Medicine, interaction with AI assistants affects children in three ways. The first is the hindrance posed to learning opportunities.

AI assistants made by Amazon, Apple, and Google continue to improve at a scary pace, and with each passing year their ability to pull up relevant answers from the web only grows. With such easy answers at their disposal, experts believe the traditional process of hunting down and absorbing knowledge has taken a backseat.

The real issue here is that when children pose a query to an adult, be it a parent or a teacher, they are often asked about the context and reasoning behind their inquiry. Plus, when a person searches for an answer themselves, they develop a critical approach and the logical reasoning needed to parse the right kind of information, and the scope of their imagination also widens.

"Children have poor understanding of how information is retrieved from the internet, where the internet is stored, and the limitations of the internet," said the report. With such faith placed in the internet, it becomes a lot easier for young minds to absorb false information.

The cesspool of misinformation plaguing the internet needs no introduction, and platforms continue to struggle to contain it, but AI assistants are making matters worse. A Stanford research project in 2021 found that the likes of Alexa, Google Assistant, and Siri each provide different sets of answers and search results for the same health queries. Adults can be trusted to make educated decisions in such a scenario, but children are at extremely high risk here.

Next in line is stunted social growth. Human-to-human conversations help refine social etiquette and allow children to learn how to behave the right way in the world out there. Chatting with a digital assistant doesn't offer that privilege.

In a nutshell, AI assistants offer a poor path to learning social interactions, despite advances like natural language processing and Google's LaMDA innovation. Google Assistant can talk to you naturally, just like another person, but it can't teach basic manners to children and train them on how to conduct themselves like decent human beings.

For example, there is no incentive to learn polite terms like "please" when talking to a virtual assistant living inside a puck-sized speaker, nor is any constructive feedback possible. In the pandemic-driven times that we live in, the scope for real human interaction has further shrunk, which poses an even bigger risk to the social development of young minds.

Finally, there is the problem of inappropriate responses. Not all guardians have the digital skills to set strict boundaries with parental software controls. This risks exposing kids to content that is not age-appropriate and could lead them straight to hazardous information. Per a BBC report from 2021, Amazon's Alexa once put a 10-year-old kid's life at risk by challenging them to touch a live circuit part with a metallic coin.


How robots and AI are helping develop better batteries – MIT Technology Review

Posted: at 12:39 am

Historically, researchers in materials discovery have devised and tested options through some mix of hunches, informed speculation, and trial and error. But it's a difficult and time-consuming process simply given the vast array of possible substances and combinations, which can send researchers down numerous false paths.

In the case of electrolyte ingredients, you can mix and match them in billions of ways, says Venkat Viswanathan, an associate professor at Carnegie Mellon, a co-author of the Nature Communications paper, and a cofounder and chief scientist at Aionics. He collaborated with Jay Whitacre, director of the university's Wilton E. Scott Institute for Energy Innovation and the co-principal investigator on the project, along with other Carnegie researchers to explore how robotics and machine learning could help.

The promise of a system like Clio and Dragonfly is that it can rapidly work through a wider array of possibilities than human researchers can, and apply what it learns in a systematic way.

Dragonfly isn't equipped with information about chemistry or batteries, so it doesn't bring much bias to its suggestions beyond the fact that the researchers select the first mixture, Viswanathan says. From there, it runs through a wide variety of combinations, from mild refinements of the original to completely out-of-the-box suggestions, homing in on a mix of ingredients that delivers better and better results against its programmed goal.
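
The search loop described above can be sketched as a simple black-box optimizer over ingredient fractions. This is an illustrative stand-in, not Dragonfly's actual algorithm: the toy `evaluate` function and all numbers are invented, standing in for a physical battery test.

```python
import random

def evaluate(mix):
    # Stand-in objective: in the real experiments this is a physical battery
    # test (e.g. measured fast-charging performance). This toy version peaks
    # at an invented 40/40/20 blend of three ingredients.
    target = (0.4, 0.4, 0.2)
    return -sum((m - t) ** 2 for m, t in zip(mix, target))

def random_mixture(rng):
    # Draw three non-negative ingredient fractions summing to 1.
    parts = [rng.random() for _ in range(3)]
    total = sum(parts)
    return tuple(p / total for p in parts)

def optimize(n_rounds=200, seed=0):
    rng = random.Random(seed)
    best = random_mixture(rng)       # the researchers choose the first mixture
    best_score = evaluate(best)
    for _ in range(n_rounds):
        if rng.random() < 0.5:
            # Mild refinement of the incumbent mixture...
            raw = [max(b + rng.gauss(0, 0.05), 0.0) for b in best]
            total = sum(raw) or 1.0
            candidate = tuple(r / total for r in raw)
        else:
            # ...or a completely out-of-the-box suggestion.
            candidate = random_mixture(rng)
        score = evaluate(candidate)
        if score > best_score:       # home in on better-scoring blends
            best, best_score = candidate, score
    return best, best_score
```

A real closed loop would replace `evaluate` with Clio's automated mixing-and-testing hardware, and Dragonfly reportedly uses more principled model-based search than this random-restart hill climb.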

In the case of battery experiments, the Carnegie Mellon team was looking for an electrolyte that would speed up the recharging time for batteries. The electrolyte solution helps shuttle ions (atoms with a net charge due to the loss or gain of an electron) between the two electrodes in a battery. During discharge, lithium ions are created at the negative electrode, known as the anode, and flow through the solution toward the positive electrode, the cathode, where they gain electrons. During charging, that process is reversed.


Where Will C3.ai Stock Be in 3 Years? – The Motley Fool

Posted: at 12:39 am

C3.ai (AI) was one of the hottest tech debuts of 2020. But today, the enterprise artificial intelligence (AI) software company's stock trades nearly 70% below its initial public offering (IPO) price. C3.ai lost its luster as investors fretted over its slowing growth, ongoing losses, and high valuations. Rising interest rates exacerbated that pain. But could this out-of-favor stock recover over the next three years?

C3.ai only expects its revenue to rise 1% to 7% in fiscal 2023, which ends next April. That would represent a severe slowdown from its 38% growth in fiscal 2022 and 17% growth in fiscal 2021.

The company mainly attributes that slowdown to macroeconomic headwinds. That's because it provides most of its AI algorithms, which can be integrated into an organization's existing software infrastructure or sold as stand-alone services, to large customers in the macro-sensitive energy and industrial sectors.


However, C3.ai also generates a large portion of its revenue from a joint venture (JV) with energy giant Baker Hughes. Approximately a third of C3.ai's revenue through fiscal 2025 will still likely come from Baker Hughes, based on Wall Street's top-line expectations and the current terms of the joint venture. This deal, which was renegotiated to be extended for an extra year last October, will expire in fiscal 2025.

Three troubling hints indicate this partnership could be in trouble: Baker Hughes already renegotiated lower revenue commitments to extend the agreement last year, it divested its own equity stake in C3.ai, and it invested in C3.ai's competitor Augury instead. If Baker Hughes walks away from the JV, C3.ai's revenue will plummet.

To diversify away from Baker Hughes and other large customers, C3.ai is aggressively pursuing smaller contracts from smaller customers. It also recently announced it would pivot away from subscriptions toward a usage-based model that only charges customers whenever they access its services.

However, that strategic shift raised eyebrows because enterprise software companies generally prefer to pursue larger customers, which generate higher revenue, and lock them in with sticky subscriptions. C3.ai has also gone through three CFOs since its IPO, and each CFO has slightly modified its customer counting methods and other key growth metrics.

C3.ai's slowing growth, customer concentration, management issues, mixed strategies, and ongoing losses all convinced investors that its stock didn't deserve a premium valuation. At its peak in late 2020, C3.ai was valued at $17 billion, or 93 times the sales it would actually generate in fiscal 2021. Today, it's worth just $1.4 billion, or five times this year's sales.

During C3.ai's latest conference call in late August, CEO Tom Siebel warned that its customers "appear to be expecting a recession" as they reined in their orders. Siebel also warned that the potential downturn "could be significant" and throttle its near-term growth.

Siebel believes that after rising just 1% to 7% in fiscal 2023, C3.ai's revenue will "revert to historical annual growth rates" of more than 30% in fiscal 2024 "and beyond." CFO Juho Parkkinen, who took the position in February, claims that its shift toward smaller usage-based contracts will stabilize its long-term growth. A recent expansion of its partnership with Alphabet's Google Cloud, which bundles C3.ai's AI services with the tech giant's cloud services, could also boost its sales.

Yet analysts aren't as optimistic. They expect C3.ai's revenue to rise 3% in fiscal 2023, 21% in fiscal 2024, and 19% in fiscal 2025. Those growth rates are still robust relative to its current price-to-sales ratio, but its sales could still drop off a cliff in fiscal 2026 if Baker Hughes ends its closely watched partnership.

Assuming that C3.ai matches analysts' expectations for $376 million in revenue in fiscal 2025, and it's still trading at about five times sales by then, it could be worth about $1.9 billion in three years -- which would represent a gain of nearly 40% from its current price but remain well below its IPO valuation of about $4 billion.
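
The arithmetic behind that estimate is simple multiplication. The sketch below uses the article's rounded figures, which is why the implied gain lands in the mid-30s percent, a bit under the "nearly 40%" presumably computed from unrounded price data.

```python
# Back-of-the-envelope valuation math using the article's rounded figures.
revenue_fy2025 = 376e6    # analysts' fiscal-2025 revenue estimate
ps_ratio = 5              # today's price-to-sales multiple, assumed to hold
current_cap = 1.4e9       # C3.ai's current market capitalization

implied_cap = revenue_fy2025 * ps_ratio       # ~$1.9 billion
implied_gain = implied_cap / current_cap - 1  # roughly a third of upside
```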

C3.ai's stock could rise even higher if investors are willing to pay a higher premium again, but I don't see that happening until it renews its deal with Baker Hughes, significantly reduces the energy giant's weight on its top line, stops switching CFOs and reporting methods, and proves that its pursuit of smaller usage-based customers actually makes sense.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Leo Sun has positions in Alphabet (A shares) and C3.ai, Inc. The Motley Fool has positions in and recommends Alphabet (A shares) and Alphabet (C shares). The Motley Fool recommends C3.ai, Inc. The Motley Fool has a disclosure policy.


Kumo aims to bring predictive AI to the enterprise with $18M in fresh capital – TechCrunch

Posted: at 12:39 am

Kumo, a startup offering an AI-powered platform to tackle predictive problems in business, today announced that it raised $18 million in a Series B round led by Sequoia, with participation from A Capital, SV Angel and several angel investors. Co-founder and CEO Vanja Josifovski says the new funding will be put toward Kumo's hiring efforts and R&D across the startup's platform and services, which include data prep, data analytics and model management.

Kumo's platform works specifically with graph neural networks, a class of AI system for processing data that can be represented as a series of graphs. Graphs in this context refer to mathematical constructs made up of vertices (also called nodes) that are connected by edges (or lines). Graphs can be used to model relations and processes in social, IT and even biological systems. For example, the link structure of a website can be represented by a graph where the vertices stand in for webpages and the edges represent links from one page to another.
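
The website example above maps directly onto the standard adjacency-list representation of a graph. A minimal sketch, with invented page names:

```python
# Pages are vertices; each directed edge is a link from one page to another.
links = {
    "home":   ["about", "blog"],
    "about":  ["home"],
    "blog":   ["home", "post-1"],
    "post-1": ["blog"],
}

vertices = set(links)
edges = [(src, dst) for src, dsts in links.items() for dst in dsts]
```

A graph neural network would attach a feature vector to each vertex and learn by passing messages along these edges.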

Graph neural networks have powerful predictive capabilities. At Pinterest and LinkedIn, they're used to recommend posts, people and more to hundreds of millions of active users. But as Josifovski notes, they're computationally expensive to run, making them cost-prohibitive for most companies.

"Many enterprises today attempting to experiment with graph neural networks have been unable to scale beyond training data sets that fit in a single accelerator (memory in a single GPU), dramatically limiting their ability to take advantage of these emerging algorithmic approaches," he told TechCrunch in an email interview. "Through fundamental infrastructural and algorithmic advancements, we have been able to scale to datasets in the many terabytes, allowing graph neural networks to be applied to customers with larger and more complicated enterprise graphs, such as social networks and multi-sided marketplaces."

Using Kumo, customers can connect data sources to create a graph neural network that can then be queried in structured query language (SQL). Under the hood, the platform automatically trains the neural network system, evaluating it for accuracy and readying it for deployment to production.

Josifovski says that Kumo can be used for applications like new customer acquisition, customer loyalty and retention, personalization and next best action, abuse detection and financial crime detection. Previously the CTO of Pinterest and Airbnb Homes, Josifovski worked with Kumo's other co-founders, former Pinterest chief scientist Jure Leskovec and Hema Raghavan, to develop the graph technology through Stanford and Dortmund University research labs.

"Companies spend millions of dollars storing terabytes of data but are able to effectively leverage only a fraction of it to generate the predictions they need to power forward-looking business decisions. The reason for this is major data science capacity gaps, as well as the massive time and effort required to get predictions successfully into production," Josifovski said. "We enable companies to move to a paradigm in which predictive analytics goes from being a scarce resource used sparingly into one in which it is as easy as writing a SQL query, thus enabling predictions to become ubiquitous, far more broadly adopted in use cases across the enterprise in a much shorter timeframe."

Kumo remains in the pilot stage, but Josifovski says that it has more than a dozen early adopters in the enterprise. To date, the startup has raised $37 million in capital.


Ships are turning whales into ocean roadkill. This AI system is trying to stop it – The Guardian US

Posted: at 12:39 am

Fran was a celebrity whale: the most photographed humpback in the San Francisco Bay, with 277 recorded sightings since 2005. Last month, she was hit by a ship and killed.

Her death marked a grim milestone: Fran was the fifth whale to be killed by a ship strike in the area this year, according to the Marine Mammal Center. Collisions with ships are one of the leading causes of death for endangered whales, who breed, eat and travel in deep channels in the same busy waters that cargo ships frequent.

Whales that spend their lives near the surface, such as humpbacks and right whales, are especially at risk. One 2019 study likened their plight to that of land animals forced to criss-cross the highways that cut through their habitats. Whales, the researchers say, are becoming ocean roadkill.

The Whale Safe project, which started in 2020 and is funded by the tech billionaire and Salesforce founder Marc Benioff, hopes to overcome that challenge using artificial intelligence. It provides close to real-time data on how many whales are present in the area, and sends out alerts to shipping companies to slow their boats in the presence of the whales.

"This is where tech meets Mother Nature for the benefit of marine life," said Jeff Boehm, chief external relations officer of the Marine Mammal Center, in a news release last week. "Whales and ships must coexist in an increasingly busy ocean."

The Whale Safe system works by using buoys fitted with microphones to hear whales, then layers artificial intelligence and models to deliver a whale presence rating ranging from low to high. It will also create report cards for shipping companies, based on their voluntary speed reductions in areas of whale activity. Slowing down is the number one thing ships can do to avoid lethal collisions, the group says.
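
As a rough illustration of the rating step, detection counts could be bucketed into the low-to-high scale the article describes. The thresholds below are invented for illustration; Whale Safe's actual scoring combines acoustic detections with sightings and model forecasts, and its formula is not spelled out in the article.

```python
def presence_rating(detections_per_day: int) -> str:
    # Map a daily count of acoustic whale detections to a coarse rating.
    # Thresholds are invented, not Whale Safe's real ones.
    if detections_per_day == 0:
        return "low"
    if detections_per_day < 5:
        return "medium"
    return "high"
```

A "medium" or "high" rating is what would trigger the slow-down alerts sent to shipping companies.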

The system has been in use around Santa Barbara, which is home to one of the shipping channels that services the biggest ports on the west coast, and is now expanding northward, into the San Francisco Bay area, also a busy port area for international cargo ships. In the first full year of the system operating near Santa Barbara, there were no recorded whale-ship interactions in the area, the project says.

Marine biologists say the project is a good step, but not a silver bullet in addressing the core issue of whales and ships. John Calambokidis, a senior research biologist and a founder of the Cascadia Research Collective, says he welcomes the Whale Safe program because it provides additional attention to this important threat to whales. The system is exciting in that it adds a real-time component to advance detection capabilities, he says.

But he doesn't think it will represent any kind of solution to the problem until other measures, such as mandatory speed restrictions for ships and moving shipping lanes out of whale routes, are taken.

Calambokidis says that while the system can sense the presence of whales, it can't give details on how far away they are, which direction they're traveling, or how many of them are present. Calls from blue whales travel tens of miles, and males make calls more often when they are traveling. Some whales don't make much noise at all, which would make sensing them difficult. "The lack of sound doesn't necessarily mean that whales aren't present," he says. "It requires interpretation of the acoustics."

In addition, the models that the artificial intelligence is trained on, models that Calambokidis has helped to create over decades of research, aren't very effective at predicting whale occurrence at the scale of shipping lanes.

Between 1988 and 2012, there were at least 100 documented large-whale ship strikes along the California coast. But that probably represents only a small proportion of deaths, because most bodies sink to the bottom, and the true number of deaths from ship strikes may be 10 times higher. Blue whales, in particular, have not experienced a population bump since the end of whaling, and ship collisions could be a significant factor stopping their recovery.

Cotton Rockwood, a senior marine ecologist at Point Blue Conservation Science, agrees that it's a good piece of the puzzle for addressing the issue, but it won't solve the problem alone. "We've often heard from captains that, yes, they get these notifications that there are higher than average whale densities present, but they don't necessarily see those whales at the surface, so they don't necessarily feel like they have to slow down."

Although more listening stations would make it easier to triangulate the location of whales, that doesn't account for the quiet moments. "You're only listening when they call, which isn't all the time."

Some projects to avoid whale-ship collisions in the Pacific north-west have tested infrared cameras, which work in some cases but are very expensive, making them a tricky solution. Another technological fix could be sonic alarms that would shriek out warnings to help keep whales from getting hit. "But again it comes with costs," says Rockwood. "Unfortunately, it means you're putting more sound in the ocean, which is a pollutant for the whales," he says, adding that whales didn't respond to it in tests.

Rockwood says that while ship collisions are a visible problem along coastlines, because whale carcasses wash up on beaches, it's a problem everywhere that ships travel, not just near the shore. "The more people are aware, and the more that the issue gets out there, the more likely it is that things are going to change," he says. "There are known solutions that do help."


6 tactics to make artificial intelligence work on the frontlines – STAT

Posted: September 15, 2022 at 10:06 pm

Artificial intelligence is a transformative tool in the workplace, except when it isn't.

For top managers, state-of-the-art AI tools are a no-brainer: in theory, they increase revenues, decrease costs, and improve the quality of products and services. But in the wild, it's often just the opposite for the frontline employees who actually need to integrate these tools into their daily work. Not only can AI tools yield few benefits, but they can also introduce additional work and decrease autonomy.

Our research on the introduction of 15 AI clinical decision support tools over the past five years at Duke Health has shown that the key to successfully integrating them is recognizing that increasing the value for frontline employees is as important as making sure the tools work in the first place. The tactics we identified are useful not only in biopharma, medicine, and health care, but across a range of other industries as well.


Here are six tactics for making artificial intelligence-based tools work on industry frontlines.

AI project leaders need to increase benefits for the frontline employees who will be the actual end users of a new tool, though this is often not the group that initially approaches them to build it.


Cardiologists in Duke's intensive care unit asked AI project team leaders to build a tool to identify heart attack patients who did not need ICU care. Cardiologists said the tool would allow frontline emergency physicians to more easily identify these patients and triage them to noncritical care, increasing the quality of care, lowering costs, and preventing unnecessary overcrowding in the ICU.

The team developed a highly accurate tool that helped ER doctors identify low-risk patients. But within weeks of launching the tool, it was scrapped. Frontline emergency physicians complained that they didn't need "a tool to tell us how to do our job." Incorporating the tool meant extra work, and they resented the outsider intrusion.

The artificial intelligence team had been so focused on the needs of the group that initially approached them (the cardiologists) that they neglected those who would actually use the tool: the emergency physicians.

The next time cardiologists approached the developers, the latter were savvier. This time, the cardiologists wanted an AI tool to help identify patients with low-risk pulmonary embolism (one or more blood clots in the lungs), so they could be sent home instead of hospitalized. The developers immediately reached out to emergency physicians, who would ultimately use the tool, to understand their pain points around the treatment of patients with pulmonary embolism. The developers learned that emergency physicians would use the tool only if they could be sure that patients would get the appropriate follow-up care. Cardiologists agreed to staff a special outpatient clinic for these patients.

This time, the emergency doctors accepted the tool, and it was successfully integrated into the emergency department workflow.

The key lesson here is that project leaders need to identify the frontline employees who will be the true end users of a new tool based on artificial intelligence. Otherwise, they will resist adopting it. When employees are included in the development process, they will make the tool more useful in daily work.

Successful AI project team leaders measure and reward frontline employees for accomplishing the outcomes the tool is designed to improve.

In the pulmonary embolism project described earlier, project leaders learned that emergency physicians might not use the tool because they were evaluated on how well they recognized and handled acute, common issues rather than how well they recognized and handled uncommon issues like low-risk pulmonary embolism. So the leaders worked with hospital management to change the reward system so that emergency physicians are now also evaluated on how successfully they recognize and triage low-risk pulmonary embolism patients.

It may seem obvious that it is necessary to reward employees for accomplishing the outcomes a tool is designed to improve. But this is easier said than done, because AI project team leaders usually don't control compensation decisions for these employees. Project leaders need to gain top managers' support to help change incentives for end users.

Data used to train a tool based on artificial intelligence must be representative of the target population in which it will be used. That means a lot of training data, and identifying and cleaning that data during tool design takes considerable effort. AI project team leaders need to reduce the amount of this work that falls on frontline employees.

For example, kidney specialists asked the Duke AI team for a tool to increase early detection of people at high risk of chronic kidney disease. It would help frontline primary care physicians both detect patients who needed to be referred to nephrologists and reduce the number of low-risk patients who were needlessly referred to nephrologists.

To build the tool, developers initially wanted to engage primary care practitioners in time-consuming work to spot and resolve data discrepancies between different data sources. But because it was the nephrologists, not the primary care practitioners, who would primarily benefit from the tool, PCPs were not enthusiastic about taking on additional work to build a tool they didn't ask for. So the developers enlisted nephrologists rather than PCPs to do the work on data label generation, data curation, and data quality assurance.

Reducing data work for frontline employees makes perfect sense, so why do some AI project leaders fail to do it? Because these employees know data idiosyncrasies and the best outcome measures. The solution is to involve them, but use their labor judiciously.

Developing AI tools requires frontline employees to engage in integration work to incorporate the tool into their daily workflows. Developers can boost adoption by reducing this integration work.

Developers working on the kidney disease tool avoided requesting information they could retrieve automatically. They also made the tool easier to use by color coding high-risk patients in red, and medium-risk patients in yellow.

With integration work, AI developers often want to involve frontline employees for two reasons: because they know best how a new tool will fit into workflows, and because those who are involved in development are more likely to help persuade their peers to use the tool. Rather than enlisting frontline employees indiscriminately, or avoiding them altogether, developers need to assess which aspects of AI tool development will benefit most from their labor.

Most jobs include valued tasks as well as necessary scut work. One important tactic for AI developers is not infringing on the work that frontline employees value.

What emergency physicians value is diagnosing problems and efficiently triaging patients. So when Duke's artificial intelligence team began developing a tool to better detect and manage the potentially deadly bloodstream infection known as sepsis, they tried to configure it to avoid infringing on emergency physicians' valued tasks. They built it instead to help with what these doctors valued less: blood test analysis, medication administration, and physical exam assessments.

AI project team leaders often fail to protect the core work of frontline employees because intervening around these important tasks often promises to yield greater gains. Smart AI leaders have discovered, however, that employees are much more likely to use the technology that helps them with their scut work rather than one that infringes on the work they love to do.

Introducing a new AI decision support tool can threaten to curtail employee autonomy. For example, because the AI sepsis tool flagged patients at high risk of this condition, it threatened clinicians' autonomy around diagnosing patients. So the project team invited key frontline workers to choose the best ways to test the tool's effectiveness.

AI project team leaders often fail to include frontline employees in the evaluation process because they can make it harder in the short term. When frontline employees are asked to select what will be tested, they often select the most challenging options. We have found, however, that developers cannot bypass this phase, because employees will balk at using the tools if they don't have confidence in them.

Behind the bold promise of AI lies a stark reality: AI solutions often make employees lives harder. Managers need to increase value for those working on the front lines to allow AI to function in the real world.

Katherine C. Kellogg is a professor of management and innovation and head of the Work and Organization Studies department at the MIT Sloan School of Management. Mark P. Sendak is the population health and data science lead at the Duke Institute for Health Innovation. Suresh Balu is the associate dean for innovation and partnership for the Duke University School of Medicine and director of the Duke Institute for Health Innovation.

6 tactics to make artificial intelligence work on the frontlines - STAT


Perceptron: AI that lights up the moon, improvises grammar and teaches robots to walk like humans – TechCrunch

Posted: at 10:06 pm

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers, particularly in, but not limited to, artificial intelligence, and explain why they matter.

Over the past few weeks, scientists developed an algorithm to uncover fascinating details about the moon's dimly lit and in some cases pitch-black asteroid craters. Elsewhere, MIT researchers trained an AI model on textbooks to see whether it could independently figure out the rules of a specific language. And teams at DeepMind and Microsoft investigated whether motion capture data could be used to teach robots how to perform specific tasks, like walking.

With the pending (and predictably delayed) launch of Artemis I, lunar science is again in the spotlight. Ironically, however, it is the darkest regions of the moon that are potentially the most interesting, since they may house water ice that can be used for countless purposes. It's easy to spot the darkness, but what's in there? An international team of image experts has applied ML to the problem with some success.

Though the craters lie in deepest darkness, the Lunar Reconnaissance Orbiter still captures the occasional photon from within, and the team put together years of these underexposed (but not totally black) exposures with a physics-based, deep learning-driven post-processing tool described in Geophysical Research Letters. The result is that "visible routes into the permanently shadowed regions can now be designed, greatly reducing risks to Artemis astronauts and robotic explorers," according to David Kring of the Lunar and Planetary Institute.

Let there be light! The interior of the crater is reconstructed from stray photons. Image Credits: V. T. Bickel, B. Moseley, E. Hauber, M. Shirley, J.-P. Williams and D. A. Kring

They'll have flashlights, we imagine, but it's good to have a general idea of where to go beforehand, and of course it could affect where robotic exploration or landers focus their efforts.
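The underlying idea of combining many underexposed exposures can be illustrated with a naive averaging sketch (the actual tool uses a physics-based, deep learning-driven pipeline; this toy version only shows why stacking noisy frames recovers a faint signal):

```python
import numpy as np

rng = np.random.default_rng(0)
true_scene = np.full((8, 8), 5.0)              # a faint, constant signal

# Each exposure captures only a handful of noisy photons (Poisson counts).
frames = [rng.poisson(true_scene) for _ in range(1000)]

# Averaging many noisy frames recovers the underlying signal:
# the noise shrinks roughly with 1/sqrt(number of frames).
stacked = np.mean(frames, axis=0)

single_err = np.abs(frames[0] - true_scene).mean()
stacked_err = np.abs(stacked - true_scene).mean()
```

With a thousand frames, the stacked image sits far closer to the true scene than any single exposure does, which is the intuition behind mining years of "not totally black" orbiter images.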

However useful, there's nothing mysterious about turning sparse data into an image. But in the world of linguistics, AI is making fascinating inroads into how and whether language models really know what they know. In the case of learning a language's grammar, an MIT experiment found that a model trained on multiple textbooks was able to build its own model of how a given language worked, to the point where its grammar for Polish, say, could successfully answer textbook problems about it.

"Linguists have thought that in order to really understand the rules of a human language, to empathize with what it is that makes the system tick, you have to be human. We wanted to see if we can emulate the kinds of knowledge and reasoning that humans (linguists) bring to the task," said MIT's Adam Albright in a news release. It's very early research on this front but promising in that it shows that subtle or hidden rules can be understood by AI models without explicit instruction in them.

But the experiment didn't directly address a key, open question in AI research: how to prevent language models from outputting toxic, discriminatory or misleading language. New work out of DeepMind does tackle this, taking a philosophical approach to the problem of aligning language models with human values.

Researchers at the lab posit that there's no one-size-fits-all path to better language models, because the models need to embody different traits depending on the contexts in which they're deployed. For example, a model designed to assist in scientific study would ideally only make true statements, while an agent playing the role of a moderator in a public debate would exercise values like toleration, civility and respect.

So how can these values be instilled in a language model? The DeepMind co-authors don't suggest one specific way. Instead, they imply models can cultivate more robust and respectful conversations over time via processes they call context construction and elucidation. As the co-authors explain: "Even when a person is not aware of the values that govern a given conversational practice, the agent may still help the human understand these values by prefiguring them in conversation, making the course of communication deeper and more fruitful for the human speaker."

Google's LaMDA language model responding to a question. Image Credits: Google

Sussing out the most promising methods to align language models takes immense time and resources, financial and otherwise. But in domains beyond language, particularly scientific domains, that might not be the case for much longer, thanks to a $3.5 million grant from the National Science Foundation (NSF) awarded to a team of scientists from the University of Chicago, Argonne National Laboratory and MIT.

With the NSF grant, the recipients plan to build what they describe as model gardens, or repositories of AI models designed to solve problems in areas like physics, mathematics and chemistry. The repositories will link the models with data and computing resources as well as automated tests and screens to validate their accuracy, ideally making it simpler for scientific researchers to test and deploy the tools in their own studies.

"A user can come to the [model] garden and see all that information at a glance," Ben Blaiszik, a data science researcher at Globus Labs involved with the project, said in a press release. "They can cite the model, they can learn about the model, they can contact the authors, and they can invoke the model themselves in a web environment on leadership computing facilities or on their own computer."

Meanwhile, over in the robotics domain, researchers are building a platform for AI models not with software, but with hardware, neuromorphic hardware to be exact. Intel claims the latest generation of its experimental Loihi chip can enable an object recognition model to learn to identify an object it's never seen before using up to 175 times less power than if the model were running on a CPU.

A humanoid robot equipped with one of Intel's experimental neuromorphic chips. Image Credits: Intel

Neuromorphic systems attempt to mimic the biological structures in the nervous system. While traditional machine learning systems are either fast or power efficient, neuromorphic systems achieve both speed and efficiency by using nodes to process information and connections between the nodes to transfer electrical signals using analog circuitry. The systems can modulate the amount of power flowing between the nodes, allowing each node to perform processing but only when required.
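The process-only-when-required behavior can be illustrated with a leaky integrate-and-fire neuron, a standard textbook abstraction of spiking hardware (this is a sketch, not Intel's Loihi circuitry):

```python
def lif_run(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: accumulate input, leak charge over
    time, and emit a spike (1) only when the membrane potential crosses
    the threshold. Between spikes, no output work is done."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * leak + x          # leaky integration of the incoming signal
        if v >= threshold:
            spikes.append(1)
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

# Weak input never fires; a burst of strong input produces a spike.
print(lif_run([0.2, 0.2, 0.2, 0.9, 0.9]))  # → [0, 0, 0, 1, 0]
```

Because the neuron stays silent for sub-threshold input, downstream circuitry only consumes power when there is actually something to signal, which is the efficiency argument made above.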

Intel and others believe that neuromorphic computing has applications in logistics, for example powering a robot built to help with manufacturing processes. It's theoretical at this point (neuromorphic computing has its downsides), but perhaps one day, that vision will come to pass.

Image Credits: DeepMind

Closer to reality is DeepMind's recent work in embodied intelligence, or using human and animal motions to teach robots to dribble a ball, carry boxes and even play football. Researchers at the lab devised a setup to record data from motion trackers worn by humans and animals, from which an AI system learned to infer how to complete new actions, like how to walk in a circular motion. The researchers claim that this approach translated well to real-world robots, for example allowing a four-legged robot to walk like a dog while simultaneously dribbling a ball.

Coincidentally, Microsoft earlier this summer released a library of motion capture data intended to spur research into robots that can walk like humans. Called MoCapAct, the library contains motion capture clips that, when used with other data, can be used to create agile bipedal robots at least in simulation.

"[Creating this dataset] has taken the equivalent of 50 years over many GPU-equipped [servers], a testament to the computational hurdle MoCapAct removes for other researchers," the co-authors of the work wrote in a blog post. "We hope the community can build off of our dataset and work to do incredible research in the control of humanoid robots."

Peer review of scientific papers is invaluable human work, and it's unlikely AI will take over there, but it may actually help make sure that peer reviews are actually helpful. A Swiss research group has been looking at model-based evaluation of peer reviews, and their early results are mixed, in a good way. There wasn't some obvious good or bad method or trend, and publication impact rating didn't seem to predict whether a review was thorough or helpful. That's okay, though, because although quality of reviews differs, you wouldn't want there to be a systematic lack of good review everywhere but major journals, for instance. Their work is ongoing.

Last, for anyone concerned about creativity in this domain, heres a personal project by Karen X. Cheng that shows how a bit of ingenuity and hard work can be combined with AI to produce something truly original.


The Download: The Merge arrives, and China's AI image censorship – MIT Technology Review

Posted: at 10:06 pm

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 Social media's biggest companies appeared before the US Senate
Past and present Meta, Twitter, TikTok and YouTube employees answered questions on social media's impact on homeland security. (TechCrunch)
+ Retaining user attention is their algorithms' primary purpose. (Protocol)
+ TikTok's representative avoided committing to cutting off China's access to US data. (Bloomberg $)

2 China wants to reduce its reliance on Western tech
Investing heavily in native firms is just one part of its multi-year plan. (FT $)
+ Cybercriminals are increasingly interested in Chinese citizens' personal data. (Bloomberg $)
+ The FBI accused him of spying for China. It ruined his life. (MIT Technology Review)

3 California is suing Amazon
Accusing it of triggering price rises across the state. (WSJ $)
+ The two-year fight to stop Amazon from selling face recognition to the police. (MIT Technology Review)

4 Russia is waging a surveillance war on its own citizens
Its authorities are increasingly targeting ordinary people, not known dissidents or journalists. (Slate $)
+ Russian troops are still fleeing northern Ukraine. (The Guardian)

5 Dozens of AIs debated 100 years of climate negotiations in seconds
They're evaluating which policies are most likely to be well-received globally. (New Scientist $)
+ Patagonia's owner has given the company away to fight climate change. (The Guardian)

6 Iranian hackers hijacked their victims' printers to deliver ransom notes
The three men have been accused of targeting people in the US, UK and Iran. (Motherboard)

7 DARPA's tiny plane could spy from almost anywhere
The unmanned vehicle could also carry small bombs. (WP $)
+ The Taliban have crashed a helicopter left behind by the US military. (Motherboard)

8 Listening to stars helps astronomers to assess what's inside them
The spooky-sounding acoustic waves transmit a lot of data. (Economist $)
+ The James Webb Space Telescope has spotted newborn stars. (Space)
+ The next Space Force chief thinks the US needs a satellite constellation to combat China. (Nikkei Asia)

9 We'll never be able to flip and turn like a cat
But the best divers and gymnasts are the closest we can get. (The Atlantic $)
+ The best robotic jumpers are inspired by nature. (Quanta)

10 This robot is having a laugh
Even if it's not terribly convincing. (The Guardian)

Quote of the day

"Tesla has yet to produce anything even remotely approaching a fully self-driving car."

Briggs Matsko, a Tesla owner, explains his rationale for suing the company over the deceptive way it marketed its driver-assistance systems, according to Reuters.


Can Conversational AI Improve the Online Retail Experience? – CMSWire

Posted: at 10:06 pm

The pandemic, which largely restricted physical interaction, meant that both retailers and consumers had to learn and adapt to digital communication tools.

Advancements in the retail and ecommerce sector have helped provide consumers with more tailor-made product recommendations and sophisticated guidance to eliminate friction throughout the shopping experience.

While having limited face-to-face interaction with customers and potential buyers, retailers have looked to the advanced capabilities embedded within conversational artificial intelligence (AI).

The last few years of the pandemic, which largely restricted physical interaction, meant that both retailers and consumers had to learn and adapt to digital communication tools. Conversational AI not only assists shoppers as they browse through the website, but it puts them in direct contact with the products and services they are looking for right from the start.

Instead of relying on more conventional chatbots, which saw a sharp rise during the early months of the pandemic, businesses can use conversational AI to minimize mundane tasks while improving the shopping experience, saving time by drawing on deep machine learning and natural language processing.

Researchers in the field of conversational AI project that by 2023 around 70% of chatbot conversations will be related to the retail sector.

As more brands move online and market competition accelerates, the online customer experience will become smoother and more refined, which could ultimately reduce the need for real-time human engagement.

Conversational AI has moved beyond traditional chatbots, such as those found at the bottom right of some websites. Conversational AI, deep machine learning (DML) and language processing algorithms (LPA) have improved immensely over the last decade. Consumers have already become accustomed to the likes of Siri on iPhones and Amazon Alexa, which shows both the progress conversational AI has made and the difference it makes in our everyday lives.

With a whole host of innovative opportunities, ecommerce retailers and ecommerce technology will be able to enhance and improve the relationship between brands and consumers without encountering friction throughout most of the communication process.

To better understand these opportunities and what ecommerce retailers have done to improve the online shopping experience for consumers, shoppers and potential buyers, let's take a look at some of the challenges and benefits that conversational AI can bring to the table.

Consumer trends are ever-changing, and in a dynamic landscape, this requires brands to find more digitally engaging methods that will help continuously improve the online shopping experience, highlight key offerings and remain a competitive player.

Globally, the number of digital buyers surpassed 2.14 billion at the end of 2021, which is up from the more than 1.66 billion recorded in 2016. The surge in digital shoppers alongside a growing tech-savvy population has meant that market competition has only become more challenging.

To face and overcome these challenges, online brands will need to appeal to the digital community through more personalized practices and efforts that could drive brand loyalty.

Instead of looking toward traditional solutions, which for some time included FAQ pages, chatbots, voicebots or AI assistants that were programmed using language processing methods to resolve client issues, brands can tap into the opportunities that lie within algorithmic data and information collection.

Conversational AI should be able to understand consumer questions, retrieve answers and deliver results adequately. This would mean that AI algorithms will be able to read shopper trends faster, pick up when a customer shops for specific items and help recommend shopper-specific products. Online brands and ecommerce retailers will also be able to set up shopper profiles to create measurable key data points.
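A toy sketch of the understand-retrieve-answer loop described above, with simple keyword overlap standing in for real natural language understanding (all questions and answers here are hypothetical):

```python
# Hypothetical FAQ store for an online retailer.
faq = {
    "What is your return policy?": "Returns are accepted within 30 days.",
    "Do you ship internationally?": "Yes, we ship to most countries.",
}

def answer(question: str) -> str:
    """Return the stored answer whose question shares the most words with
    the shopper's question: a minimal stand-in for NLP-based retrieval."""
    q_words = set(question.lower().split())
    best = max(faq, key=lambda k: len(q_words & set(k.lower().split())))
    return faq[best]

print(answer("what is the return policy"))  # → "Returns are accepted within 30 days."
```

A production system would replace the word-overlap scoring with a trained language model, but the retrieve-then-deliver structure is the same.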

With access to previous conversations and interactions, brands will be able to build a concrete picture of who their shoppers are. This would include specific traits such as age, gender and location, among others. Ultimately, this would mean online retailers can build a more digitally fluid online interaction.

Having more digital natives and tech-savvy consumers while trading in a highly competitive market means that the focus for online retailers is not on how they can attract shoppers but rather on how they can retain them more effectively.

To better appeal to and retain shoppers, brands will need to focus on three key components:

The understanding here is to turn interested shoppers into paying shoppers while at the same time properly imprinting brand loyalty and ensuring a convenient shopping experience without the need for physical human interaction.

Related Article: How Will Conversational AI Transform Customer Experience?

It's already possible for AI and deep machine learning to pick up on consumer trends and behavior through the type of websites they visit, social media content they like and share, online profiles they interact with and even the keywords they search for.

As our software becomes increasingly good at spotting patterns, these digital protocols will be able to give online retailers insights based on consumer behavior.

These insights may not be completely accurate in every case. They do, however, lend themselves to building predictive models, which could help further advance the online retail experience.

Building predictive models can help to:

With the help of artificial intelligence, ecommerce brands can build predictive models that can closely relate to changing consumer behavior. As online users start to follow new trends based on social media platforms or other digital native communication channels, retailers can adjust their customer experience to focus on them.

While this is a constantly changing process, predictive models that deliver accurate results time and again will let retailers fill customer-related needs without falling behind on overhyped trends outside their scope of interest. This is one of the many reasons why conversational AI and real-time feedback from users are crucial to creating customer-tailored recommendations.

In a nutshell, we see how these practices can help improve cross-selling and up-selling as they analyze consumer trends in the broader digital sphere, track a customer's previous spending habits and preferences and monitor queries or issues raised with customer support.
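As a minimal illustration of ranking likely next purchases from past spending habits (purely hypothetical data and logic, not any retailer's system):

```python
from collections import Counter

# Hypothetical purchase history for one shopper.
history = ["shoes", "shirt", "shoes", "socks", "shoes", "shirt"]

def recommend(history, top_n=2):
    """Rank product categories by past purchase frequency, a toy stand-in
    for the behavioral predictive models discussed above."""
    counts = Counter(history)
    return [item for item, _ in counts.most_common(top_n)]

print(recommend(history))  # → ['shoes', 'shirt']
```

Real predictive models would also weigh recency, browsing behavior and cross-customer patterns, but even this frequency ranking shows how spending habits translate into cross-selling candidates.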

Although building predictive models is seemingly harder and more complex than simply implementing conversational AI within the online shopping experience, it should remain a crucial factor worth considering that can help keep brands ahead within the competitive marketplace.

While we are well aware of the technological benefits housed within conversational AI, there are numerous challenges ecommerce retailers will still need to face. Difficulties can range across platforms and retailers, as they largely depend on the level of AI software used.

Already we see a tremendous amount of backlash forming around the use of AI that looks to capture consumer information to help build more user-centric algorithms. We see this in things such as social media feeds that are constantly changing as soon as we start interacting with a specific type of profile, brand or online personality.

This resonates with the larger picture that represents difficulties for many ecommerce retailers looking to gain more online exposure and build hyper-personalized customer experiences.

Some of the limitations within conversational AI include:

Among these challenges and limitations, it becomes clear how conversational artificial intelligence still requires further improvements to become more centered around the physical human experience.

While many customers tend to feel separated from the brand or online store when interacting with chatbots or voice assistants, greater dissatisfaction from customers would lead to some brands and online retailers stepping in and resolving issues themselves rather than relying on artificial intelligence.

Related Article: Top Conversational AI Metrics for CX Professionals

Will artificial intelligence give ecommerce retailers tremendous benefits? Yes, though it still can't replace the full human element that has helped it develop and expand into what it is today.

There are several ways through which artificial intelligence software, deep machine learning and natural language processing have helped shape a more profound understanding of the online shopping experience. Through various capabilities and complex algorithms, these systems can build and deliver customer focus insights that can further initiate a more personalized shopping experience.

Despite its dominant online presence and robust benefits, brands and online retailers will need to consider the long-term potential rather than focusing on near-term results. Regardless of where you stand on conversational AI, it's clear that this software has permanently changed the way we work, communicate and shop online.

