
Category Archives: Ai

Fable Studio Introduces ‘Virtual Beings’ Ready to Converse With You – Voicebot.ai

Posted: December 29, 2020 at 12:22 am

Originally published on December 28, 2020 at 3:00 pm

Virtual reality developer Fable Studio has introduced two new artificial intelligence-powered personalities capable of holding a conversation with a human in a video call. Charlie and Beck are what Fable calls virtual beings, part of a growing number of interactive AIs that can imitate a human well enough to serve as pseudo-companions.

Fable Studio is best known for its Emmy-winning Wolves in the Walls project. That's where the company's first virtual being, Lucy, came from. The 8-year-old star of the Wolves in the Walls book and project, she's since become an independent character that people can interact with, rather than being limited to her story's 1988 setting. For the new personalities, Fable Studio designed adults living in the present. Olympic athlete Beck is a Canadian who will be rowing at Tokyo next year, while Charlie is an introverted French musician and poet who studies art history and lives in Paris. All of the AI characters rely on the Fable Wizard AI tool, designed by Fable Studio to connect dialogue, voice, animation, and visuals into a seamless whole during the video call.

Demand to talk to the AI personalities is high enough that Fable Studio set up a waitlist. The idea is to limit the number of people speaking to Beck or Charlie at any one time and to monitor for people who are just trying to test how realistic the characters are. To build the personalities, dialogue written by people at Fable is expanded with GPT-3. The language model, introduced this year by OpenAI, makes the AI more flexible and able to respond to questions and statements that may not relate to anything in the initial text. Microsoft nabbed an exclusive license for the model in September, but that doesn't apply to the GPT-3 API that OpenAI launched in June. As with other AIs, Beck and Charlie will learn from every conversation and become better at imitating human interaction through the prism of the personalities sketched out by their creators.
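As a rough illustration of how hand-written dialogue can seed a language model, the sketch below (an assumption on my part, not Fable's actual code) assembles a persona description and example exchanges into a few-shot prompt that a model like GPT-3 could then complete in character. The `build_persona_prompt` helper and the prompt format are hypothetical; the persona details come from the article.

```python
# Hypothetical sketch: combine a persona description with hand-written
# example exchanges to form a few-shot prompt for a large language model.

def build_persona_prompt(name, bio, sample_lines, user_message):
    """Assemble a few-shot prompt seeding an LLM with a character persona."""
    examples = "\n".join(
        f"User: {q}\n{name}: {a}" for q, a in sample_lines
    )
    return (
        f"{name} is {bio}\n\n"
        f"{examples}\n"
        f"User: {user_message}\n"
        f"{name}:"
    )

prompt = build_persona_prompt(
    name="Charlie",
    bio="an introverted French musician and poet who studies art history in Paris.",
    sample_lines=[
        ("What are you working on?",
         "A small piano piece, though the words come slower than the notes."),
    ],
    user_message="Do you like living in Paris?",
)
print(prompt)
```

The model's completion of the trailing `Charlie:` line would be Charlie's in-character reply; Fable's actual pipeline also ties the text to voice and animation.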

Developers are still exploring the best way of fleshing out the personality of an AI. The value of an AI with a personality is easy to imagine if it encourages people to interact with it more and to feel comfortable conversing with it. The actual shape of the personality can be almost anything. When Russia's Sberbank launched a voice assistant as part of its restructuring as a tech company, it set up three separate personalities for the AI. Sber, Joy, and Athena each have a surprisingly detailed personality and background created by Sber, but the voices are the only things that differ, as the personalities can all do the same tasks. Closer to Fable Studio's vision is Xiaoice, a chatbot platform originally part of Microsoft before it spun out as an independent company. Used by more than 660 million people, Xiaoice is able to understand emotions and sentiment and even anticipate what users might say. The AI was modeled after a teenage girl and is enormously popular as a virtual girlfriend in China. Though Xiaoice has been limited to text alone, it could be even more popular if married to visual and voice services as well.


Eric Hal Schwartz is a Staff Writer and Podcast Producer for Voicebot.AI. Eric has been a professional writer and editor for more than a dozen years, specializing in the stories of how science and technology intersect with business and society. Eric is based in New York City.


Posted in Ai | Comments Off on Fable Studio Introduces ‘Virtual Beings’ Ready to Converse With You – Voicebot.ai

The Turing Test is obsolete. It’s time to build a new barometer for AI – Fast Company

Posted: at 12:22 am

This year marks 70 years since Alan Turing published the paper introducing the concept of the Turing Test in response to the question, "Can machines think?" The test's goal was to determine whether a machine can exhibit conversational behavior indistinguishable from a human's. Turing predicted that by the year 2000, an average human would have less than a 70% chance of distinguishing an AI from a human in an imitation game where the identity of the responder, human or AI, is hidden from the evaluator.

Why haven't we as an industry been able to achieve that goal, 20 years past that mark? I believe the goal put forth by Turing is not a useful one for AI scientists like myself to work toward. The Turing Test is fraught with limitations, some of which Turing himself debated in his seminal paper. With AI now ubiquitously integrated into our phones, cars, and homes, it's become increasingly obvious that people care much more that their interactions with machines be useful, seamless, and transparent, and that the concept of machines being indistinguishable from a human is out of touch. Therefore, it is time to retire the lore that has served as an inspiration for seven decades and set a new challenge that inspires researchers and practitioners equally.

In the years that followed its introduction, the Turing Test served as the AI north star for academia. The earliest chatbots of the '60s and '70s, ELIZA and PARRY, were centered around passing the test. As recently as 2014, the chatbot Eugene Goostman was declared to have passed the Turing Test by tricking 33% of the judges into believing it was human. However, as others have pointed out, the bar of fooling 30% of judges is arbitrary, and even then the victory felt outdated to some.

Still, the Turing Test continues to drive the popular imagination. OpenAI's Generative Pre-trained Transformer 3 (GPT-3) language model has set off headlines about its potential to beat the Turing Test. Similarly, I'm still asked by journalists, business leaders, and other observers, "When will Alexa pass the Turing Test?" Certainly, the Turing Test is one way to measure Alexa's intelligence, but is it consequential and relevant to measure Alexa's intelligence that way?

To answer that question, let's go back to when Turing first laid out his thesis. In 1950, the first commercial computer had yet to be sold, groundwork for fiber-optic cables wouldn't be published for another four years, and the field of AI hadn't been formally established; that would come in 1956. We now have 100,000 times more computing power on our phones than Apollo 11 did, and together with cloud computing and high-bandwidth connectivity, AIs can now make decisions based on huge amounts of data within seconds.

While Turing's original vision continues to be inspiring, interpreting his test as the ultimate mark of AI's progress is limited by the era in which it was introduced. For one, the Turing Test all but discounts AI's machine-like attributes of fast computation and information lookup, features that are among modern AI's most effective. The emphasis on tricking humans means that for an AI to pass Turing's test, it has to inject pauses in responses to questions like, "Do you know the cube root of 3434756?" or, "How far is Seattle from Boston?" In reality, the AI knows these answers instantaneously, and pausing to make its answers sound more human isn't the best use of its skills. Moreover, the Turing Test doesn't take into account AI's increasing ability to use sensors to hear, see, and feel the outside world. Instead, it is limited simply to text.

To make AI more useful today, these systems need to accomplish our everyday tasks efficiently. If you're asking your AI assistant to turn off your garage lights, you aren't looking to have a dialogue. Instead, you'd want it to fulfill that request and notify you with a simple acknowledgment: "ok" or "done." Even when you engage in an extensive dialogue with an AI assistant on a trending topic or have a story read to your child, you'd still like to know it is an AI and not a human. In fact, fooling users by pretending to be human poses a real risk. Imagine the dystopian possibilities, as we've already begun to see with bots seeding misinformation and the emergence of deepfakes.

Instead of obsessing about making AIs indistinguishable from humans, our ambition should be building AIs that augment human intelligence and improve our daily lives in a way that is equitable and inclusive. A worthy underlying goal is for AIs to exhibit human-like attributes of intelligenceincluding common sense, self-supervision, and language proficiencyand combine machine-like efficiency such as fast searches, memory recall, and accomplishing tasks on your behalf. The end result is learning and completing a variety of tasks and adapting to novel situations, far beyond what a regular person can do.

This focus informs current research into areas of AI that truly matter: sensory understanding, conversing, broad and deep knowledge, efficient learning, reasoning for decision-making, and eliminating any inappropriate bias or prejudice (i.e., fairness). Progress in these areas can be measured in a variety of ways. One approach is to break a challenge into constituent tasks. For example, Kaggle's Abstraction and Reasoning Challenge focuses on solving reasoning tasks the AI hasn't seen before. Another approach is to design a large-scale real-world challenge for human-computer interaction, such as the Alexa Prize Socialbot Grand Challenge, a competition focused on conversational AI for university students.

In fact, when we launched the Alexa Prize in 2016, we had an intense debate on how the competing socialbots should be evaluated. Were we trying to convince people that the socialbot is human, deploying a version of the Turing Test? Or were we trying to make the AI worthy of conversing naturally, to advance learning, provide entertainment, or simply offer a welcome distraction?

We landed on a rubric that asks socialbots to converse coherently and engagingly for 20 minutes with humans on a wide range of popular topics, including entertainment, sports, politics, and technology. During the development phases leading up to the finals, customers score the bots on whether they'd like to converse with them again. In the finals, independent human judges assess coherence and naturalness and assign a score on a 5-point scale; if any of the socialbots converses for an average duration of 20 minutes and scores 4.0 or higher, it will meet the grand challenge. While the grand challenge hasn't been met yet, this methodology is guiding the development of AI with human-like conversational abilities powered by deep learning-based neural methods. It prioritizes methods that allow AIs to exhibit humor and empathy where appropriate, all without pretending to be human.
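The thresholds in the rubric are concrete enough to encode directly. A minimal sketch follows; this is my own encoding of the stated criteria, not Amazon's scoring code, and `meets_grand_challenge` is a hypothetical helper.

```python
# Hypothetical encoding of the Alexa Prize grand-challenge rubric:
# average conversation length >= 20 minutes, and mean judge rating
# >= 4.0 on the 5-point scale.

def meets_grand_challenge(durations_min, judge_scores):
    """Check the grand-challenge thresholds for one socialbot."""
    avg_duration = sum(durations_min) / len(durations_min)
    avg_score = sum(judge_scores) / len(judge_scores)
    return avg_duration >= 20.0 and avg_score >= 4.0

# A bot averaging ~20.7 minutes with a mean judge score of ~4.1 passes.
print(meets_grand_challenge([22, 19, 21], [4.2, 4.0, 4.1]))

# A bot with high scores but short conversations does not.
print(meets_grand_challenge([10, 10], [4.5, 4.5]))
```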

The broad adoption of AI like Alexa in our daily lives is another incredible opportunity to measure progress in AI. While these AI services depend on human-like conversational skills to complete both simple transactions (e.g., setting an alarm) and complex tasks (e.g., planning a weekend), to maximize utility they are going beyond conversational AI to ambient AI, where the AI answers your requests when you need it, anticipates your needs, and fades into the background when you don't. For example, Alexa can detect the sound of glass breaking and alert you to take action. If you set an alarm while going to bed, it suggests turning off a connected light downstairs that's been left on. Another aspect of such AIs is that they need to be expert in a large, ever-increasing number of tasks, which is only possible with more generalized learning capability instead of task-specific intelligence. Therefore, for the next decade and beyond, the utility of AI services, with their conversational and proactive assistance abilities on ambient devices, is a worthy test.

None of this is to denigrate Turing's original vision; Turing's imitation game was designed as a thought experiment, not as the ultimate test for useful AI. However, now is the time to retire the Turing Test and get inspired by Alan Turing's bold vision to accelerate progress in building AIs that are designed to help humans.

Rohit Prasad is vice president and head scientist of Alexa at Amazon.



UCF researchers developing AI to help students with autism – FOX 35 Orlando

Posted: at 12:21 am


UCF researchers want to create artificial intelligence to help students who have developmental disabilities.

ORLANDO, Fla. - Nine-year-old Aiden sat in a dark tent, talking with "Zoobee."

Zoobee is an animated character on a computer who talks with students who have developmental disabilities, like Aiden.

"He does have a cognitive delay. Sometimes coupled with that comes that social-emotional growth that he could benefit from," said Dr. Karyn Scott, principal of UCP Pine Hills, where Aiden went to school.

For now, Zoobee is controlled by an actual person who replies to Aiden.

However, researchers with the University of Central Florida (UCF) are trying to gather enough data from Aiden and the other test participants to create artificial intelligence (AI) for Zoobee. That way, the computer could communicate with students like Aiden, who have trouble relating to people on a social and emotional level.


The ultimate goal of the project is to have Zoobee on a tablet or a screen that students could take with them to class, where it would interact with them based on their facial expressions.

"Zoobee is the social-emotional coach through the whole project. Zoobee's the one who helps explain to the child what an emotion feels like, even what it looks like, to a degree," said Dr. Rebecca Hines, a UCF professor working on the project.

For example, Zoobee has a heart that changes colors based on its emotions.

"My heart, sometimes it shows what I'm feeling, so if I'm really happy it turns green, sometimes if I'm feeling sad it turns blue, if I'm mad it turns red, or if I'm worried it turns yellow," Zoobee explained when asked.
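Zoobee's color scheme is simple enough to express as a lookup table. A small illustrative sketch follows; the emotion-to-color mapping comes from Zoobee's explanation above, while the function name and the fallback value are my assumptions.

```python
# Zoobee's heart colors as described in the article, as a lookup table.
HEART_COLORS = {
    "happy": "green",
    "sad": "blue",
    "mad": "red",
    "worried": "yellow",
}

def heart_color(emotion):
    """Return the color Zoobee's heart shows for a given emotion.

    Unlisted emotions fall back to "neutral" (an assumption; the article
    only names the four mappings above).
    """
    return HEART_COLORS.get(emotion.lower(), "neutral")

print(heart_color("happy"))  # green
```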


At one point in the study, Aiden deliberately tried to make Zoobee upset just to see its heart change color. It was a moment that encouraged researchers, seeing Aiden understand that his actions could elicit specific emotional responses.

"So in this case, it's a nonthreatening way to learn that. If you noticed in that exchange, he eventually apologized for it. So those of us outside Zoobee's world, we're all looking at each other saying that's perfect, because we want every child to understand it does have an impact on the other person when you say something hurtful," Hines said.

It will be a five-year project involving special-needs students from several schools. Once the researchers create a fully AI-driven Zoobee, they plan to let schools across the county use it for free.




2-Acre Vertical Farm Run By AI And Robots Out-Produces 720-Acre Flat Farm – Intelligent Living

Posted: at 12:21 am

Plenty is an ag-tech startup in San Francisco, co-founded by Nate Storey, that is reinventing farms and farming. Storey, who is also the company's chief science officer, says the future of farms is vertical and indoors, because that way food can grow anywhere in the world, year-round; and future farms will employ robots and AI to continually improve the quality of growth for fruits, vegetables, and herbs. Plenty does all these things, and uses 95% less water and 99% less land because of it.

In recent years, farmers on flat farms have been using new tools to make farming better or easier. They're using drones and robots to improve crop maintenance, while artificial intelligence is also on the rise, with over 1,600 startups and total investments reaching tens of billions of dollars; Plenty is one of those startups. However, flat farms still use a lot of water and land, while a Plenty vertical farm can produce the same quantity of fruits and vegetables as a 720-acre flat farm on only 2 acres.

Storey said:

Vertical farming exists because we want to grow the world's capacity for fresh fruits and vegetables, and we know it's necessary.

Plenty's climate-controlled indoor farm has rows of plants growing vertically, hung from the ceiling. There are sun-mimicking LED lights shining on them, robots that move them around, and artificial intelligence (AI) managing all the variables of water, temperature, and light, continually learning and optimizing how to grow bigger, faster, better crops. These futuristic features ensure every plant grows perfectly year-round. The conditions are so good that the farm produces 400 times more food per acre than an outdoor flat farm.
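The figures quoted in the article can be sanity-checked with simple arithmetic; these are Plenty's reported numbers, not independent measurements. Matching a 720-acre flat farm's output on 2 acres is a 360x land-use advantage, while "400 times more food per acre" is a separate per-acre yield claim.

```python
# Reported figures from the article; the arithmetic just restates them.
vertical_acres = 2
flat_acres = 720

# Land-use advantage implied by matching the flat farm's total output.
land_advantage = flat_acres / vertical_acres
print(land_advantage)  # 360.0

# "95% less water" means roughly 5% of the flat-farm water for the
# same output.
water_fraction = round(1 - 0.95, 2)
print(water_fraction)  # 0.05
```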

Storey said:

400X greater yield per acre of ground is not just an incremental improvement, and using almost two orders of magnitude less water is also critical in a time of increasing environmental stress and climate uncertainty. All of these are truly game-changers, but they're not the only goals.

Another perk of vertical farming is locally produced food. The fruits and vegetables aren't grown 1,000 miles or more from a city; instead, they grow in a warehouse nearby. That means many transportation miles are eliminated, which helps reduce millions of tons of yearly CO2 emissions as well as prices for consumers. Imported fruits and vegetables are more expensive, so society's most impoverished are at an extreme nutritional disadvantage. Vertical farms could solve this problem.

Storey said:

Supply-chain breakdowns resulting from COVID-19, and natural disruptions like this year's California wildfires, demonstrate the need for a predictable and durable supply of products, which can only come from vertical farming.

Plentys farms grow non-GMO crops and dont use herbicides or pesticides. They recycle all water used, even capturing the evaporated water in the air. The flagship farm in San Francisco is using 100% renewable energy too.

Furthermore, all the packaging is 100% recyclable, made of recycled plastic, and specially designed to keep the food fresh longer to reduce food waste.

Storey told Forbes:

The future will be quite remarkable. And I think the size of the global fresh fruit and vegetable industry will be multiples of what it is today.

Plenty has already received $400 million in investment capital from SoftBank, former Google chairman Eric Schmidt, and Amazon's Jeff Bezos. It has also struck a deal with Albertsons stores in California to supply 430 stores with fresh produce.

Ideally, the company will branch out, opening vertical farms across the country and beyond. There can never be too many places graced by better food grown at a lower environmental cost.




Dear reporters, leave everything to AI, says Alibaba-Xinhua venture – Nikkei Asia

Posted: at 12:21 am

BEIJING -- Artificial intelligence is revolutionizing the news industry. The technology is good enough to write stories all by itself or personalize newsletters and send them to subscribers. But these examples tell only part of the story.

Chinese startup Xinhua Zhiyun fuses AI technology and media. Founded in 2017 by state-run Xinhua News Agency and e-commerce giant Alibaba Group, Xinhua Zhiyun has rolled out a potpourri of AI-based products and solutions to automate various aspects of news gathering, reporting and posting.

They include a "media brain," which produces machine-generated content; an intelligent media integration platform, which automates the news organization's entire process; a cultural travel intelligent communication platform, which automatically produces and uploads travel videos; a cloud news center, which automates livestreams of conferences and exhibitions; and what it calls a Magic shooting robot, which automatically films short videos.

As of November, the company had applied for more than 1,400 patents.

Under the control of a state-owned media company, Xinhua Zhiyun uses AI for everything, from planning and information gathering to writing, editing, reviewing and publishing.

Wang Min, Xinhua Zhiyun's chief product officer, says the primary objective is to allow content producers to "collect, process and manage news resources faster and better."

In reporting unpredictable events like traffic accidents and earthquakes, emergency recognition AI collects and selects related photos and videos. Working with a robot specializing in handling news stories about earthquakes and floods, it then automatically produces news content. The reports are checked and improved by other specialty robots, which are designed for specific tasks such as fact-checking, text recognition, facial recognition as well as tracking and topic selection.
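The multi-stage review described above can be pictured as a pipeline of specialty checkers applied in sequence. The following is only a schematic sketch: the stage names come from the article, but the functions, the annotation mechanism, and the chaining code are my invention, not Xinhua Zhiyun's.

```python
# Schematic sketch of a specialty-robot review pipeline. Each stage
# stands in for one of the article's task-specific robots; here they
# simply annotate the draft to show the order of processing.

def fact_check(draft):
    return draft + " [fact-checked]"

def text_recognition(draft):
    return draft + " [text reviewed]"

def topic_selection(draft):
    return draft + " [topic tagged]"

PIPELINE = [fact_check, text_recognition, topic_selection]

def review(draft):
    """Run a machine-generated draft through each specialty stage in turn."""
    for stage in PIPELINE:
        draft = stage(draft)
    return draft

result = review("M6.1 earthquake reported; photos and video attached.")
print(result)
```

A real system would replace each stage with a model (face recognition, tracking, and so on) and could reject or revise the draft rather than merely annotating it.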

36Kr recently tested Xinhua Zhiyun's product for automated video reporting at an exhibition. When a reporter selected a video clip and a text template in the cloud news center, a video news article was produced within a minute. It was like using PowerPoint.

"We are trying to offer technological solutions to problems confronting news media, which are at a major turning point, in hopes of using AI to revitalize media," Wang said.

Whether for text, audio or video formats, news stories require teams of specialists. As information and communications technologies advance and spill more media content onto computer networks, markets are demanding higher skills and expertise from journalists and other media professionals.

Information providers are now required to be experts in planning, information gathering and editing -- and to be capable of writing stories as well as shooting and editing videos. They are also expected to have in-depth knowledge of how to distribute and market their stories. And they have to be faster than their peers at myriad other news providers.

Xinhua Zhiyun has covered the annual National People's Congress and Chinese People's Political Consultative Conference -- two key political events in China -- for three consecutive years, and more than 900 media companies have adopted Xinhua Zhiyun offerings.

The startup's Magic video-shooting robot comes armed with multiple cameras, including high-definition, 360-degree surround-view cameras. It is also equipped with edge computing nodes with enormous number-crunching power. Magic uses a range of capabilities to move about, shoot videos autonomously, recognize target persons, upload videos to the cloud in real time and produce vlog content. It navigates via a hybrid positioning and navigation system that combines a laser-vision multisensor system with person recognition and tracking.

Zhao Ji, who heads Xinhua Zhiyun's robotics unit, says Magic can learn shooting skills from human cameramen. "I am convinced," he said, "that Magic's skills can be upgraded to levels of professional cameramen in the future."

The company is also planning to expand into tourism, financial services and multichannel networks as well as the organizing and holding of conventions and sporting events.

As for its foray into the tourism sector, the company has developed an automated video creation and distribution system that allows general tourists to interact via short travel videos. The system is designed for automated shooting, production and distribution of videos to allow travel agencies to offer video blog services to their customers. The system has already been installed at over 30 of China's major tourist spots.

As for conventions and sporting events, Xinhua Zhiyun's robots can shoot, produce, distribute and archive videos. The cloud news center allows a single person to do all the work involved in livestreaming. With one click, the center connects with hundreds of media companies for real-time synchronization of materials.

"Xinhua Zhiyun will continue developing video production AI robots to simplify content production while improving quality and productivity so that creators can focus on content itself," Wang says.

36Kr, a Chinese tech news portal founded in Beijing in 2010, has more than 150 million readers worldwide. Nikkei announced a partnership with 36Kr on May 22, 2019.




Turing Test At 70: Still Relevant For AI (Artificial Intelligence)? – Forbes

Posted: November 29, 2020 at 6:17 am

ENGLAND - 1958: English Electric developed several notable pioneering computers during the 1950s. The DEUCE (Digital Electronic Universal Computing Engine) was the first commercially produced digital model and was developed from earlier plans by Alan Turing. Thirty were sold, and in 1956 one cost 50,000. The DEUCE took up a huge space compared to modern computers and ran on 1,450 thermionic valves, which grew hot; blowouts were frequent. However, the DEUCE proved a popular innovation, and some models were still working into the 1970s. Photograph by Walter Nurnberg, who transformed industrial photography after WWII using film studio lighting techniques. (Photo by Walter Nurnberg/SSPL/Getty Images)

When computers were still in their nascent stages, Alan Turing published his legendary paper, "Computing Machinery and Intelligence," in the journal Mind in 1950. In it, he set forth the intriguing question: Can machines think?

At the time, the notion of Artificial Intelligence (AI) did not exist (this would not come until about six years later, at a conference at Dartmouth College). Yet Turing was already thinking about the implications of this category.

In his paper, he described a framework to determine if a machine had intelligence. This essentially involved a thought experiment. Assume there are three players in a game. Two are human and the other is a computer. An evaluator, who is a human, then asks open-ended questions of the players. If this person cannot determine who is the human, then the computer is considered to be intelligent.

The Turing Test was quite ingenious because there was no need to define intelligence, which is fraught with complexities. Even today this concept is far from clear-cut.

Keep in mind that Turing thought the test would ultimately be cracked by 2000 or so. But interestingly enough, this turned out to be way too optimistic. The Turing Test has remained elusive for AI systems.

"If Alan Turing were alive, he might be shocked that given 175 billion parameters from GPT-3 we are still unable to pass his test, but we will soon," said Ben Taylor, who is the Chief AI Evangelist at DataRobot.

So why has it been so difficult to beat the test? A key reason is that it can be tricked. If you ask a nonsensical question, the results will often be non-humanlike. Let's face it: people are very good at detecting when something is not quite right.

"When you ask a GPT-3 system how many eyes the sun has, it will respond that there is one, and when asked who was the president of the U.S. in 1600, the answer will be Queen Elizabeth I," said Noah Giansiracusa, who is an Assistant Professor of Mathematics and Data Science at Bentley University. "The basic problem seems to be that GPT-3 always tries in earnest to answer the question, rather than refusing and pointing out the absurdity and unanswerability of a question."

But over time, it seems reasonable that these issues will be worked out. The fact is that AI technology is continuing to progress at a staggering pace.

There may also be a need for another test. "Since the Turing Test, humans have actually gained much more insight into our own minds, through fMRI and through what makes our own intelligence superior," said Taylor. "This insight into our own brains justifies changing the goals of a test beyond mimicking behavior. Defining a new test might help us get out of the deep-learning rut, which is currently insufficient for achieving AGI, or Artificial General Intelligence. The Turing Test was our moonshot, so let's figure out our Mars-shot."

Over the years, other tests have emerged. According to Druhin Bala, who is the CEO and co-founder of getchefnow.com, there are several.

But my favorite is the Wozniak Test (yes, from the co-founder of Apple). This is where a robot can enter a stranger's home and make a cup of coffee!

Now, of course, all these tests have their own issues. The fact is that no test is foolproof. But in the coming years there will probably be new ones, and this will help with the development of AI.

"The Turing Test is brilliant in its simplicity and elegance, which is why it's held up so well for 70 years," said Zach Mayer, who is the Vice President of Data Science at DataRobot. "It's an important milestone for machine intelligence, and GPT-3 is very close to passing it. And yet, as we pass this milestone, I think it's also clear that GPT-3 is nowhere near human-level intelligence. I think discovering another Turing Test for AI will illuminate the next step on our journey towards understanding human intelligence."

Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction and The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems. He also has developed various online courses, such as for the COBOL and Python programming languages.



Opinion/Middendorf: Artificial intelligence and the future of warfare – The Providence Journal

Posted: at 6:17 am

By J. William Middendorf| The Providence Journal

J. William Middendorf, who lives in Little Compton, served as Secretary of the Navy during the Ford administration. His recent book is "The Great Nightfall: How We Win the New Cold War."

Thirteen days passed in October 1962 while President John F. Kennedy and his advisers perched at the edge of the nuclear abyss, pondering their response to the discovery of Russian missiles in Cuba. Today, a president may not have 13 minutes. Indeed, a president may not be involved at all.

"Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world."

This statement from Russian President Vladimir Putin comes at a time when artificial intelligence is coming to the battlefield; some would say it is already there. Weapons systems driven by artificial intelligence algorithms will soon be making potentially deadly decisions on the battlefield. This transition is not theoretical. The immense capability of large numbers of autonomous systems represents a revolution in warfare that no country can ignore.

The Russian Military-Industrial Committee has approved a plan that would have 30% of Russian combat power consist of remote-controlled and autonomous robotic platforms by 2030. China has vowed to achieve AI dominance by 2030. It is already the second-largest R&D spender, accounting for 21% of the world's total of nearly $2 trillion in 2015. Only the United States, at 26%, ranks higher. If recent growth rates continue, China will soon become the biggest spender.

If China makes a breakthrough in crucial AI technology, whether satellites, missiles, cyber-warfare or electromagnetic weapons, it could result in a major shift in the strategic balance. China's leadership sees increased military use of AI as inevitable and is aggressively pursuing it. Zeng Yi, a senior executive at China's third-largest defense company, recently predicted that on future battlegrounds "there will be no people fighting," and that by 2025 lethal autonomous weapons would be commonplace.

Well-intentioned scientists have called for rules that will always keep humans in the loop in the military use of AI. Elon Musk, founder of Tesla, has warned that AI could be humanity's greatest existential threat and a potential trigger for a third world war. Musk is one of 100 signatories calling for a United Nations-led ban on lethal autonomous weapons. These scientists forget that countries like China, Russia, North Korea and Iran will use every form of AI if they have it.

Recently, Diane Greene, CEO of Google Cloud, announced that the company would not renew its contract to provide recognition software for U.S. military drones. Google had agreed to partner with the Department of Defense in a program aimed at improving America's ability to win wars with computer algorithms.

The world will be safer and more powerful with strong leadership in AI. Here are three steps we should take immediately.

Convince technology companies that refusal to work with the U.S. military could have the opposite effect of what they intend. If technology companies want to promote peace, they should stand with, not against, the U.S. defense community.

Increase federal spending on basic research that will help us compete with China, Russia, North Korea and Iran in AI.

Remain ever alert to the serious risk of accidental conflict in the military applications of machine learning or algorithmic automation. Ignorant or unintentional use of AI is understandably feared as a major potential cause of an accidental war.

Emily Murphy, Administrator Of The GSA Shares Her Thoughts On AI – Forbes

Posted: at 6:17 am

There is nothing ordinary about the year 2020, and in this highly charged political year, everything gets more attention than it might have in previous years. A few weeks ago, Emily Murphy, Administrator of the General Services Administration (GSA), started making waves in the news. For many who may not have known her before, she has been leading the GSA for a few years, helping to bring innovative programs and initiatives to the agency.

Among the many things Emily Murphy is responsible for, one of the most consequential every four years is the ascertainment of a presidential election winner so that a transition of power can proceed. That particular aspect of the GSA got more news coverage over the past few weeks than perhaps it ever has in recent history. Even those who deal with the government regularly might not have been aware of the pivotal role the GSA plays in elections.

Indeed, there are many other things the GSA is much better known for, from its role as the primary manager of real estate and buildings for the government to procuring billions of dollars of products, goods, and services. It's in that latter light that I had the opportunity to interview Emily on how the GSA is approaching artificial intelligence. Automation and AI have served a particularly important role at the GSA, impacting how it runs its operations and procures solutions on behalf of government agencies.

In a recent AI Today podcast, recorded right before the election, Emily Murphy shared her insights into how AI is transforming the federal government. In this article she further shares her insights into AI, the GSA, and the federal government.

During your time at the GSA you helped to launch the Centers of Excellence program. Can you share what the CoE is and how it's helping advance the use of data?

Emily Murphy, GSA Administrator

Emily Murphy: I was a senior advisor at GSA, prior to being confirmed as Administrator, and one of the areas I focused on was how GSA could better manage the intersection between contracting and technology innovation. GSA's Technology Transformation Services (TTS), which is part of our Federal Acquisition Service, worked with other agencies to launch the first Centers of Excellence in late 2017, with the first partner (also known as our lighthouse agency) being USDA. We have now announced ten partnerships with different agencies where we have a presence. The Centers of Excellence teams provide technical expertise in the following areas: cloud adoption, contact center, customer experience, data analytics, infrastructure optimization, and artificial intelligence.

I am especially excited about our AI CoE, which is the sixth and latest pillar in our Centers of Excellence program. With the AI CoE, TTS brings together machine learning, neural networks, intelligent process design and Robotic Process Automation (RPA) to develop AI solutions that address unique business challenges agency-wide. The team provides strategic tools and infrastructure support to rapidly discover use cases, identify applicable artificial intelligence methods, and deploy scalable solutions across the enterprise.

Another highlight has been our partnership with the Joint Artificial Intelligence Center (JAIC) at the Department of Defense. We recently marked the one-year anniversary of the JAIC-CoE partnership and released an announcement of our many achievements. One of the first things the CoE worked on was assisting in the creation of the First Five Consortium, a first-of-its-kind public-private partnership that seeks to augment humanitarian assistance and disaster response efforts with AI-enabled capabilities. The objective is to provide no-cost, AI-enhanced imagery to incident commanders and first responders within five minutes of a disaster to rapidly accelerate coordination times, saving lives and property. The CoE supported the JAIC in developing a robust AI Capability Maturity Model and AI Workforce Model to help gauge operational maturity across multiple functional areas.

What do you see as some of the unique opportunities the public sector has around AI?

Emily Murphy: There is a world of possibilities when it comes to the benefits of AI for the public sector. One core benefit of AI is that it allows the government to test concepts before spending money building them out. So, instead of having to build something to see if it works, we can now use computers to do the first level of testing virtually. AI, and specifically natural language processing (NLP), can be leveraged to streamline many processes that were previously manual or document-driven.

AI is also being leveraged by the federal government as part of a strategy to understand specific areas of need, such as rulemaking and regulatory reform. A few examples: currently we're using AI to analyze comments made during the public comment period of rulemaking, to update regulations so that they reflect today's technology and products, to identify areas of overlap and duplication in the federal code and in regulatory guidance and make it easier to streamline regulations, and to make predictions about the effect of regulations on stakeholders. AI can also be used to accelerate hiring and the onboarding process at federal agencies.
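To make the comment-analysis idea concrete, here is a minimal sketch of one common preprocessing step: grouping near-duplicate public comments (such as form-letter campaigns) so analysts can review one representative per group. It uses simple Jaccard similarity over word sets; a real system would use a trained NLP model, and all names, comments, and the threshold below are illustrative assumptions, not GSA's actual tooling.

```python
# Hypothetical sketch: triage public comments by grouping near-duplicates
# with Jaccard similarity over word sets. Illustrative only.

def tokens(text):
    """Lowercase word set for a comment."""
    return set(text.lower().split())

def jaccard(a, b):
    """Set similarity between two token sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_comments(comments, threshold=0.6):
    """Greedily assign each comment to the first group whose
    representative it resembles; otherwise start a new group."""
    groups = []  # list of (representative_tokens, [comments])
    for c in comments:
        t = tokens(c)
        for rep, members in groups:
            if jaccard(t, rep) >= threshold:
                members.append(c)
                break
        else:
            groups.append((t, [c]))
    return [members for _, members in groups]

comments = [
    "I oppose this rule because it burdens small businesses",
    "I oppose this rule because it burdens small business owners",
    "Please extend the comment period by 60 days",
]
print([len(g) for g in group_comments(comments)])  # → [2, 1]
```

The first two comments share most of their words, so they fall into one group; the unrelated request starts its own.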

We're focused on finding and creating smart systems, services, and solutions that make it easier to interact with the government on every level.

What do you see as some of the unique challenges the public sector has around AI?

Emily Murphy: A few of the challenges that the public sector faces when it comes to AI are: data cleanliness, managing data, and hiring tech savvy AI federal system owners to properly operate, manage, and evaluate the system.

There are also unresolved issues surrounding the responsible and ethical use of AI. These are hard problems that require thoughtful consideration and cooperation to solve. Responsibility and ethics are embedded in every stage of the AI development lifecycle, from design to development to deployment, and must include continuous monitoring and feedback collection to ensure a system is behaving as intended without causing harm or unintended consequences. It is not just a checklist you run through after a solution is developed.

Because the responsible use of AI occurs at every stage of the development lifecycle, we need to reframe what it means to use AI responsibly. People often speak of evaluating an AI system after it has been developed. We need to move away from that to a mindset where the use of AI is thoughtfully considered at every step. And this means that it's not just the job of an AI ethicist to ensure this. The technical developers of AI are making ethical choices as they build the system, so they need to understand how those technical decisions are also choices being made from an ethical perspective.

Partnership with private industry will be critical to ensure we are building responsible AI solutions. Government cannot buy a black box of technologies with no insight, explanation or oversight as to how it is operating. Government experts need to partner with industry as they build AI solutions to embed responsibility throughout. Monitoring, evaluating and updating models must be at the forefront of the process, not an afterthought once a solution is built.

Teams need to engage across the organization to establish oversight and audit procedures to ensure that AI and automation continue to perform as intended.
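The continuous-monitoring point above can be made concrete with a tiny drift check: compare a deployed model's recent positive-prediction rate against the rate observed at deployment time, and raise an alarm when they diverge. The function, data, and tolerance are illustrative assumptions, not a description of any agency's actual oversight procedure.

```python
# Minimal sketch of post-deployment model monitoring: alarm when the
# share of positive predictions drifts from the deployment baseline.

def drift_alarm(baseline_rate, recent_preds, tolerance=0.10):
    """True if the recent positive-prediction rate differs from the
    deployment-time baseline by more than `tolerance` (absolute)."""
    recent_rate = sum(recent_preds) / len(recent_preds)
    return abs(recent_rate - baseline_rate) > tolerance

recent = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% positive predictions
print(drift_alarm(0.30, recent))  # rate jumped from 30% to 80% → True
```

Real monitoring would track many signals (input distributions, error rates, fairness metrics), but even a one-number check like this turns "continuous monitoring" from a slogan into an auditable procedure.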

What are some of the ways GSA is currently leveraging AI and Machine Learning?

Emily Murphy: We have implemented automation, AI and machine learning in a variety of ways. Robotic Process Automation (RPA) is an automated scripting technique that is sometimes categorized in the same overarching ecosystem of technologies as AI. We have implemented an enterprise platform for RPA and have automated many processes. We continue to see the vendor community enhancing software offerings with new capabilities powered by AI and ML in areas such as anomaly detection, natural language processing, and image recognition. We see great value in using those advanced capabilities in the tools we already use and in new implementations. We are also growing our data science capabilities to use predictive analytics as an extension of our existing analytics and data management capabilities. We are accomplishing this through both investment in our staff's learning and by providing them with data science tools.
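As a hedged illustration of the anomaly-detection capability mentioned above, the sketch below flags outliers in a numeric series using the modified z-score (median and median absolute deviation), a robust textbook technique. The invoice figures and the 3.5 threshold are invented for illustration; the GSA's actual tooling is vendor software and is not described in this article.

```python
# Illustrative anomaly detection via the modified z-score
# (median / MAD), which resists masking by extreme outliers.
import statistics

def anomalies(values, threshold=3.5):
    """Return values whose modified z-score exceeds `threshold`."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread: nothing can be flagged
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

invoices = [102, 98, 105, 97, 101, 99, 103, 100, 5000]  # one outlier
print(anomalies(invoices))  # → [5000]
```

A plain mean/standard-deviation z-score would actually miss this outlier (the 5000 inflates the standard deviation enough to hide itself), which is why the median-based variant is the usual choice for small, dirty series.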

What are some ways GSA is hoping to leverage AI and Machine Learning in the next few years?

Emily Murphy: In general, GSA is looking to implement AI and Machine Learning technology provided by vendors for existing software, as well as implementing custom solutions using this technology. Here are a few examples of how we hope to leverage AI and Machine Learning:

How is the GSA engaging industry and private sector in your AI efforts?

Emily Murphy: The federal government is using crowdsourcing in dynamic ways to engage industry and subject matter experts across the USA to advance innovation with artificial intelligence. In fact, in just the past six months, challenge.gov has hosted over a dozen federally sponsored prize competitions that focus on the use of artificial intelligence.

This past summer, GSA hosted the AI/ML End User License Agreement (EULA) challenge which showed how industry could provide IT solutions by leveraging AI and ML capabilities within the acquisition business process. This was hosted on challenge.gov and received 20 AI and ML solutions from solvers.

Another exciting example is Polaris, for which we just issued a Request for Information. Polaris is a next-generation contract, worth $50 billion, aimed at small, innovative companies.

What is the GSA doing to develop an AI ready workforce?

Emily Murphy: We launched an AI Community of Practice to get smart people from across government talking and sharing best practices, then we set up an AI Center of Excellence to put their knowledge to work. This is how we lay the intellectual infrastructure needed to support the tens of thousands of federal workers, contractors, and citizens who will be working with this technology.

GSA is also very interested in ways to build up data science and AI skill sets across federal agencies, as well as engaging externally to attract additional data science and AI talent into government. The TTS CoE and AI Portfolio have hosted three webinars for federal employees focused on AI acquisitions, covering topics such as defining your problem for an AI acquisition, drafting objectives for your AI acquisition, and understanding data rights and IP clauses for AI acquisitions.

As well, GSA OCTO hosts a speaker series called Tech Talks for GSA employees. These are sessions designed to introduce and explain new and emerging technologies to the staff of GSA. Related Tech Talks have included an AI/ML overview and "Battle of the Bots." Our Chief Data Officer's organization operates a training program for GSA employees called the Data Science Practitioners Training Program. This program develops the core data science skills underlying effective implementation of AI and ML.

How important is AI to the GSAs vision of the future?

Emily Murphy: AI is a critical part of GSA's vision for the future, and it should be for all agencies. Advances in AI and ML have fundamentally changed the way private industry does business. Government should be leveraging AI technologies in a responsible manner to serve the people of this country. GSA specifically will be better able to support our partner agencies through faster, more efficient and more informed mechanisms with the support of AI.

How is AI helping with GSA's mission today?

Emily Murphy: Like many private companies, we have lots of work, more than our workforce can always handle. We have also discovered that many of our current processes exist simply because we have always done things that way. And when we would get new systems, we'd program them to automate the old process without thinking through whether there was any value in that and whether the process was one we wanted or needed. AI is allowing us to modernize our systems and processes, and shift from low-value work to high-value work. We have over 70 bots currently in operation that have saved over 260,000 hours. Here are a few examples:

What AI technologies are you most looking forward to in the coming years?

Emily Murphy: I'm looking forward to universal communication across languages around the world, and to the accelerated digitization and processing of government forms. Many AI technologies and tools hold the promise of helping the federal government become more efficient, reveal greater insight, and make better decisions. The key is to ensure that we leverage human and AI systems together.

AI and us – The Hindu

Posted: at 6:17 am

The dystopian society depicted in George Orwell's novel 1984 and his well-known phrase "Big Brother is watching you," implying relentless surveillance, have a particularly grim relevance in today's world. We are observed closely everywhere: be careful of what you do, follow accepted practice, and you are safe. Sure, but the growing surveillance, by what seems to be not just the government but the whole world, becomes a nightmare: the fear of fraudulent exploitation through easily available personal information garnered and misused via the Internet and other Artificial Intelligence-enabled means. Strangely enough, however, encroachment on privacy is nothing new, nor can its proliferation be blamed solely on advancements in AI. It is all just a continuing extension of the way we have always been, vastly compounded by fast-advancing technology: old wine in new bottles. Only, wine is not so dangerous.

In the film The Social Dilemma, the tech pundits who developed the technology used in social networking warn us of its harmful effect on people. The extent to which it can affect your life is startling. This increasing vulnerability to rapidly developing methods of communication and information technology invites some serious questions. How much personal independence and privacy must we sacrifice at the altar of progress? "All human beings have three lives: public, private, and secret," says Gabriel Garcia Marquez, a winner of the Nobel Prize for Literature. Lamentably, there is now just the public life, with private details and vital statistics laid bare through anything from nanny cams and cyberstalking to data mining. Throughout history, humankind has sought knowledge both to enable the achievement of objectives and as an end in itself, and this quest has driven the evolution of civilisation. But today there is much more to reckon with in our subjugation to AI.

Every smart child asserts that the Internet observes and manipulates you. Ordering pizza online? Hey presto, you are offered half a dozen bewildering alternatives. Some even claim that occasionally, just thinking of ordering something is followed instantly by the sudden appearance of suitable choices! Hmm, some sort of clandestine avant-garde telepathic detection, maybe? Not so bizarre if you consider the recent overwhelming progress in these areas. According to one report, leading scientists say neural interfaces that link human brains to computers using Artificial Intelligence will allow people to read others' thoughts. There could be severe risks if such technology falls into the wrong hands. It seems only sensible that methods of blocking undesired telepathy and of monitoring other privacy-violating software should be seriously considered.

Personal space

In view of the unparalleled ease of confidentiality infringement today, steps to safeguard personal space become essential. As Edward Snowden says, "Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say."

Increasing use of commercial cybersecurity software, customising confidentiality agreements, curbing excessive intrusion by the government, or becoming a complete maverick who shuns all virtual transactions seem to be some of the best options to guard against information leaks, from a layman's point of view. However, former American President James Madison said that the invasion of private rights is chiefly to be feared not from acts of the government contrary to the will of the people, but from acts of the government in which it is merely an instrument of its constituents. The most potent recourse for preserving privacy, therefore, is the will of a nation's people to use the government as their tool for the purpose.

Though the transformation of routine collection of essential personal information into a situation of potential misuse, abetted by rapidly advancing cybertechnology, may be merely another manifestation of the age-old human traits of inquisitiveness and greed, it clearly cannot be left unchecked. It rests with the community to use the administrative authority of the nation to check this growing threat. Importantly, where to draw the line between the preservation of privacy and the use of beneficial technology is the question that faces us now.

rosemary.isaac@gmail.com

The Present and Future of AI: A Discussion with HPC Visionary Dr. Eng Lim Goh – HPCwire

Posted: at 6:17 am

As HPE's chief technology officer for artificial intelligence, Dr. Eng Lim Goh devotes much of his time to talking and consulting with enterprise customers about how AI can benefit their business operations and products.

As the start of 2021 approaches, HPCwire sister publication EnterpriseAI spoke with Goh in a telephone interview to learn about his impressions and expectations for the still-developing technology as it continues to be used by HPE's customers.

Goh, who is widely known as one of the leading HPC visionaries today, has a deep professional background in AI and HPC. He was CTO for most of his 27 years at Silicon Graphics, joining HPE in 2016 when that company was acquired by HPE. He has co-invented blockchain-based swarm learning applications, overseen the deployment of AI for Formula 1 auto racing, and co-designed the systems architecture for simulating a biologically detailed mammalian brain. He has been named twice, in 2005 and 2015, to HPCwire's People to Watch list for his work. A Shell Cambridge University Scholar, he completed his PhD research and dissertation on parallel architectures and computer graphics, and holds a first-class honors degree in mechanical engineering from Birmingham University in the U.K.

This interview is edited for clarity and brevity.

EnterpriseAI: Is the development of AI today where you thought it would be when it comes to enterprise use of the technology? Or do we still have a way to go before it becomes more important in enterprises?

Dr. Eng Lim Goh: You do see variation across companies and industries. Some are deploying AI in a very advanced way now, while others are moving from proof of concept to production. I think it comes down to a number of factors, including which category they are in: are they coping with making decisions manually, or are they coping by writing rules into computer programs to help them automate some of the decision-making? If they are coping, then there is less of an incentive to move to machine learning and deep neural networks, other than the concern that competitors are doing so and will out-compete them.

There are some industries that are still making decisions manually or writing rules to automate some of that. There are others where the amount of data to be considered to make an even better decision would be insurmountable with manual decision-making and manual analytics. If you had asked me a few years back where things would be, I would have been conservative on one hand and very optimistic on the other, depending on the companies and industries.

EnterpriseAI: Are we at the beginning of AIs capabilities for business, or are we reaching the realities of what it can and cant do? Has its maturity arrived?

Goh: For some users it is maturing, if you are focused on how the machine can help you in decision support or, in some cases, take over some decision-making. That decision is very specific to an area, and you have to have enough data for it. I think things are getting very advanced now.

EnterpriseAI: What are AIs biggest technology needs to help it further solve business problems and help grow the use of AI in enterprises? Are there features and improvements that still must arrive to help deliver AI for industries, manufacturing and more?

Goh: At HPE, we spend a lot of our energy working with customers, deploying their machine learning, artificial intelligence and data analytics solutions. That's what we focus on: the use cases. Other, bigger internet companies focus more on the fundamentals of making AI more advanced. We spend more of our energy on the application of it. From the application point of view, some customer use cases are similar, but it's interesting that a lot of the time, the needs are in best practices.

On best practices: a lot of the time, proofs of concept succeed but then fail in their deployment into production, and often they fail for reasons other than the concept itself being a failure. A discipline like engineering develops over years and decades into fields such as computer engineering or programming, each with certain sets of best practices that people follow. The practice of artificial intelligence will develop the same way. That's part of the reason why we develop sets of best practices: first, to get from proof of concept to successful deployment, which is where we see a lot of our customers right now. We have one Fortune 500 customer, a large industrial customer, where the CTO/CIO invested in 50 proofs of concept for AI. We were called in to help, to provide guidance on how to pick from these proofs of concept.

A lot of the time they like to test whether, for a particular use case, it makes sense to apply machine learning in decision support. Then they will invest in a small team, give them funding and get them going. So you see companies doing proofs of concept; a medium-sized company might do one or two. The key, when I'm brought in to do a workshop with them on transitioning from proof of concept to deployment, is to look at the best practices we've gathered over the use cases we've done over the years.

One lesson is not to say that the proof of concept is successful until you also prove that you can scale it. You have to address the scale question at the beginning. One example: if you prove that 100 cameras work for facial recognition within certain performance thresholds, it doesn't mean the same concept will work for 100,000 cameras. You have to think through whether what you are implementing can actually scale. This is just one of the best practices that we have seen over time.
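Goh's camera example can be sketched as back-of-envelope capacity arithmetic: the pilot and production systems differ by a factor of 1,000 in compute demand, which is often where proofs of concept break. Every figure below (frame rate, per-frame GPU time, utilization) is an invented assumption for illustration, not a measurement from any real deployment.

```python
# Illustrative scaling estimate: GPUs needed for a video-analytics
# workload, under assumed (hypothetical) per-frame costs.
import math

def gpus_needed(cameras, fps, ms_per_frame, utilization=0.7):
    """GPUs required if each frame takes ms_per_frame of GPU time
    and each GPU is only `utilization` fraction usable in practice."""
    gpu_seconds_per_second = cameras * fps * (ms_per_frame / 1000.0)
    return math.ceil(gpu_seconds_per_second / utilization)

pilot = gpus_needed(cameras=100, fps=5, ms_per_frame=40)
production = gpus_needed(cameras=100_000, fps=5, ms_per_frame=40)
print(pilot, production)  # → 29 28572
```

Under these assumptions a 100-camera pilot fits on a couple of dozen GPUs, while the 100,000-camera version needs tens of thousands, plus the network, storage and operations to match, which is exactly the scale question the best practice says to ask on day one.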

Another best practice is that this AI, when deployed, must plug into the existing workflow in a seamless way, so the user doesn't even feel it. Also, you have to be very realistic. We have examples where teams promised too much at the beginning, saying they would deploy on day one. No: you set aside enough time for tuning, because this is a very new capability for many customers and you need to give them time to interact with it. So don't promise that you'll deploy on day one. Once you implement in production, allow a few months of interaction with the customer so they can find what their key performance indicators should be.

EnterpriseAI: Are we yet at a point where AI has become a commodity, or are we still seeing enterprise AI technology breakthroughs?

Goh: Both are right. For specific AI where you have good data to feed machine learning models or deep neural network models, the accuracy is quite high, to the point that people, after using it for a while, trust it. And it's quite prevalent, though some people think it is not prevalent enough to be a commodity. AI skills are like programming skills a few decades ago: they were highly sought after because very few people knew how to program. But after a few decades of prevalence, you now have enough people who can program. Perhaps AI will go the same way.

EnterpriseAI: Where do you see the biggest impacts of AI in business? Are there still many things that we havent seen using AI that we havent even dreamed up yet?

Goh: Any time you have someone making a decision, AI can be helpful as a decision-support tool. Then there's of course the question of whether you let the machine make the decision for you. In some cases, yes, in a very specific way and where the impact of a wrong decision is less significant. Treat AI as a tool, as you would treat automation as a tool. It's just another way to automate. If you look back decades, machine learning was already being used; it was just not called machine learning. It was a technique used by people doing statistics and applying analytics. There is definitely that overlap, where statistics overlaps with machine learning, and then machine learning stretches out to deep neural networks. We have reached a point where this method can work because we have enough data out there, and enough compute power to consume it, and can therefore get a neural network to tune itself to the point where it actually makes good decisions. Essentially, you are brute-forcing it with data. That's the overlap. We've been at it for a long time; we're just looking for new ways to automate.
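The statistics/ML overlap Goh describes has a canonical example: logistic regression fitted by gradient descent, which statisticians know as a generalized linear model and ML practitioners as a one-neuron network. The sketch below, with invented toy data, shows both readings are literally the same computation.

```python
# Logistic regression by batch gradient descent: simultaneously a
# classical statistical model and the simplest "neural network".
import math

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit y ≈ sigmoid(w*x + b) by minimizing cross-entropy loss."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted prob.
            grad_w += (p - y) * x / n
            grad_b += (p - y) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy binary outcome that flips around x = 0
xs = [-3, -2, -1, 1, 2, 3]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
predict = lambda x: 1.0 / (1.0 + math.exp(-(w * x + b)))
print(predict(-2) < 0.5 < predict(2))  # → True
```

Scale the inputs from one number to millions of pixels and stack many such units, and you have the deep neural networks Goh says are "brute-forced with data"; shrink it back down and it is the statistics that decision-support analysts have used for decades.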

EnterpriseAI: What interesting enterprise AI projects are you working on right now that you can share with us?

Goh: Two things are on the minds of most people now: COVID-19 vaccines, and back-to-work. These are two areas we have focused on over the last few months.

On the vaccine side: clinical trials and gene expression data, with analytics applied to them. We realized that analytics, machine learning and deep neural networks can be quite useful in making predictions based on gene expression data alone. Not just for clinical trials, but also to look ahead to the well-being of a person from a single sample. It requires highly skilled analytics, machine learning and deep neural network techniques to try to make predictions ahead of time, when you take a blood sample and measure the genes expressed in it.

The other area is back-to-work [after the COVID-19 shutdowns around the nation and world]. It's likely that the workplace has changed now. We call it the new intelligent hybrid workplace. By hybrid we mean that a portion of the workforce continues to be remote, while a portion of factory, manufacturing plant or office employees will return to their workplaces. But even on their return, depending on companies, communities, industries and countries, there'll be different requirements and needs.

EnterpriseAI: And AI can help with these kinds of things that we are still dealing with under COVID-19?

Goh: Yes. In certain jurisdictions, for example, if someone in a factory or an office is ill with the coronavirus, you are required to do specialized cleaning in the area around that high-risk person. Without a tool to assist, some companies clean their entire factory because they're not quite sure where that person has been. An office may have cleaned an entire floor, hoping the person didn't go to other floors. We built an in-building tracing system with our Aruba technology, using Bluetooth Low Energy tags talking to WiFi routers and access points. When you identify the particular quarter-sized Bluetooth tag that an employee carries, a floor plan immediately shows the hotspots and warm spots where cleaning services should be sent. You're very targeted with your cleaning. The names of the users of those tags are highly restricted for privacy.
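The hotspot idea Goh describes reduces to an aggregation over tag pings. Here is a minimal sketch: given records of (tag_id, zone, timestamp), rank the zones one tag visited by dwell pings so cleaning can be targeted. The record format, zone names, and ranking rule are assumptions for illustration; they are not Aruba's actual data model or API.

```python
# Hypothetical sketch of in-building trace aggregation: rank the zones
# a given tag was seen in, most-pinged first, to prioritize cleaning.
from collections import Counter

def zones_to_clean(pings, tag_id):
    """Zones visited by one tag, ordered by number of pings observed."""
    visits = Counter(zone for tid, zone, _ in pings if tid == tag_id)
    return [zone for zone, _ in visits.most_common()]

pings = [
    ("tag-17", "floor2-kitchen", "09:00"),
    ("tag-17", "floor2-kitchen", "09:05"),
    ("tag-17", "floor3-lab", "09:30"),
    ("tag-42", "floor1-lobby", "09:00"),
]
print(zones_to_clean(pings, "tag-17"))  # → ['floor2-kitchen', 'floor3-lab']
```

A real system would weight by dwell time and signal strength rather than raw ping counts, and, as Goh notes, would keep the mapping from tag to person tightly access-controlled.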

EnterpriseAI: Lets dive into the ethics of AI, which is a growing discussion. Do you have concerns about the ethics and policies of using AI in business?

Goh: Like many things in science and engineering, this is as much a social question as it is a technical one. I get asked this a lot by CEOs. Many times, from boards of directors and CEOs, this is the first question, because it affects employees, it affects the community they serve, and it affects their business. It's more a societal question than a technical one; that's what I always tell them.

And because of this, that's the reason you don't hear people giving you hard-and-fast rules on this issue. There needs to be a constant dialogue. It will vary by community and by industry: have a dialogue and then converge on a consensus. I always tell them to focus on understanding the differences between how a machine makes decisions and how a human makes decisions. Whenever we make a decision, there is an immediate link to the emotional side and to our capability to generalize. We apply judgment.

EnterpriseAI: What do you see as the evolving relationship between HPC and AI?

Goh: Interestingly, the relationship has been there for some time; it's just that we didn't call it AI. Let's take hurricane prediction, for example. In HPC, this is one of the stalwart applications of high-performance computing. You put your physics into a simulation on a supercomputer. Next, you measure where the hurricane is forming in the ocean. You then make sure you run your simulation faster than the hurricane that is coming at you. That's one of the major applications of HPC: building your model out of physics, then running the simulation from the starting conditions you've measured out in the ocean.

Machine learning and AI are now used to look at a simulation early on and predict the likelihood of failure. You are using history. People in weather and climate forecasting will already tell you that they're using this technique of making predictions from historical data. Today we are just formalizing it for other industries.
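The technique Goh names, predicting an outcome from historical runs rather than waiting for the full simulation, can be sketched with a nearest-neighbour lookup. All data here is invented for illustration: early pressure readings from hypothetical past simulations paired with each storm's eventual peak wind speed.

```python
# Hypothetical history: first few simulated pressure readings (hPa)
# paired with the storm's eventual peak wind speed (mph).
history = [
    ([1008, 1004, 999], 120),    # rapid pressure drop -> strong storm
    ([1010, 1009, 1008], 45),    # slow drop -> weak storm
    ([1007, 1002, 996], 140),
    ([1011, 1010, 1010], 30),
]

def predict_peak_wind(early_readings, history, k=2):
    """Predict the outcome by averaging the k most similar past runs."""
    def dist(a, b):
        # Squared Euclidean distance between early-reading sequences.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(history, key=lambda rec: dist(rec[0], early_readings))[:k]
    return sum(outcome for _, outcome in nearest) / k
```

A new run whose early readings drop quickly, say `[1009, 1005, 1000]`, lands nearest the two strong-storm records and is predicted strong, long before the physics simulation finishes. Production forecasting uses far richer models, but the history-to-prediction shape is the same.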

EnterpriseAI: What do you think of the emerging AI hardware landscape today, with established chip makers and some 80 startups working on AI chips and platforms for training and inference?

Goh: Through history, it's been the same thing. In the end, there will probably be tens of these chip companies. They have come up with different techniques; we're back to the Thinking Machines days, the vector machines, the RISC processors and so on. There's a proliferation of ideas about how to do this, and eventually a few of them will stand out. I believe there will be a clear demarcation between training and inference, because inference needs lower and lower energy, to the point that the vision should be for IoT devices to have some inference capability. That means you need to sip energy at a very low level. We're talking about an IoT tag, a Bluetooth Low Energy tag, with a coin-cell battery that should last two years. Today the tag that sends and receives the information has very little decision-making capability, let alone inference-level decision-making. In the future you want that to be an intelligent tag, too. So there will be a clear demarcation between inference and training.

EnterpriseAI: In the future, where do you see AI capabilities being brought into traditional CPUs? Will they remain separate or could we see chips combining?

Goh: I think it could go one way, or it could totally go the other way and everything gets integrated. If you look at historical trends, in the old days, when we built the first high-performance computers, we had one chip for our CPU, another chip on the board called the FPU, the floating-point unit, and a separate board for graphics. Over time the FPU got integrated into the CPU, and now every CPU has an FPU in it for floating-point calculations. Then there were networking chips on the outside; now we are starting to see networking incorporated into the CPU. But GPUs got so much more powerful, in a very specific way.

The big question is: will the CPU go into the GPU, or will the GPU go into the CPU? I think that will depend on a chip company's power and vision. But I believe integration, one way or the other, whether the CPU goes into the GPU or the GPU into the CPU, will be the case.

EnterpriseAI: What else should I be asking you about the future of AI as we look toward 2021?

Goh: I want to emphasize that many CEOs are keen to start with AI. They are in phase one, where it is important to understand that data is the key to training machines, and as such, data quality needs to be there. Quantity is important, but quality needs to be there too: trust in the data, and awareness of data bias.

We emphasize that 80% of the time should be spent on the data even before you start the AI project. Once you put in that effort, your analytics engine can make better use of it. If you are in phase one, that's what I would recommend. If you are at the proof-of-concept stage, then spend time in workshops to discuss best practices with those who have implemented AI extensively. And if you're at the advanced stage, if you know what you're doing, especially if you're successful, do take note that after a while, even with a good deployment, the accuracy of the prediction drops, so you have to continually retrain your machines. It is that practice I am most focused on.
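The retraining practice Goh describes usually takes the form of monitoring a deployed model's rolling accuracy and flagging when it falls below a threshold. A minimal sketch, with the window size, threshold, and class name all chosen here for illustration:

```python
from collections import deque

class DriftMonitor:
    """Track rolling accuracy of a deployed model; flag when retraining is due."""

    def __init__(self, window=100, threshold=0.9):
        # Keep only the most recent `window` prediction results.
        self.results = deque(maxlen=window)
        self.threshold = threshold

    def record(self, predicted, actual):
        self.results.append(predicted == actual)

    def accuracy(self):
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_retraining(self):
        # Only judge once the window has enough samples to be meaningful.
        return len(self.results) >= 20 and self.accuracy() < self.threshold
```

Each production prediction is scored against ground truth as it arrives; once rolling accuracy sags below the threshold, the flag trips and a retraining job on fresh data can be scheduled, which is the "continually retrain your machines" loop in practice.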

This article first appeared on sister website EnterpriseAI.news.

The Present and Future of AI: A Discussion with HPC Visionary Dr. Eng Lim Goh - HPCwire
