Unleashing the power of AI for education – MIT Technology Review

Artificial intelligence (AI) is a major influence on the state of education today, and the implications are huge. AI has the potential to transform how our education system operates, heighten the competitiveness of institutions, and empower teachers and learners of all abilities.

Dan Ayoub is general manager of education at Microsoft.

The opportunities for AI to support education are so broad that Microsoft recently commissioned research on this topic from IDC to understand where the company can help. The findings illustrate the strategic nature of AI in education and highlight the need for technologies and skills to make the promise of AI a reality.

The results showed almost universal acceptance among educators that AI is important for their future: 99.4% said AI would be instrumental to their institution's competitiveness within the next three years, with 15% calling it a game-changer. Nearly all are trying to work with it, too: 92% said they have started to experiment with the technology.

Yet most institutions still lack a formal data strategy or practical measures to advance AI capabilities, which remains a key inhibitor. The finding indicates that although the vast majority of leaders understand the need for an AI strategy, they may lack clarity on how to implement one. Or it could be that they simply don't know where to start.

David Kellermann has become a pioneer in how to use AI in the classroom. At the University of New South Wales in Sydney, Australia, Kellermann has built a question bot capable of answering questions on its own or delivering video of past lectures. The bot can also flag student questions for teaching assistants (TAs) to follow up. What's more, it keeps getting better at its job as it's exposed to more and different questions over time.

Kellermann began his classroom's transformation with a single Surface laptop. He's also employed out-of-the-box systems like Microsoft Teams to foster collaboration among his students. Kellermann used the Microsoft Power Platform to build the question bot, and he's also built a dashboard using Power BI that plots the class's exam scores and builds personalized study packs based on students' past performance.


Kellermann's project illustrates a key principle for organizations in nearly every industry when it comes to working with AI and machine learning: knowing where to start, starting small, and adding to your capabilities over time. The potential applications of AI are so vast that even the most sophisticated organizations can become bogged down trying to do too much, too soon. Often, it comes down to simply setting a small goal and building from there.

As an AI initiative gradually grows and becomes more sophisticated, it's also important to have access to experts who can navigate technology and put the right systems in place. To gain a foothold with AI, institutions need tools, technologies, and skills.

This is a big focus of our work at Microsoft: supporting educational institutions and classrooms. We've seen the strides some institutions have already taken to bring the potential of AI technologies into the classroom. But we also know there is much more work to do. Over the next few years, AI's impact will be felt in several ways: managing operations and processes, running data-driven programs to increase effectiveness, saving energy with smart buildings, and creating a modern campus with a secure and safe learning environment.

But its most important and far-reaching impact may lie in AI's potential to change the way teachers teach and students learn, helping maximize student success and preparing them for the future.

Collective intelligence tools will be available to save teachers time with tasks like grading papers so teachers and TAs can spend more time with students. AI can help identify struggling students through behavioral cues and give them a nudge in the right direction.

AI can also help educators foster greater inclusivity: AI-based language translation, for example, can enable more students from diverse backgrounds to participate in a class or listen to a lecture. Syracuse University's School of Information Studies is working to drive experiential learning for students while also helping solve real-world problems, such as Our Ability, a website that helps people with disabilities get jobs.


Schools can even use AI to offer a truly personalized learning experience, overcoming one of the biggest limitations of our modern, one-to-many education model. Kellermann's personalized learning system in Sydney shows that the technology is here today.

AI has the power to become a great equalizer in education and a key differentiator for institutions that embrace it. Schools that adopt AI in clever ways are going to show better student success and empower their learners to enter the work force of tomorrow.

Given its importance, institutions among that 92% should start thinking now about the impact they can achieve with AI technologies. Do you want to grade papers more quickly? Empower teachers to spend more time with students? Whatever it is, it's important to have that goal in mind, and then maybe dream a little.

This is a movement still in its early days, and there is an opportunity for institutions to learn from one another. As our customers build out increasingly sophisticated systems, Microsoft is learning and innovating along with them, helping build out the tools, technologies, and services to turn the vision for AI into reality.


The problem with AI: When hard skills are automated and soft skills are needed, the next generation is in big trouble – National Post

Artificial intelligence is approaching critical mass at the office, but humans are still likely to be necessary, according to a new study by executive development firm Future Workplace, in partnership with Oracle.

Future Workplace found an 18 per cent jump over last year in the number of workers who use AI in some facet of their jobs, representing more than half of those surveyed.

Reuters spoke with Dan Schawbel, the research director at Future Workplace and bestselling author of Back to Human, about the study's key findings and the future of work.

You found that 64 per cent of people trust a robot more than their manager. What can robots do better than managers and what can managers do better than robots?

What managers can do better are soft skills: understanding employees' feelings, coaching employees, and creating a work culture. These things are hard to measure, but they affect someone's workday.

The things robots can do better are hard skills: providing unbiased information, maintaining work schedules, solving problems, and maintaining a budget.

Is AI advancing to take over soft skills?

Right now, we're not seeing that. I think the future of work is that human resources is going to manage the human workforce, whereas information technology is going to manage the robot workforce. There is no doubt that humans and robots will be working side by side.

Are we properly preparing the next generation to work alongside AI?

I think technology is making people more antisocial as they grow up because they're getting it earlier. Yet the demand right now is for a lot of hard skills that are going to be automated. So eventually, when the hard skills are automated and the soft skills are more in demand, the next generation is in big trouble.

Which countries are using AI the most?

India and China, and then Singapore. The countries that are gaining more power and prominence in the world are using AI at work.

If AI does all the easy tasks, will managers be mentally drained with only difficult tasks left to do?

I think it's very possible. I always do tasks that require the most thought at the beginning of my day. After 5 or 6 o'clock, I'm exhausted mentally. But if administrative tasks are automated, potentially, the work day becomes consolidated.

That would free us to do more personal things. We have to see if our workday gets shorter if AI eliminates those tasks. If it doesn't, the burnout culture will increase dramatically.

Seventy percent of your survey respondents were concerned about AI collecting data on them at work. Is that concern legitimate?

Yes. You're seeing more and more technology vendors enabling companies to monitor employees' use of their computers.

If we collect data on employees in the workplace and make the employees suffer consequences for not being focused for eight hours a day, that's going to be a huge problem. No one can focus for that long. It's going to accelerate our burnout epidemic.

How is AI changing hiring practices?

One example is Unilever. The first half of their entry-level recruiting process is really AI-centric. You do a video interview and the AI collects data on you and matches it against successful employees. That narrows the pool of candidates. Then candidates spend a day at Unilever doing interviews, and a percentage get a job offer. That's machines and humans working side by side.


Massive AI Project Will Supercharge Tesla Stock – TheStreet

(Tech stock columnist Jon D. Markman publishes Strategic Advantage, a lively guide to investing in the digital transformation of business and society. Click here for a trial.)

Tesla (TSLA) is on the verge of a game-changing breakthrough in machine learning, yet the only thing people are talking about is its plan for a stupid humanoid robot.

Executives at the electric vehicle company on Thursday held an artificial intelligence day. The two-pronged goal of the event was to show off the company's AI progress and to recruit new engineers.

Plans went awry when Tesla-bot, a 5-foot-8 nonfunctional humanoid robot, appeared on stage.

This is why investors should consider buying Tesla shares anyway.

Let's be clear: the AI day presentation was mind-blowing. Tesla engineers are aiming so high it is hard to put the scale of innovation in perspective. Lex Fridman, an acclaimed AI researcher working at MIT and often a Tesla critic, characterized the event succinctly:

"Tesla AI day presented the most amazing real-world AI and engineering effort I have ever seen in my life."

In the past, Fridman criticized Elon Musk, Tesla's chief executive officer, for downplaying the difficulty of the full self-driving problem. In Fridman's view, the obstacles to FSD were so daunting that he didn't believe any firm could successfully navigate the landscape within the next five to ten years.

The Tesla AI day changed his mind. That is saying something.

Musk and his team completely reimagined computer vision by thinking exponentially bigger. Then they built models to collect and label the data, and a new processor to make sense of it all.

The idea of AI conjures up rooms full of computers deciphering data and making choices on the fly. In reality, Tesla still employs 1,000 engineers who manually label pedestrians and orange road cones. The latest software iteration is getting much better at auto-labeling. Musk said on Thursday that the neural network model is being completely retrained about every 2 weeks.

Processing is certain to improve when Tesla's Dojo computer is outfitted with the latest in-house chips, designed specifically to optimize neural networks. Engineers claim these breakthrough mega chips offer four times the performance of the current processors in one-fifth the footprint.

Fridman believes the virtuous cycle of data collection, labeling, model retraining, and redeployment will give Tesla a real fighting chance to finally solve FSD. This is potentially a $1 trillion opportunity.

Unfortunately, this monumental development is getting lost in the analysis of the Tesla bot.

Investors should focus.

Tesla is building a fully integrated AI powerhouse. Longer-term investors should consider buying any near-term weakness in its shares.


Newly-launched Boulder AI wants to help build the artificial … – The Denver Channel

BOULDER, Colo. A new startup in Boulder is looking to make a splash in the quickly-growing field of artificial intelligence.

Boulder AI just launched this week, billing itself as one of Colorado's only AI consulting companies. Boulder AI's aim is to help other businesses create artificial intelligence systems that can perform menial work that is costly or dangerous for humans.

"It's really [about] taking boring, monotonous tasks and allowing computers to do those tasks in an automated way," founder Darren Odom said.

Odom says his company offers a breadth of experience that other companies in the AI space can't match. While most companies focus on either hardware or software, Boulder AI is firmly focused on both.

"We have both hardware and software experience that we can bring to the table," Odom said.

For example, Boulder AI has already built a rugged, AI-equipped camera that the company says has a wide range of uses, from counting wildlife and identifying fish to tracking customer behavior in a retail setting.

"Our team can design AI hardware, with either on-board or cloud processing. We've also got a deep bench of AI software developers," Odom said.

Boulder AI joins a quickly growing field of tech companies calling the city home. Odom said he knew Boulder was the perfect place to set up shop, in large part because of its huge pool of tech talent.

"Boulder is this amazing hub of really intelligent folks and it's really, I believe, the new Silicon Valley," Odom said. "I can't imagine launching Boulder AI anywhere else."

Learn more about the company at boulderai.com.

Visit link:

Newly-launched Boulder AI wants to help build the artificial ... - The Denver Channel

Why Micron is Getting into the AI Accelerator Business – The Next Platform

Micron has a habit of building interesting research prototypes that offer a vague hope of commercialization, for the sheer purpose of learning how to tune its own memory and storage subsystem approaches to next-generation applications.

We saw this a few years ago with the Automata processor, a neuromorphic-inspired bit of hardware focused on large-scale pattern recognition. That project has since folded internally and moved into a privately funded effort from a startup aiming to make it market-ready, which is to say that it has all but disappeared from view in the couple of years since.

There is more here for anyone interested in the Automata architecture, but for those curious about why Micron wants to get into the accelerator business with one-off silicon projects like that, or its newly announced deep learning accelerator (DLA) for inference, it's far less about commercial success than about learning how to tune memory and storage systems for AI on custom accelerators. In fact, the market viability of such a chip would be a delightful bonus, since the real value is getting a firsthand understanding of what deep learning applications need out of memory and storage subsystems.

This deep learning accelerator might be counted among those on the market (and that's a list too long to keep these days), but we do not expect the company to make a concentrated push to go after a large share. This is for the same reason we don't expect much to emerge into IBM's product line from its research divisions. They are all efforts to build better mainstream products. If there is commercial gain, great, but it is not the wellspring of motivation.

Nonetheless, it is worth taking a quick look at what Micron has done with its inference accelerator since it could set the tone for what we may see in other products functionally, especially for inference at the edge.

Last year, Micron bought a small FPGA-based startup that spun out of Purdue University called FWDNXT (as in "Forward Next"). It also acquired FPGA startup Pico Computing in 2015 and has since been hard at work looking at where reprogrammable devices will fit for future applications and what to bake into memory to make those applications perform better and more efficiently.

The FWDNXT technology is at the heart of Micron's new FPGA-based deep learning accelerator, which gets some added internal expertise from Micron via the Pico assets. The architecture is similar to what we've seen in the market over the last few years for AI: a sea of multiply/accumulate units geared toward matrix-vector multiply, plus the ability to do some of the key non-linear transfer functions. Micron took the FWDNXT platform against some tough problems and worked to do things like build tensor primitives inside the memory (so instead of floating-point-based scatter/gather, the accelerator can fetch a matrix sitting in a buffer rather than going over memory). Micron has also used the platform to build a software framework that is hands-off from an FPGA programming perspective: the user just specifies the neural network.
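The compute pattern described above, an array of multiply/accumulate (MAC) units performing a matrix-vector product followed by a non-linear transfer function, can be sketched in a few lines. This is an illustrative model of the general technique, not Micron's actual design; the shapes and values are made up.

```python
import numpy as np

def mac_matvec(weights, x):
    """Matrix-vector multiply written as explicit multiply/accumulate steps,
    mirroring what each MAC unit in an accelerator array would do."""
    rows, cols = weights.shape
    out = np.zeros(rows)
    for i in range(rows):
        acc = 0.0
        for j in range(cols):
            acc += weights[i, j] * x[j]  # one MAC operation
        out[i] = acc
    return out

def relu(v):
    # One common example of the "non-linear transfer functions" applied
    # after the accumulate stage.
    return np.maximum(v, 0.0)

# Toy weights and input vector, purely for illustration.
W = np.array([[1.0, -2.0], [0.5, 0.5]])
x = np.array([3.0, 1.0])
y = relu(mac_matvec(W, x))
print(y)  # [1. 2.]
```

In real hardware the inner loop runs in parallel across the MAC array, and the tensor primitives the article mentions would hand each unit its operands directly from a buffer rather than through general-purpose memory accesses.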

Micron wants to target energy efficiency by going to the heart of the problem, data movement, with the performance goal of better memory bandwidth. All of this creates an accelerator that can be useful, but Micron was better able to see how to create future memory by working with FWDNXT to get the device ready.

"It became obvious that if we are tasked with building optimized memory and storage, we need to come up with what is optimal rather than just throwing in a bag of chips and hoping it works," explains Steve Pawlowski, VP of Advanced Technology at Micron. "We are learning about what we need to do in our memory and storage to make them a fit for the kinds of hard problems in neural networks we see ahead, especially at the edge."

Pawlowski is one of the leads behind some of Micron's most notable efforts in creating specialized or novel architectures like Automata. He previously led architectural research initiatives at Intel, where part of his job was to look at how prototype chips were solving emerging problems in interesting ways and whether those architectures held promise or competitive value. In the process, he developed an eye for building out new programs at Micron that took a research concept and tested its viability and role in using or improving memory devices.

"By not having observability into the various networks on the compute side, we could only guess if the things we were building into memory would be useful," Pawlowski says. "The only way we could get real observability into how neural networks are executing was to have the entire pipeline so we could go in and instrument every piece of it. This is how we end up making better memory."

He adds that they build this base of knowledge by looking at some of the most complex problems and architecting from there, including with a cancer center that is doing disease detection at scale, where accuracy is the biggest challenge. They've also been working with a very large high-energy physics entity (venture a guess) where the drivers are performance and latency. By taking a view of solving problems with different optimization points (accuracy versus raw performance), Micron is hoping to strike a balance that can inform next-generation memory.

During these research and productization experiments, Micron does get a forward look at what future memory might need for a rapidly evolving set of workloads like AI.

The funny thing is, what they're learning is the inherent value of what Micron already built as a commercial product several years ago: something that had great potential but strong competition. That would be the hybrid memory cube (HMC), which has since been folded as a product as Micron focuses on what is next for that concept of memory stacked on top of logic.

As Micron looks at AI workloads, the potential for this exact thing, which exists in plenty of devices now as rival HBM, grows even larger, even for inference. It might sound heavy-handed for an energy-efficiency-focused set of workloads, but more demands from the inference side will mean greater compute requirements. Doing all of that in a stacked memory device at the edge might seem like an expensive stretch, but Pawlowski says this is what he sees in his crystal ball.

"There may be a renaissance of memory stacked on logic doing neural networks at the edge. The need for higher memory bandwidth will matter more in the years ahead. There will also be a need to reduce memory interconnect power," Pawlowski says, adding, "I believe there will come a day when an architecture that is in the HMC style will be the right thing. By then it might not just be a memory device, it could be an accelerator. There will be other capabilities that come along there as well, including better ECC, for instance."

It's hard to tell where the research ends and the commercial potential begins with some of Micron's research efforts for new chips or accelerators. If these indeed flow into the next instantiation of HMC, whatever that might be, this is an interesting backstory. But when it comes to innovating in memory in a meaningful way that captures what AI chips need now, the market might move on before Micron has a chance to intercept it, with who knows what analog and other inference devices at the fore.



MIT and Google researchers have made AI that can link sound, sight, and text to understand the world – Quartz

If we ever want future robots to do our bidding, they'll have to understand the world around them in a complete way: if a robot hears a barking noise, what's making it? What does a dog look like, and what do dogs need?

AI research has typically treated the ability to recognize images, identify noises, and understand text as three different problems, and built algorithms suited to each individual task. Imagine if you could only use one sense at a time, and couldn't match anything you heard to anything you saw. That's AI today, and part of the reason why we're so far from creating an algorithm that can learn like a human. But two new papers from MIT and Google explain first steps for making AI see, hear, and read in a holistic way, an approach that could upend how we teach our machines about the world.

"It doesn't matter if you see a car or hear an engine, you instantly recognize the same concept. The information in our brain is aligned naturally," says Yusuf Aytar, a postdoctoral AI researcher at MIT who co-authored the paper.

That word Aytar uses, "aligned," is the key idea here. Researchers aren't teaching the algorithms anything new, but instead creating a way for them to link, or align, knowledge from one sense to another. Aytar offers the example of a self-driving car hearing an ambulance before it sees it. Knowing what an ambulance sounds like, looks like, and does could allow the self-driving car to prepare for other cars around it to slow down and move out of the way.

To train this system, the MIT group first showed the neural network video frames that were associated with audio. After the network found the objects in the video and the sounds in the audio, it tried to predict which objects correlated to which sounds. At what point, for instance, do waves make a sound?

Next, the team fed images with captions showing similar situations into the same algorithm, so it could associate words with the objects and actions pictured. Same idea: first the network separately identified all the objects it could find in the pictures, and the relevant words, and then matched them.

The network might not seem incredibly impressive from that description; after all, we have AI that can do those things separately. But when trained on audio/images and images/text, the system was then able to match audio to text, even though it had never been trained to know which words correspond to which sounds. Researchers claim this indicates the network had built a more objective idea of what it was seeing, hearing, or reading, one that didn't entirely rely on the medium it used to learn the information.
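The matching step described above can be sketched with a toy shared embedding space. In the actual research, separate neural encoders are trained so that co-occurring inputs from different modalities land near each other; here, hand-made unit vectors stand in for encoder outputs, and every name and number is illustrative rather than taken from the papers.

```python
import numpy as np

def embed(v):
    # Unit-normalize so that a dot product acts as cosine similarity.
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Pretend encoder outputs in a shared space: "dog" concepts cluster
# together, "wave" concepts cluster together. Entirely fabricated values.
audio_embeddings = {
    "barking_sound": embed([0.9, 0.1, 0.0]),
    "crashing_wave_sound": embed([0.0, 0.2, 0.95]),
}
text_embeddings = {
    "dog": embed([0.85, 0.15, 0.05]),
    "ocean wave": embed([0.05, 0.25, 0.9]),
}

def closest_text(audio_key):
    """Match a sound to a word purely through the shared space, even though
    audio and text were never paired directly during training."""
    a = audio_embeddings[audio_key]
    return max(text_embeddings, key=lambda t: float(a @ text_embeddings[t]))

print(closest_text("barking_sound"))        # dog
print(closest_text("crashing_wave_sound"))  # ocean wave
```

The point of the sketch is the transfer: because audio and text were each aligned to images during training, they end up comparable to each other in the shared space with no direct audio-to-text supervision.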

One algorithm that can align its idea of an object across sight, sound, and text can automatically transfer what it's learned from what it hears to what it sees. Aytar offers the example that if the algorithm hears a zebra braying, it assumes that a zebra is similar to a horse.

"It knows that [the zebra] is an animal, it knows that it generates these kinds of sounds, and kind of inherently it transfers this information across modalities," Aytar says. These kinds of assumptions allow the algorithm to make new connections between ideas, strengthening its understanding of the world.

Google's model behaves similarly, with the addition of being able to translate text as well. Google declined to provide a researcher to talk more about how its network operates. However, the algorithm has been made available online to other researchers.

Neither of these techniques from Google or MIT actually performed better than the single-use algorithms, but Aytar says this won't be the case for long.

"If you have more senses, you have more accuracy," he said.


The AI Cosmos: Intelligent Algorithms Begin Processing the Universe – The Daily Galaxy – Great Discoveries Channel

In June 2020, NASA announced that intelligent computer systems will be installed on space probes to direct the search for life on distant planets and moons, starting with the 2022/23 ESA ExoMars mission, before moving on to moons such as Jupiter's Europa and Saturn's Enceladus and Titan.

"This is a visionary step in space exploration," said NASA researcher Victoria Da Poian. "It means that over time we'll have moved from the idea that humans are involved with nearly everything in space, to the idea that computers are equipped with intelligent systems, and they are trained to make some decisions and are able to transmit in priority the most interesting or time-critical information."

When first gathered, the data produced by the Mars Organic Molecule Analyzer (MOMA), a toaster-sized life-searching instrument, will not shout out "I've found life here," but will give us probabilities which will need to be analyzed, says Eric Lyness, software lead in the Planetary Environments Lab at NASA's Goddard Space Flight Center. We'll still need humans to interpret the findings, but the first filter will be the AI system.


Classifying Galaxies

"If artificial intelligence can search for alien life, it should be able to distinguish galaxies with spiral patterns from galaxies without spiral patterns," said Ken-ichi Tadaki of the National Astronomical Observatory of Japan (NAOJ), who came up with the idea. Using training data prepared by humans, the AI successfully classified galaxy morphologies with an accuracy of 97.5%. Applying the trained AI to the full data set, the team identified spirals in about 80,000 galaxies.

The NAOJ research group applied a deep-learning technique, a type of AI, to classify galaxies in a large dataset of images obtained with the Subaru Telescope. Thanks to the telescope's high sensitivity, as many as 560,000 galaxies have been detected in the images. It would be extremely difficult to visually process this many galaxies one by one with human eyes for morphological classification. The AI enabled the team to perform the processing without human intervention.
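The workflow here is the standard supervised one: train on a human-labeled subset, then apply the trained model to the full survey. The real work used a deep convolutional network on Subaru images; the sketch below substitutes a nearest-centroid classifier on made-up two-number "features" (say, arm strength and light concentration) purely to show the train-then-apply shape. All data is fabricated for illustration.

```python
import numpy as np

# Human-labeled training examples: feature vectors plus morphology labels.
train_features = np.array([[0.9, 0.2], [0.8, 0.3], [0.1, 0.8], [0.2, 0.9]])
train_labels = np.array(["spiral", "non-spiral", "non-spiral", "spiral"][::-1])

# Re-declare labels plainly to match the features row by row.
train_labels = np.array(["spiral", "spiral", "non-spiral", "non-spiral"])

# "Training": compute one centroid per human-assigned class.
centroids = {lab: train_features[train_labels == lab].mean(axis=0)
             for lab in np.unique(train_labels)}

def classify(feature):
    """Assign the class whose centroid is nearest, standing in for
    inference over the full 560,000-galaxy data set."""
    return min(centroids,
               key=lambda lab: np.linalg.norm(feature - centroids[lab]))

# "Survey" galaxies the model has never seen.
survey = np.array([[0.85, 0.25], [0.15, 0.85]])
print([classify(f) for f in survey])  # ['spiral', 'non-spiral']
```

Swapping the centroid rule for a convolutional network changes the model, not the pipeline: human labels in, trained classifier out, then unattended classification at survey scale.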

"To find the very faint, rare galaxies, deep, wide-field data taken with the Subaru Telescope was indispensable," said Dr. Takashi Kojima, speaking about the big data captured this June and the power of machine learning, which led to the discovery of a galaxy with an extremely low oxygen abundance of 1.6% of the solar abundance, breaking the previous record for the lowest oxygen abundance. The measured oxygen abundance suggests that most of the stars in this galaxy formed very recently.

Automated processing techniques for extraction and judgment of features with deep-learning algorithms have been rapidly developed since 2012. Now they usually surpass humans in terms of accuracy and are used for autonomous vehicles, security cameras, and many other applications.

Now that this technique has been proven effective, it can be extended to classify galaxies into more detailed classes by training the AI on a substantial number of galaxies classified by humans.

NAOJ is running a citizen-science project, GALAXY CRUISE, in which citizens examine galaxy images taken with the Subaru Telescope to search for features suggesting that the galaxy is colliding or merging with another galaxy. The advisor of GALAXY CRUISE is NAOJ associate professor Masayuki Tanaka.

"The Subaru Strategic Program is serious Big Data containing an almost countless number of galaxies. Scientifically, it is very interesting to tackle such big data with a collaboration of citizen astronomers and machines," said NAOJ associate professor Masayuki Tanaka. "By employing deep learning on top of the classifications made by citizen scientists in GALAXY CRUISE, chances are we can find a great number of colliding and merging galaxies."


A First AI Step for Homo Sapiens?

"There's currently an AI revolution, and we see artificial intelligence getting smarter and smarter by the day," says Susan Schneider, an associate professor of cognitive science and philosophy at the University of Connecticut who has written about the intersection of SETI and AI. "That suggests to me something similar may be going on at other points in the universe. Once a society creates the technology that could put them in touch with the cosmos, they are only a few hundred years away from changing their own paradigm from biology to AI."

As SETI Institute astronomer Seth Shostak suggests, looking far into our future, planets are volatile, prone to eruptions and earthquakes and the effects of an aging star. "Machines aren't necessarily going to stay on a planet," he says. "Planets are dangerous for machines."

The Daily Galaxy via NAOJ and Goldschmidt Conference

Image Credit: NAOJ/HSC-SSP


AI Spotlight: Paul Scharre On Weapons, Autonomy, And Warfare – Forbes

Paul Scharre / Senior Fellow & Director at CNAS

Paul Scharre is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security. He is the award-winning author of Army of None: Autonomous Weapons and the Future of War, which won the 2019 Colby Award and was named one of Bill Gates' top five books of 2018.

Aswin Pranam: To start, what qualifies as an autonomous weapon?

Paul Scharre: An autonomous weapon, quite simply, makes its own decisions of whom to engage on the battlefield. The core challenge is in figuring out which of those decisions matter. For example, modern-day missiles and torpedoes maneuver on their own to course-correct and adjust positioning. Do these decisions matter on a grand scale? Not so much. But munitions that can make kill decisions on their own, without human supervision, matter a great deal.

Pranam: Why should the average citizen care about autonomous weapons?

Scharre: Everyone will have to live in the future we're building, and we should all have a personal stake in what that future looks like. The question of AI being used in warfare isn't a question of when, but rather a question of how. What are the rules? What is the degree of human control? Who sets those rules? There is a real possibility that militaries transition to a world in which human control over war is significantly reduced, and that could be quite dangerous. So, I think engaging in broad conversation internationally and bringing together nations, human rights groups, and subject matter experts (lawyers, ethicists, technologists) to have a productive dialogue is necessary to chart the right course.

Pranam: People concerned about the destructive power of AI weapons want to halt development completely. Is this a realistic solution?

Scharre: We can't stop the underlying technology from being developed because AI and automation are dual-use. The same sensors and algorithms that prevent a self-driving car from hitting pedestrians may also enable an autonomous weapon in war. The basic tools to build a simple autonomous weapon exist today and can be found freely online. If a person is a reasonably competent engineer and has a bit of free time, they could build a crude autonomous drone that could inflict harm for under a thousand dollars.

The open question is still what militaries choose to build with regard to their weapons arsenals. If you look at chemical and biological weapons as a parallel, certain rogue states develop and use them, but the majority of civilized countries have generally agreed not to move forward with development. Historically, attempts to control technology have been a mixed bag, with some successes and many failures.

Pranam: In the United States, the International Traffic in Arms Regulations (ITAR) compliance framework controls and restricts the export of munitions and military technologies. Do you believe AI should fall under the export-restricted category?

Scharre: This is an area with a lot of active policy debate in both Washington, DC, and the broader tech community. Personally, I don't see it as being realistic in most cases. However, there's room for non-ITAR export controls that restrict the sale of AI technologies to countries engaged in human rights abuses. We shouldn't have American enterprises or universities enabling foreign entities that will repurpose the technology to suppress their citizens.

By and large, as far as the access to research is concerned, the AI world is very open. Papers are published online and breakthroughs are freely shared, so it is difficult to imagine components of AI technology being controlled under tight restrictions. If anything, I could see sensitive data sets being ITAR controlled, along with potentially specialized chips or hardware used exclusively for military applications, if they were developed.

Pranam: Conversations around AI and morality generally go hand-in-hand. In your research, have you encountered any examples of AI weapons today that use ethical considerations as an input to decision making?

Scharre: I haven't, and I don't fully agree with the characterization that AI needs to engage in moral reasoning in the human sense. We should focus attention on outcomes. We want behavior from AI that is consistent with human ethics and values. Does this mean that machines need to reason about or understand abstract ethical concepts? Not necessarily. We do, however, need to control the manifestation of external behavior in machines and reassert human control in cases where ethical dilemmas present themselves. Commercial airliners use autopilots to improve safety. An autopilot doesn't need to have moral reasoning programmed in to get to a moral outcome: a safe plane flight. In a similar vein, self-driving cars will pilot themselves with the primary outcome of driving safely. Programming in ethical scenarios to manage variations of the trolley problem is largely a red herring.

More:

AI Spotlight: Paul Scharre On Weapons, Autonomy, And Warfare - Forbes

ITRI Exhibits Innovations in AI, Robotics and e-Health at CES 2021 – PRNewswire

Its highlight technologies include the Dual Arm Robot System (DARS); the Self-Learning Battery Management System (SL-BMS); the iCardioGuard wearable physiological and psychological monitoring system; the Handheld Skin Quality Optical Coherence Tomography device for skin quality analysis; and the Sleep Learning Technology (SLT) for learning and memory improvement during sleep.

WHEN: Monday, January 11-Thursday, January 14, 2021.

WHERE: https://event.itri.org/CES2021

PRESS KIT: https://ces.vporoom.com/ITRI

About ITRI
Industrial Technology Research Institute (ITRI) is one of the world's leading technology R&D institutions aiming to innovate a better future for society. Founded in 1973, ITRI has played a vital role in transforming Taiwan's industries from labor-intensive into innovation-driven. To address market needs and global trends, it has launched its 2030 Technology Strategy & Roadmap and focuses on innovation development in Smart Living, Quality Health, and Sustainable Environment. It also strives to strengthen Intelligentization Enabling Technology to support diversified applications.

Over the years, ITRI has been dedicated to incubating startups and spinoffs, including well-known names such as UMC and TSMC. In addition to its headquarters in Taiwan, ITRI has branch offices in the U.S., Europe, and Japan in an effort to extend its R&D scope and promote international cooperation across the globe. For more information, please visit https://www.itri.org/eng.

SOURCE Industrial Technology Research Institute (ITRI)

https://www.itri.org/eng

See more here:

ITRI Exhibits Innovations in AI, Robotics and e-Health at CES 2021 - PRNewswire

ThoughtSpot Co-Founder Ajeet Singh – we need trust in AI, but what kind of trust is just as important – Diginomica

(via Pixabay)

Can we trust Artificial Intelligence (AI)? It's one of the big questions of our age, but the best response to it is to first understand that trust in AI takes many forms.

For example: trust that the algorithm is not automating historic or systemic bias, after being trained on flawed or biased data; trust that its processes are transparent and explainable, rather than locked in a black box; trust that the system is well designed and reflects the diversity of the real world; and trust that its users have not abdicated from common sense and personal responsibility by adopting it.

We also need to trust that they have our best interests at heart and are not excluding groups or individuals deliberately or accidentally; trust that the picture an AI presents is fair and accurate, and will not create a cascade of problems in our lives; trust that it will help us perhaps to be fitter, healthier, and happier; and simply trust that its reliable and it works.

Hysterical media coverage of malignant AIs and job-stealing robots has contributed to public unease about the rise of intelligence in technology, abetted by decades of sci-fi dystopias. The latter invariably present the march of progress as something to fear, rather than as something that might, say, speed cures for killer diseases or help us use natural resources more sustainably and responsibly. A lack of trust helps no one.

Ajeet Singh is Executive Chairman and Co-Founder of AI company ThoughtSpot, which he co-founded to be a self-styled "Google for numbers." The aim was to democratize organizations' structured data by putting the underlying facts and figures in the hands of decision makers. It was a tall order, he explains:

It's easier said than done, because it is not about just taking some off-the-shelf NLP libraries and slapping them on top of a database. Search and NLP have been done for unstructured data, but when you apply it to structured data, those technologies don't work in search.

Google ranks answers, but the onus is on the user to pick the right one. When you're dealing with numbers, you have to provide people with precise answers.

This is another part of the challenge of building trust in, and into, intelligence-infused technologies. While Google may have been set up to make information easier to find, the knock-on effect has been people making information easier for Google to find, gaming its algorithms and distorting the nature of information in the process. Google itself is an advertising behemoth; none of this engenders trust.

Yet when it comes to the kind of structured data that ThoughtSpot aims to address, there is a different set of challenges, says Singh. One is that analysts and data scientists waste too much of their time organising and cleaning data, and spend precious little of it diving deep to retrieve its value:

Business people should be able to answer their own questions. There are a couple of million analysts in the world. They tend to be well-educated, well-trained people, but often their job is reduced to just building a pie chart. They are grossly under-utilized; they should be doing deeper analysis and finding new opportunities. Meanwhile, the business keeps waiting. So we said, if we empower the business directly, we can uplift both of these communities, the business user community as well as the analysts.

Trust is core to this issue, says Singh, because new technologies succeed or fail on whether people trust them in all of the ways outlined above:

We know in real life that we build relationships with humans that we trust. I fundamentally believe that of the AI-based technologies that are being created now, the ones that can create that trust with the users (a virtuous cycle of trust) are the ones that will grow. If they don't have that trust, they will die. I don't subscribe to a dystopian view of the world. I think that it's going to be a partnership between humans and machines. But there needs to be that trust.

For Singh, there are four principles to building trust in AI and related technologies which he calls the STAR model: security, transparency, accuracy, and relevance. After all, another element of building trust in intelligent systems involves them not filling our lives with noise:

Users need to trust that you're going to treat the data with the respect it deserves, which means it needs to be safe and secure with you. If you're using AI to build, say, healthcare applications where you're actually making medical recommendations, the case for transparency is very strong. You need to explain how the AI is working, how it is coming up with recommendations or decisions.

The end user may not be in a position to understand a complex technology, so a certain level of transparency is essential in how it is working, how the decisions are being made, and how you're training it.

Technology innovation has been ahead of societal understanding of how information is being used and how it is being processed, as we saw in the case of Facebook. Sometimes computer scientists don't understand the implications of their own creations, because technologies can now get adopted at a scale that has never happened before.

Not everything should be regulated, because then we will slow down progress. But there needs to be accountability, and I would think that transparency is more important than regulation. As long as there is transparency in the system.

But is slowing down the headlong rush to AI necessarily a bad thing? Wouldn't it be better to adopt it smartly, appropriately, manageably, and sensibly, rather than in a tactical arms race? He says:

I always like to make the distinction between driving fast and driving rash. You can drive fast, but you're still within limits, you're still aware of others, you have responsibility to others that are driving on the road with you.

Take AI in healthcare. There is a huge opportunity, such as what is going on right now. If we [the technology community] can use AI to accelerate the discovery and development of a vaccine, that would be an amazing thing, and I wouldn't want that to be slowed down at all.

It doesn't mean that technology companies or society should move irrationally, but we should move fast. I really believe that there is still so much to be done as a society, particularly when it comes to healthcare, or education in developing countries. There is a lot of opportunity to uplift society at large.

I feel that the regulators and the government have been very reactive, but the technology industry hasn't taken the responsibility of bringing them along. And they also haven't taken the proactive effort to say, "We need to be partners. We can make a very positive impact on the world." At least, that's my personal view.

Continue reading here:

ThoughtSpot Co-Founder Ajeet Singh - we need trust in AI, but what kind of trust is just as important - Diginomica

Two US Army projects seek to improve comms between soldiers and AI – C4ISRNet

WASHINGTON A pair of artificial intelligence projects from U.S. Army researchers are easing communication barriers that limit the relationship between AI systems and soldiers.

The artificial intelligence projects are designed to support ongoing efforts for the Army's next-generation combat vehicle modernization priority, which includes a focus on autonomous vehicles and AI-enabled platforms.

The first project, named the Joint Understanding and Dialogue Interface, or JUDI, is an AI system that can understand the intent of a soldier when that individual gives a robot verbal instructions. The second project, Transparent Multi-Modal Crew Interface Designs, is meant to give soldiers a better understanding of why AI systems make decisions.

"We're attacking a similar problem from opposite ends," said Brandon Perelman, a research psychologist at the Army Research Laboratory who worked on Transparent Multi-Modal Crew Interface Designs.

The JUDI system will improve soldiers' situational awareness when working with robots because it will transform that relationship from a "heads down, hands full" to a "heads up, hands free" interaction, according to Matthew Marge, a computer scientist at the lab. Simply put, this means that soldiers will be more aware of their surroundings, he said.

Natural language AI systems are available on the commercial market, but the JUDI system requires a level of awareness in a physical environment that isn't matched by commercial products. Commercial systems can understand what a person is saying and take instructions, but they don't know what is going on in the surrounding area. For the Army, the autonomous system needs to know that.

"You want a robot to be able to process what you're saying and ground it to the immediate physical context," Marge said. "So that means the robot has to not only interpret the speech, but also have a good idea of where it is [in] the world, the mapping of its surroundings, and how it represents those surroundings in a way that can relate to what the soldier is saying."

Researchers looked into how soldiers speak to robots, and how robots talk back. In prior research, Marge found that humans speak to technology in much simpler, direct language; but when talking to other people, they usually talk about a course of action and the steps involved. However, those studies were done in a safe environment, not a stressful one similar to combat, during which a soldier's language could be different. That's an area where Marge knows the Army must perform more research.

"When a soldier is under pressure, we don't want to have any limit on the range of words or phrases they have to memorize to speak to the robot," Marge said. "So from the beginning, we are taking an approach of so-called natural language. We don't want to impose any restrictions on what a soldier might say to a robot."

JUDI's ability to determine a soldier's intent, or what Army researchers define as "whatever the soldier wants JUDI to do," is based on an algorithm that tries to match the verbal instructions with existing data. The algorithm finds the instruction from its training data with the highest overlap and sends it to the robot as a command.
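A minimal sketch of that kind of overlap-based matching, assuming a simple word-set comparison; the utterances, commands, and function names below are invented for illustration and are not drawn from the actual JUDI system:

```python
def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two utterances."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Toy training data: known utterance -> robot command (all hypothetical).
TRAINING = {
    "move to the door": "NAV_TO(door)",
    "scan the room for threats": "SCAN(room)",
    "follow me": "FOLLOW(operator)",
}

def match_intent(spoken: str) -> str:
    """Return the command paired with the training utterance that
    overlaps most with what was spoken."""
    best = max(TRAINING, key=lambda u: token_overlap(spoken, u))
    return TRAINING[best]

print(match_intent("go move toward the door"))  # NAV_TO(door)
```

A real system would use far richer representations than word sets, but the shape is the same: score every known instruction against the input and send the highest-scoring command to the robot.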

The JUDI system, Marge said, is scheduled for field testing in September. JUDI was developed with help from researchers at the University of Southern California's Institute for Creative Technologies.

The Transparent Multi-Modal Crew Interface Designs project is tackling AI-human interaction from the other side.

"We're looking at ways of improving the ability of AI to communicate information to the soldier to show the soldier what it's thinking and what it's doing so it's more predictable and trustworthy," Perelman said. "Because we know that if soldiers don't understand why the AI is doing something and it fails, they're not going to trust it. And if they don't trust it, they're not going to use it."

Mission planning is one area where the Transparent Multi-Modal Crew Interface Designs may prove useful. Perelman compared the program to driving down the highway while a navigation app responds to changes along a route. A driver may want to stay on the highway for the sake of convenience (not having to steer through extra turns) even if it takes a few minutes longer.

"You can imagine a situation during mission planning, for example, where an AI proposes a number of courses of action that you could take, and if it's not able to accurately communicate how it's coming up with those decisions, then the soldier is really not going to be able to understand and accurately calculate the trade-offs that it's taking into account," Perelman said.

He added that through lab testing, the team improved soldiers' ability to predict the AI's future mobility actions by 60 percent and allowed the soldiers to decide between multiple courses of action 40 percent faster.

The program has transitioned over to the Army Combat Capabilities Development Command's Ground Vehicle Systems Center's Crew Optimization and Augmentation Technologies program. That's where it will take part in Mission Enabler Technologies-Demonstrators phase 2 field testing.

Excerpt from:

Two US Army projects seek to improve comms between soldiers and AI - C4ISRNet

AI is here to stay, but are we sacrificing safety and privacy? A free public Seattle U course will explore that – Seattle Times

The future of artificial intelligence (AI) is here: self-driving cars, grocery-delivering drones and voice assistants like Alexa that control more and more of our lives, from the locks on our front doors to the temperatures of our homes.

But as AI permeates everyday life, what about the ethics and morality of the systems? For example, should an autonomous vehicle swerve into a pedestrian or stay its course when facing a collision?

These questions plague technology companies as they develop AI at a clip outpacing government regulation, and have led Seattle University to develop a new ethics course for the public.

Launched last week, the free, online course for businesses is the first step in a Microsoft-funded initiative to merge ethics and technology education at the Jesuit university.

Seattle U senior business-school instructor Nathan Colaner hopes the new course will become a well-known resource for businesses as they realize that "[AI] is changing things," he said. "We should probably stop to figure out how."

The course developed by Colaner, law professor Mark Chinen and adjunct business and law professor Tracy Ann Kosa explores the meaning of ethics in AI by looking at guiding principles proposed by some nonprofits and technology companies. A case study on facial recognition in the course encourages students to evaluate different uses of facial-recognition technology, such as surveillance or identification, and to determine how the technology should be regulated. The module draws on recent studies that revealed facial-analysis systems have higher error rates when identifying images of darker-skinned females in comparison to lighter-skinned males.

The course also explores the impact of AI on different occupations.

The publics desire for more guidance around AI may be reflected in a recent Northeastern University and Gallup survey that found only 22% of U.S. respondents believed colleges or universities were adequately preparing students for the future of work.

Many people who work in tech aren't required to complete a philosophy or ethics course in school, said Quinn, which he believes contributes to blind spots in the development of technology. Those blind spots may have led to breaches of public trust, such as government agencies' use of facial recognition to scan license photos without consent, Alexa workers listening to the voice commands of unaware consumers and racial bias in AI algorithms.

As regulations on emerging technology wend through state legislatures, colleges such as the University of Washington and Stanford University have created ethics courses to mitigate potential harmful effects. Seattle University's course goes a step further by opening the course to the public.

The six-to-eight-hour online course is designed to encourage those on the front end of AI deployment, such as managers, to understand the ethical issues behind some of the technologies. Students test their understanding of the self-paced course through quizzes at the end of each module. Instructors will follow up with paid in-person workshops at the university that cater to the needs of individual businesses.

The initiative was spawned by an August 2018 meeting between Microsoft president Brad Smith and Seattle University administrators, in which the tech company promised $2.5 million toward the construction of the school's new engineering building. The conversation quickly veered into a lengthy discussion about ethical issues around AI development, such as fairness and accountability of tech companies and their workers, said Michael Quinn, the dean of the university's College of Science and Engineering.

At the meeting, Microsoft promised Seattle University another $500,000 to support the development of a Seattle University ethics and technology initiative. Quinn called the AI ethics lab a natural opportunity for a college that requires an ethics course to graduate. It was already a topic circulating around campus: Staff and faculty had recently spearheaded a book club to discuss contemporary issues related to ethics and technology.

The initiative will also provide funding for graduate research assistants to create a website with articles and resources on moral issues around AI, as well as for the university to hire a faculty director to manage the initiative. Seattle University philosophy professors will offer an ethics and technology course for students in 2021.

Quinn believes institutions of higher education have a role in educating the public and legislators on finding a middle ground between advancing AI technology and protecting basic human rights. "People are starting to worry about the implications [of AI] in terms of their privacy, safety and employment," Quinn said.

AI is developing faster than legislation can keep up with, so it's a prime subject for ethics, said Colaner. He is particularly concerned about the use of AI in decision making, such as algorithms used to predict recidivism rates in court, and in warfare through drone strikes.

Washington state Sen. Joe Nguyen, D-White Center, agrees that higher education has a large role in preparing the public for a future more reliant on AI. In an industry setting, said Quinn, employers often push workers to advance technology as far as possible without considering its impact on different communities. AI ethics in education, however, serves as a "safeguard [for] meaningful innovation," offers a critical eye and shows how technology impacts people in a social-justice aspect.

Ahead of the current session, Seattle University instructors consulted Nguyen on draft legislation about an algorithmic accountability bill that was re-introduced this legislative session after failing to pass last year.

The bill would provide guidelines for the adoption of automated systems that assist in government decision making, and requires agencies to produce an accountability report on the capabilities of software as well as how data is collected and used.

Law professor Ben Alarie, who is also the CEO of a company that uses AI to make decisions in tax cases, believes the public accessibility of the Seattle University course could help businesses avoid potential disruptions.

One of the benefits of having a program like this available to everyone, is that businesses can build in safeguards and develop these technologies in a responsible way, he said.

Read more here:

AI is here to stay, but are we sacrificing safety and privacy? A free public Seattle U course will explore that - Seattle Times

Will AI really transform education? – The Hechinger Report

The Hechinger Report is a national nonprofit newsroom that reports on one topic: education. Sign up for our weekly newsletters to get stories like this delivered directly to your inbox.

For all the talk about how artificial intelligence could transform what happens in the classroom, AI hasn't yet lived up to the hype.

AI involves creating computer systems that can perform tasks that typically require human intelligence. It's already being experimented with to help automate grading, tailor lessons to students' individual needs and assist English language learners. We heard about a few promising ideas at a conference I attended last week on artificial intelligence hosted by Teachers College, Columbia University. (Disclosure: The Hechinger Report is an independent unit of Teachers College.)

Shipeng Li, corporate vice president of iFLYTEK, talked about how the Chinese company is working to increase teachers' efficiency by individualizing homework assignments. Class time can be spent on the problems that are tripping up the largest numbers of students, and young people can use their homework to focus on their particular weaknesses. Margaret Price, a principal design strategist with Microsoft, mentioned a PowerPoint plug-in that provides subtitles in students' native languages, useful for a teacher leading a class filled with young people from many different places. Sandra Okita, an associate professor at Teachers College, talked about how AI could be used to detect over time why certain groups of learners are succeeding or failing.

But none of these artificial intelligence applications is particularly wide-reaching yet, and none has delivered the transformation of every aspect of the traditional learning environment that promoters have imagined, one that would usher in a bold new era of human history.

There is also plenty of reason to worry about what might happen as tech developers accelerate efforts to bring artificial intelligence into classrooms and onto campuses.

Paulo Blikstein, an associate professor at Teachers College, drew laughs by talking about Silicon Valley's public relations coup in getting us so excited about technology's promise that we happily parted with our private data, only to learn much later of the costs. "A handful of tech CEOs caused enormous harm to our society," he said. "I don't want that to happen in education yet again." Stavros Yiannouka, chief executive of the World Innovation Summit for Education (WISE), a project of the Qatar Foundation, and a panel moderator, agreed that there are great risks in letting artificial intelligence loose in classrooms. He pointed out, "You don't need to have sinister objectives or plans for world domination to get things horribly wrong." Andre Perry, a fellow at the Brookings Institution and a Hechinger contributor, talked about how tech companies may cement racism and other biases into algorithms unless they employ diverse teams and consciously fight against inequities.

As Blikstein noted, AI educational applications come in two types: tools that involve computers shaping how learning happens, and those that engage students in using AI to code and program. In a panel moderated by my colleague Jill Barshay, Stefania Druga, a PhD candidate at the University of Washington, discussed a platform she'd created called Cognimates. It enables children to use artificial intelligence to train and build robots.

Druga talked about how kids first assumed the robots were super brainy. But once students learned how to train a robot, she said, their perception shifts significantly, from "it's smarter than me" to "it's not smart." "We see that kids become not only more critical of these technologies but also more fluent," she said.

She mentioned the creative and unexpected projects students wanted to tackle, including building a chatbot that gave back-handed compliments (a concept that Druga, who grew up in Romania, wasn't initially familiar with). "We need more silly instead of smart technologies," Druga said, ones that put the focus on people and allow people to do what they do best. In her evaluations of Cognimates, she found that students who gained the deepest understanding of AI weren't those who spent the most time coding; rather, they were the students who spent the most time talking about the process with their peers. That left me thinking that it's from other humans that we tend to learn the most, and that peers and teachers will always play a central role in education.

Editor's note: This story led off this week's Future of Learning newsletter, which is delivered free to subscribers' inboxes every other Wednesday with trends and top stories about education innovation. Subscribe today!

This story about artificial intelligence was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger's newsletter.


Original post:

Will AI really transform education? - The Hechinger Report

AI Fights Fraud: How the Use of AI Technologies in Banking Forges the Fight against Fraudsters – PaymentsJournal

Virtually every credit card and debit card user has had their card suspended due to suspicious activity, and unfortunately fraud has not slowed with the rest of the world during the pandemic. In fact, since the beginning of the COVID-19 outbreak, 40% of financial services firms have seen an increase in fraudulent activity, according to a LIMRA survey, leading notable banks and even the FBI to issue fraud alerts to their communities.

Over the past few years, many technologies have come onto the market that help banks and credit unions catch out-of-the-ordinary activity and alert the cardholder as quickly as possible. However, with more people making deposits and taking part in financial activities digitally via apps and chatbots due to current stay-at-home orders, the onus is solely on the technology to detect fraudulent activity. Now more than ever, banks and other financial service providers need to implement AI technologies so they can become even more capable of identifying the fraudulent patterns and data points that rudimentary, rule-based software can easily miss. Here are three ways AI technology helps banks with fraud detection:

In recent years, companies have invested in AI primarily to improve efficiency by automating mundane tasks like data entry. However, according to a recent report from MIT Technology Review, organizations have expanded their use of AI to improve the customer experience by increasing personalization and bringing a deeper level of customer understanding. This use of AI is particularly important for communicating with customers who could potentially be the target of fraudulent activity.

Detecting fraud is critical for banks to build trust with their customers. Leveraging a technology like conversational AI can alert banks to fraud warning signs so they can instantly notify the affected customer, give them the option to verify the suspicious transactions, and then suggest next steps for fraud resolution. Banks should specifically look toward conversational AI providers who offer solutions with natural language understanding (NLU), which digests text and voice, translates it into computer language, and produces text and audio output in a natural way that humans can easily understand. This goes beyond simply offering customers an experience personalized by their name and account details; it creates a more human interaction that connects with them through a language they are most familiar with, fostering trust between the customer and the financial service provider.
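The notify-verify-resolve loop described above can be sketched in a few lines. The sketch below is purely illustrative (no vendor's actual API is used, and every function name is hypothetical): a simple rule flags an out-of-pattern charge, the bank sends a plain-language alert, and the customer's reply routes to a resolution step.

```python
# Hypothetical sketch of a conversational fraud-alert flow. A real system
# would use a learned fraud model and full NLU, not a threshold and YES/NO.

def is_suspicious(txn, typical_amount):
    """Flag transactions far above the customer's typical spend."""
    return txn["amount"] > 3 * typical_amount

def alert_message(txn):
    """Render the alert as a plain-language prompt, not a raw code."""
    return (f"We noticed a charge of ${txn['amount']:.2f} at {txn['merchant']}. "
            "Was this you? Reply YES to confirm or NO to report fraud.")

def next_step(reply):
    """Route the customer's verification reply to a resolution step."""
    if reply.strip().upper() == "YES":
        return "clear_flag"
    return "freeze_card_and_open_case"

txn = {"amount": 950.0, "merchant": "Example Store"}
if is_suspicious(txn, typical_amount=120.0):
    print(alert_message(txn))
    print(next_step("NO"))
```

A production NLU pipeline would parse free-form text or speech ("no, I've never been to that store") rather than normalizing a keyword, but the overall alert-verify-resolve shape is the same.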

Anti-money laundering (AML) is another area where banks are beginning to tap into the power of AI. With hundreds of thousands of wire transfers a day totaling trillions of dollars, not to mention the various privacy laws designed to protect customers, it's almost impossible to identify every instance of money laundering. Nevertheless, banks are required to do everything possible to identify and help combat money laundering. While banks have been using rule-based software to identify money laundering for some time, AI offers a significant improvement because it learns, grows, and adapts with each experience. Much of this is due to AI's ability to process large quantities of data and see trends, patterns, and outliers in a much larger context than the average human could easily discern.
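As a toy illustration of outlier spotting (not any bank's actual AML system, and far simpler than the learned models described above), even a purely statistical baseline can surface a transfer that deviates sharply from the norm:

```python
import statistics

def flag_outliers(amounts, threshold=2.0):
    """Flag wire amounts more than `threshold` standard deviations from
    the mean -- a crude stand-in for learned AML pattern detection."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    return [a for a in amounts if stdev and abs(a - mean) / stdev > threshold]

# Mostly routine transfers, with one conspicuous spike.
wires = [900, 1100, 1050, 980, 1020, 995, 50_000]
print(flag_outliers(wires))  # only the 50,000 transfer is flagged
```

Real AML models learn structured behaviors such as smurfing and layering across accounts and time, which is precisely what a single z-score cannot capture; the sketch only shows why outlier detection scales to volumes no human reviewer could.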

As part of the fight against financial crime, governments across the world require their financial institutions to put in place AML compliance programs that oversee internal AML policies and ensure the organization remains compliant with important regulations. However, managing AML legislation has proven to be a challenging task for compliance officers. According to Accenture's 2019 Compliance Risk Study, compliance officers report being overworked and exhausted, resulting in potentially detrimental human errors. As a result, there is an increased urgency to improve compliance productivity and shift operations from a "check-the-box" to a risk-prevention outlook.

Organizations that incorporate AI into their businesses are forced to re-imagine their processes, a common barrier to technology adoption. For example, with traditional compliance processes, humans might review 15% of a bank's loans to ensure things are being done correctly, while AI can review 85% of the data. This not only improves accuracy; it also means banking employees can be freed up to do more meaningful work.

With the rise of AI, banks have a new tool to handle any number of tasks that are traditionally time-consuming, labor-intensive, and prone to mistakes. Whether it be document processing, anti-money laundering, fraud detection, risk prevention, or customer service, AI offers a level of support that is unparalleled in the history of banking. Best of all, with an increasing focus on privacy, AI represents a viable way to use customer data in a safe, trustworthy manner.

By Lingjia Tang

Read the original here:

AI Fights Fraud: How the Use of AI Technologies in Banking Forges the Fight against Fraudsters - PaymentsJournal

This AI Researcher Thinks We Have It All Wrong – Forbes

Dr. Luis Perez-Breva

Luis Perez-Breva is a Massachusetts Institute of Technology (MIT) professor and the faculty director of innovation teams at the MIT School of Engineering. He is also an entrepreneur and part of the Martin Trust Center for MIT Entrepreneurship. Luis works on how we can use technology to make our lives better and on how we can get new technology out into the world. On an episode of the AI Today podcast, Professor Perez-Breva got us to think deeply about our understanding of both artificial intelligence and machine learning.

Are we too focused on data?

Anyone who has been following artificial intelligence and machine learning knows the vital centrality of data. Without data, we can't train machine learning models. And without machine learning models, we don't have a way for systems to learn from experience. Surely, data needs to be the center of our attention to make AI systems a reality.

However, Dr. Perez-Breva thinks that we are overly focused on data, and perhaps that extensive focus is causing the goals for machine learning and AI to go astray. According to Luis, so much focus is put into obtaining data that we judge how good a machine learning system is by how much data was collected, how large the neural network is, and how much training data was used. When you collect a lot of data, you are using that data to build systems that are primarily driven by statistics. Luis says that we latch onto statistics when we feed AI so much data, and that we ascribe intelligence to systems when, in reality, all we have done is create large probabilistic systems that, by virtue of large data sets, exhibit things we ascribe to intelligence. He says that when our systems aren't learning as we want, the primary gut reaction is to give the AI system more data so that we don't have to think as much about the hard parts of generalization and intelligence.

Many would argue that there are some areas where you do need data to help teach AI. Computers are better able to learn image recognition and similar tasks by having more data. The more data, the better the networks, and the more accurate the results. On the podcast, Luis asked whether image recognition now works because deep learning is that good, or simply because we have a big enough data set. Basically: is it the algorithm or just the sheer quantity of data that is making this work?
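Luis's question can be probed with a deliberately simple experiment. In the hypothetical sketch below, the algorithm is held fixed (1-nearest-neighbor on a one-dimensional toy task) and only the amount of training data varies; accuracy climbs with data alone, which is exactly the dynamic he cautions us not to mistake for intelligence.

```python
import random

def label(x):
    """Ground-truth rule the learner must recover: a simple threshold."""
    return int(x > 0.5)

def one_nn_predict(train, x):
    """Predict with the label of the nearest training point."""
    nearest_x, nearest_y = min(train, key=lambda pt: abs(pt[0] - x))
    return nearest_y

def accuracy(n_train, n_test=500, seed=0):
    """Accuracy of 1-NN when trained on n_train random examples."""
    rng = random.Random(seed)
    train = [(x, label(x)) for x in (rng.random() for _ in range(n_train))]
    test = [rng.random() for _ in range(n_test)]
    hits = sum(one_nn_predict(train, x) == label(x) for x in test)
    return hits / n_test

# Same algorithm, more data: accuracy improves with no new "intelligence".
for n in (5, 50, 500):
    print(n, accuracy(n))
```

The learner never gets smarter; its error region around the threshold simply shrinks as more examples land near it, mirroring Luis's point that data volume, not the algorithm, can be what makes a system look intelligent.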

Rather, what Luis argues is that if we can find a better way to structure the system as a whole, then the AI system should be able to reason through problems even with very limited data. Luis compares using machine learning in every application to the retail world. He talks about how physical stores are seeing the success of online stores and trying to copy that success. One of the ways they are doing this is by using apps to help customers navigate stores. Luis mentioned that he visited a Target where he had to use his phone to navigate the store, which was harder than being able to look at signs. Having a human to ask questions and talk to is both faster and part of the traditional experience of being in a brick-and-mortar retail location. Luis says he would much rather have a human to interact with at one of these locations than a computer.

Is the problem deep learning?

He compares this to machine learning by saying that machine learning has a very narrow application. If you try to apply machine learning to every aspect of AI, then you will end up with issues similar to the ones he experienced at the Target. Basically, it treats neural networks as a hammer and every AI problem as a nail. No one technology or solution works for every application. Perhaps deep learning only works because of vast quantities of data? Maybe there's another algorithm that can generalize better, apply knowledge learned in one domain to another more effectively, and use smaller amounts of data to get much better insights.

People have recently tried to automate many of the jobs that people do. Luis says that throughout history, technology has killed businesses when it tries to replace humans. Technology and businesses are successful when they expand on what humans can do. Attempting to replace humans is a difficult task and one that is going to lead companies down the road to failure. As humans, he points out, we crave human interaction. Even in an age where people are constantly on their devices, people greatly desire human interaction.

Luis also makes the point that many people mistakenly conflate automation and AI. Automation is using a computer to carry out specific tasks; it is not the creation of intelligence. It is a distinction many observers have noted. Indeed, it's the fear of automation and fictional superintelligence that has many people worried about AI. Dr. Perez-Breva notes that many ascribe human characteristics to machines, but this should not be the case with AI systems.

Rather, he sees AI systems as more akin to a new species with a different mode of intelligence than humans. His opinion is that researchers are very far from creating an AI similar to what you will find in books and movies. He blames movies for giving people the impression of robots (AI) killing people and being dangerous technologies. While there are good robots in movies, there are few of them, and they get pushed to the side by bad robots. He points out that we need to move away from pushing these images of bad robots. Our focus needs to be on how artificial intelligence can help humans grow, and it would be beneficial if the movie-making industry could help with this. As such, AI should be thought of as a new intelligent species we're trying to create, not something that is meant to replace us.

A positive AI future

Despite negative images and talk, Luis is sure that artificial intelligence is here to stay, at least for a while. So many companies have made large investments in AI that it would be difficult for them to simply stop using it or halt its development.

As a final question in the interview, Luis was asked where he sees the artificial intelligence industry going. Prefacing his answer with the point, made earlier in the discussion, that people are investing in machine learning and not true artificial intelligence, Luis said that he is happy with the investment that businesses are making in what they call AI. He believes these investments will help the technology's development continue for many years.

Once we stop comparing humans to artificial intelligence, Luis believes we will see great advancements in what AI can do. He believes AI has the power to work alongside humans to unlock knowledge and tasks that we weren't previously able to achieve. The point when this happens, he believes, is not far away. We are getting closer to it every day.

Many of Luis's ideas run contrary to the popular beliefs of many people interested in the world of artificial intelligence. At the same time, the ideas he shares are presented in a very logical manner and are very thought-provoking. Time will tell if these ideas are in fact correct.

See the rest here:

This AI Researcher Thinks We Have It All Wrong - Forbes

Navigating the AI and Analytics Job Market During COVID-19 – Datanami


The market for AI and analytics jobs has not been spared from the wrath of COVID-19, which has directly led to the loss of more than 30 million American jobs over the past four months. It may not appear to be an ideal time to look for a new job, but those hunting for employment in AI and analytics still have options.

The overall job market is abysmal in the United States at the moment, thanks to government orders to temporarily shutter non-essential businesses in an attempt to slow the novel coronavirus's spread. Unemployment rolls keep growing as politicians debate how much money to borrow to keep everybody afloat a little bit longer.

Tech startups with questionable business plans are especially vulnerable to the carnage. Nearly 77,000 people have been laid off from tech startup jobs since March, according to the Layoffs.fyi tracker. Silicon Valley, in particular, has been hit hard by layoffs, with companies like Uber, Airbnb, Yelp, and Groupon parting ways with large groups of workers.

It won't likely end soon, especially if COVID-19 cases continue to surge into the fall. As businesses are forced to shut their doors, workers across all industries are laid off, and consumer spending goes down. Businesses slow investment in reaction to lowered spending, which in turn leads to fewer jobs, which leads to lower consumer spending, and so on. Rinse and repeat. It's a nasty feedback loop, to be sure, and until something breaks the cycle, consumer confidence and business investment will continue to be dulled.


Early in the pandemic, companies resisted plans to scale back their hiring efforts in big data, analytics, and AI. They have mostly maintained those employment plans, according to the most recent Burtch Works survey, conducted with the International Institute for Analytics.

The July Burtch Works survey concluded that about 50% of analytics and data science organizations have either suffered no impacts (42.1%) or have actually grown in size (7.6%). "Only 14.5% of respondents expected additional staffing and hiring actions," it says.

Jobs in AI are also not immune to the slowdown. According to a June report from LinkedIn, COVID-19 caused about a 10% drop in demand for AI jobs compared to 2019, when folks with AI skills were in very high demand and commanded top dollar in the job market. In fact, the job title "Artificial Intelligence Specialist" was the number one job in LinkedIn's 2020 Emerging Jobs Report.

"Specifically, job listings for such roles grew at 14.0% YOY [year over year] before the COVID-19 outbreak and slowed down to only 4.6% YOY," writes the author of the June report, Zhichun Jenny Ying. "Job applications grew at 50.8% YOY before the COVID-19 outbreak and dipped to 30.2% YOY post-COVID."

While the AI job market has slowed down, it's still chugging along at roughly a 5% annual growth rate (when measured by AI job postings). Clearly, the overall job market is doing much worse than the AI sector. When Ying normalized AI job postings against overall job postings, AI jobs actually posted an 8.3% increase during the 10 weeks after the COVID-19 outbreak began.

But there's a twist: fewer people are applying for those AI jobs. According to Ying's data, AI job applications "dropped 14.1% during the 10 weeks after the COVID-19 outbreak in the U.S., compared to the 10 weeks prior, when normalized against overall job postings." This suggests that candidates may be playing it safe during a period of uncertainty.
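The normalization Ying applies is easy to reproduce. The sketch below uses hypothetical figures (the article does not publish LinkedIn's underlying numbers): a sector growing 4.6% while the overall market shrinks about 3.4% is growing roughly 8% relative to that market, which is how slowed absolute growth can still translate into a relative gain.

```python
def normalized_growth(sector_growth, overall_growth):
    """Growth of a sector relative to the overall market:
    (1 + sector) / (1 + overall) - 1."""
    return (1 + sector_growth) / (1 + overall_growth) - 1

# Hypothetical: AI postings up 4.6% YOY while overall postings fell 3.4%.
print(f"{normalized_growth(0.046, -0.034):.1%}")  # -> 8.3%
```

The same formula run on declining application counts against a declining overall market would likewise show whether AI applications fell faster or slower than the market as a whole.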

AI and data jobs are resilient to COVID-19 cutbacks, but they're not immune (Jozsef Bagota/Shutterstock)

You can't blame people for holding onto their current positions during the pandemic. With schools mostly shut down and non-essential workers asked to work from home, the day-to-day existence of the American workforce has experienced an unprecedented shock. Many companies won't survive the pandemic, so why take a risk with job hopping?

Everybody's situation is different. Depending on the conditions, there could be very good reasons to take a new job in data, according to Tamara Iakiri, the vice president of talent experience at Open Systems Technologies (OST), a technology consultancy based in Grand Rapids, Michigan.

"While we have certainly seen strong talent come into the market due to job cuts, it is important to remember that companies are still hiring, and great companies are using the current situation to gain outstanding talent, especially with these desired skill sets that have been hard to recruit for," Iakiri tells Datanami.

Iakiri recommends that folks in data-related industries update their resume every six months. Keeping an eye on the job market can let folks know what skills are currently hot, as well as who is hiring.

"Now is not the time to bury our heads and hope," Iakiri says. "If the company is not doing well, keep doing great work, but also be prepared: have your resume ready, know who is hiring, and build your network so you are ready to respond if the unfortunate happens and you are laid off. The best recruiters are willing to have conversations even if they don't have current openings; having these connections can be invaluable now and in the future."

Related Items:

AWS re:Invent Goes Virtual (and Free) as COVID-19 Conference Cancellations Continue

How COVID-19 Is Impacting the Market for Data Jobs

Is the Tech Boom Over?

Read more:

Navigating the AI and Analytics Job Market During COVID-19 - Datanami

Skafos.ai appoints Jody Stoehr as Chief Revenue Officer as part of surging sales growth – PR Web


CHARLOTTESVILLE, Va. (PRWEB) July 20, 2020

Skafos.ai, the world's leading interactive guided shopping platform for the eCommerce industry, today announced the executive appointment of Jody Stoehr as Chief Revenue Officer.

"After meeting the team at Skafos.ai, and their customers, I quickly realized the company's innovations in interactive UX, visual AI, and deep machine learning were allowing brands to authentically influence the preferences and the personal intent of each shopper," said Jody Stoehr, Chief Revenue Officer at Skafos.ai. "It's evident that Skafos.ai is the only next-gen eCommerce product available that simplifies the online shopping experience in an interactive, highly visual, and personal manner and allows brands to authentically connect with their customer, similar to a brick-and-mortar experience. I'm proud to join this amazing team."

Stoehr brings more than 25 years of sales and consultative retail marketing experience to Skafos.ai, where she will leverage her deep retail industry expertise and network to lead and coordinate Skafos.ai's sales and business development functions.

Previously, Stoehr served as the VP of Client Engagement at Reflektion, where she was responsible for building a cross-functional team designed to assist some of the world's notable retailers in the adoption and expansion of the company's personalization solutions. She was also the Managing Director of R2integrated's (R2i) Seattle office, where she was responsible for the office's profitability, growth, and development of full-service marketing solutions for Fortune 1000 companies including Amazon, Microsoft, and Bluetooth. She also held prominent sales and growth positions at other notable companies such as Marconi, Bell Atlantic Mobile, and Fujitsu/DMR Consulting.

"Jody's depth and breadth of experience assisting major retail brands will help further our engagement with customers and the retail industry at large," said Michael Prichard, CEO, Skafos.ai. "Jody brings tremendous value to the Skafos.ai team. We're excited to have her on board and value her leadership as we continue to rapidly grow."

In addition to her executive career, Stoehr has served as an advisor, board member, and committee chair for various tech startups and organizations including R2integrated and Sales and Marketing Executives International (SMEI). She and her teams have received numerous awards for extraordinary performance including the SIIA CODiE Awards, Adobe Marketing Cloud Partner of the Year Award, Top Seattle Digital Agency, Best Marketing Engagement Campaign, and Pulse Award in Integrated B2B, among others.

About Skafos.ai

Skafos.ai's interactive guided shopping platform helps online retailers improve sales conversion, increase basket size, and authentically connect with their shoppers through interactive guided experiences and real-time personalization. Founded in 2017 by pioneers in user experience, mobile, visual AI, and deep learning, Skafos.ai is a venture-backed company bringing the next generation of authentic personalization to online shopping. Its platform uniquely combines individual shopper preferences, visual AI, and deep learning to create more intimate and impactful commerce experiences throughout the shopping continuum.


See more here:

Skafos.ai appoints Jody Stoehr as Chief Revenue Officer as part of surging sales growth - PR Web

Lunchclub raises $4M from a16z for its AI warm intro service – TechCrunch

There are apps out there that help you find friends, find dates, and even trace your distant family histories, but when it comes to growing your professional network, the options are shockingly bad; we're talking LinkedIn here.

Lunchclub is a startup that's looking to help users find new connections inside specific industries. The company has recently closed a $4 million seed round led by Andreessen Horowitz, with other investments coming from Quora's co-founder, the Robinhood co-founders, and the Flexport co-founders.

The app follows in the footsteps of others that aimed to be dating-app-like marketplaces for growing your professional network via 1:1 lunch and coffee meetings. Lunchclub is more focused on setting up a handful of meetings for users who have a specific goal in mind. Lunchclub aims to be your warm intro, connecting you via email with other users who can assist you in your professional goals.

When you're onboarded to the service, you are asked to highlight some objectives that you might have, and this is where the app really makes its goals clear. Options include "raise funding," "find a co-founder or partner," "explore other companies," and "brainstorm with peers." These objectives are pretty explicit and complementary; for every "raise funding" objective, there's an "invest" option.

There isn't a ton being asked of the user when it comes to building up the data on their profile; Lunchclub is hoping to get most of the data it needs from the rest of the web.

"Our view is that there's tons of data already out there," Lunchclub CEO Vlad Novakovski told TechCrunch in an interview. "Anything that comes from the existing social networks, be it things like Twitter, be it things that are more specific to what people might be working on, like GitHub or Dribbble or AngelList; all of those data sources are in the public domain and are fair game."

Lunchclub's pitch is that it can learn which matches are successful via user feedback and use that to hone further matches. Novakovski was most recently the CTO of Euclid Analytics, which WeWork acquired in 2017. Before that, he led the machine learning team at Quora.

The web app, which currently has a lengthy waitlist, is available to users in seven cities: the SF Bay Area, Los Angeles, New York, Boston, Austin, Seattle, and London.

Co-founders Vlad Novakovski, Scott Wu and Hayley Leibson

See more here:

Lunchclub raises $4M from a16z for its AI warm intro service - TechCrunch

What Will a World Governed by AI Look Like? – Futurism

Artificial intelligence already plays a major role in human economies and societies, and it will play an even bigger role in the coming years. To ponder the future of AI is thus to acknowledge that the future is AI.

This will be partly owing to advances in deep learning, which uses multilayer neural networks that were first theorized in the 1980s. With today's greater computing power and storage, deep learning is now a practical possibility, and a deep-learning application gained worldwide attention in 2016 by beating the world champion in Go. Commercial enterprises and governments alike hope to adapt the technology to find useful patterns in Big Data of all kinds.

In 2011, IBM's Watson marked another AI watershed by beating two previous champions in Jeopardy!, a game that combines general knowledge with lateral thinking. And yet another significant development is the emerging Internet of Things, which will continue to grow as more gadgets, home appliances, wearable devices, and publicly sited sensors become connected and begin to broadcast messages around the clock. Big Brother won't be watching you; but a trillion little brothers might be.

Beyond these innovations, we can expect to see countless more examples of what were once called expert systems: AI applications that aid, or even replace, human professionals in various specialties. Similarly, robots will be able to perform tasks that could not be automated before. Already, robots can carry out virtually every role that humans once filled on a warehouse floor.

Given this trend, it is not surprising that some people foresee a point known as the Singularity, when AI systems will exceed human intelligence by intelligently improving themselves. At that point, whether it is in 2030 or at the end of this century, the robots will truly have taken over, and AI will consign war, poverty, disease, and even death to the past.

To all of this, I say: dream on. Artificial general intelligence (AGI) is still a pipe dream. It's simply too difficult to master. And while it may be achieved one of these days, it is certainly not in our foreseeable future.

But there are still major developments on the horizon, many of which will give us hope for the future. For example, AI can make reliable legal advice available to more people, and at a very low cost. And it can help us tackle currently incurable diseases and expand access to credible medical advice, without requiring additional medical specialists.

In other areas, we should be prudently pessimistic, not to say dystopian, about the future. AI has worrying implications for the military, individual privacy, and employment. Automated weapons already exist, and they could eventually be capable of autonomous target selection. As Big Data becomes more accessible to governments and multinational corporations, our personal information is being increasingly compromised. And as AI takes over more routine activities, many professionals will be deskilled and displaced. The nature of work itself will change, and we may need to consider providing a universal income, assuming there is still a sufficient tax base through which to fund it.

A different but equally troubling implication of AI is that it could become a substitute for one-on-one human contact. To take a trivial example, think about the annoyance of trying to reach a real person on the phone, only to be passed along from one automated menu to another. Sometimes, this is vexing simply because you cannot get the answer you need without the intervention of human intelligence. Or, it may be emotionally frustrating, because you are barred from expressing your feelings to a fellow human being, who would understand, and might even share your sentiments.

Other examples are less trivial, and I am particularly worried about computers being used as carers or companions for elderly people. To be sure, AI systems that are linked to the Internet and furnished with personalized apps could inform and entertain a lonely person, as well as monitor their vital signs and alert physicians or family members when necessary. Domestic robots could prove to be very useful for fetching food from the fridge and completing other household tasks. But whether an AI system can provide genuine care or companionship is another matter altogether.

Those who believe that this is possible assume that natural-language processing will be up to the task. But the task would include having emotionally laden conversations about people's personal memories. While an AI system might be able to recognize a limited range of emotions in someone's vocabulary, intonation, pauses, or facial expressions, it will never be able to match an appropriate human response. It might say, "I'm sorry you're sad about that," or, "What a lovely thing to have happened!" But either phrase would be literally meaningless. A demented person could be comforted by such words, but at what cost to their human dignity?

The alternative, of course, is to keep humans in these roles. Rather than replacing humans, robots can be human aids. Today, many human-to-human jobs that involve physical and emotional caretaking are undervalued. Ideally, these jobs will gain more respect and remuneration in the future.

But perhaps that is wishful thinking. Ultimately, the future of AI, our AI future, is bright. But the brighter it becomes, the more shadows it will cast.

More:

What Will a World Governed by AI Look Like? - Futurism

Human + Machine Collaboration: Work in the Age of AI – Interesting Engineering

In this age of Artificial Intelligence (AI), we are witnessing a transformation in the way we live, work, and do business. From robots that share our environment and smart homes to supply chains that think and act in real-time, forward-thinking companies are using AI to innovate and expand their business more rapidly than ever.

Indeed, this is a time of change and change happens fast. Those able to understand that the future includes living, working, co-existing, and collaborating with AI are set to succeed in the coming years. On the other hand, those who neglect the fact that business transformation in the digital age depends on human and machine collaboration will inevitably be left behind.

Humans and machines can complement each other, resulting in increased productivity. This collaboration could increase revenue by 38 percent by 2022, according to Accenture Research. At least 61 percent of business leaders agree that the intersection of human and machine collaboration is going to help them achieve their strategic priorities faster and more efficiently.

Human and machine collaboration is paramount for organizations. Having the right mindset for AI means being at ease with the concept of human + machine, leaving the mindset of human vs. machine behind. Thanks to AI, factories now require a little more humanity, and AI is boosting the value of engineers and manufacturers.

The emergence of AI is creating brand new roles and opportunities for humans up and down the value chain. From workers in the assembly line and maintenance specialists to robot engineers and operations managers, AI is regenerating the concept and meaning of work in an industrial setting.

According to Accenture's Paul Daugherty, Chief Technology and Innovation Officer, and H. James Wilson, Managing Director of Information Technology and Business Research, AI is transforming business processes in five ways:

Flexibility: A shift from rigid manufacturing processes, automated in the past by dumb robots, to smart, individualized production that follows real-time customer choices brings flexibility to businesses. This is particularly visible in the automotive industry, where customers can customize their vehicle at the dealership, choosing everything from dashboard components to the seat leather (or vegan leather) to tire valve caps. At Stuttgart's Mercedes-Benz assembly line, for instance, no two vehicles are the same.

Speed: Speed is critical in many industries, including finance. Detecting credit card fraud on the spot guarantees a cardholder that a fraudulent transaction will not be approved, saving the time and headaches that follow when fraud is caught too late. According to Daugherty and Wilson, HSBC Holdings developed an AI-based solution that improves the speed and accuracy of fraud detection. It monitors millions of transactions daily, looking for subtle patterns that may signal fraud. This type of solution is a great fit for financial institutions, yet it needs human collaboration to stay continually updated; without those updates, the algorithms would soon become useless for combating fraud. Data analysts and financial fraud experts must keep an eye on the software at all times to ensure the AI stays at least one step ahead of criminals.
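HSBC's actual system is proprietary, but the core idea of flagging transactions that break a card's historical pattern can be sketched in a few lines. The z-score test, threshold, and sample amounts below are illustrative assumptions, not anything from the real product:

```python
from statistics import mean, stdev

def flag_suspicious(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the card's history.

    A crude z-score test standing in for the far richer pattern-matching a
    production fraud system would perform; `threshold` is an illustrative cutoff.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # No variation in history: anything different is suspicious.
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

# A card that usually sees small purchases:
history = [12.50, 40.00, 22.99, 35.10, 18.75, 27.30]
```

Here `flag_suspicious(history, 4999.00)` would trip the alert, while a typical purchase would not; this is also where the human loop comes in, since the threshold and features must be retuned as fraudsters adapt.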

Scale: To accelerate its recruiting evaluation and improve diversity, Unilever adopted an AI-based hiring system that assesses candidates' body language and personality traits. Using this solution, Unilever broadened its recruiting scale: job applicants doubled to 30,000, and the average time to reach a hiring decision dropped to four weeks, down from the four months the process could take before the AI system was adopted.

Decision Making: It is no secret that the best decisions people make are based on vast amounts of specific, tailored information. With machine learning and AI, huge amounts of data can quickly be put at the fingertips of workers on the factory floor or service technicians solving problems in the field. Data previously collected and analyzed yields invaluable information that helps humans solve problems much faster, or even prevent them before they happen. Take GE and its Predix application: the solution uses machine-learning algorithms to predict when a specific part in a specific machine might fail, alerting workers to potential problems before they become serious. In many cases, this technology, combined with fast human action, has saved GE millions of dollars.
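Predix relies on learned failure models, but the underlying predictive-maintenance pattern is simple to illustrate: watch a sensor's recent trend and alert before it crosses a danger line. The moving-average rule, window size, and limit below are hypothetical stand-ins, not GE's parameters:

```python
def maintenance_alert(readings, window=3, limit=1.2):
    """Alert when the recent moving average of a sensor exceeds a safe limit.

    A toy stand-in for the learned failure models a platform like Predix uses;
    `window` and `limit` are illustrative, not real thresholds.
    """
    if len(readings) < window:
        return False  # not enough data yet
    recent = readings[-window:]
    return sum(recent) / window > limit

# Vibration readings (mm/s) drifting upward over time:
vibration = [0.9, 1.0, 0.95, 1.1, 1.3, 1.4]
```

With the rising series above, `maintenance_alert(vibration)` fires before an outright failure, which is exactly the point: the machine flags the trend, and a human technician decides what to do about it.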

Personalization: AI makes individually tailored, on-demand brand experiences possible at great scale. Music streaming service Pandora, for instance, applies AI algorithms to generate personalized playlists based on a listener's preferences in songs, artists, and genres. AI can use data to personalize anything and everything, delivering a more enjoyable user experience and bringing marketing to a new level.
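Pandora's recommendation engine models far richer signals, but the basic shape of preference-based personalization can be sketched as ranking unheard songs by how often the listener has played each genre. The data and scoring rule here are invented for illustration:

```python
from collections import Counter

def recommend(listened, catalog, k=2):
    """Rank unheard songs by how often the user has played each genre.

    A toy version of preference-based personalization; real systems weigh
    many more signals (artists, skips, audio features, context).
    """
    genre_counts = Counter(genre for _, genre in listened)
    unheard = [song for song in catalog if song not in listened]
    # Counter returns 0 for unseen genres, so unfamiliar songs rank last.
    return sorted(unheard, key=lambda s: genre_counts[s[1]], reverse=True)[:k]

listened = [("Song A", "jazz"), ("Song B", "jazz"), ("Song C", "rock")]
catalog = [("Song D", "jazz"), ("Song E", "pop"), ("Song F", "rock")]
```

For the listening history above, the jazz track ranks first and the pop track drops off the list, which is the "data personalizes the experience" loop in miniature.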

Of course, some roles will come to an end, as has happened with every technological revolution in human history. However, the shift toward human and machine collaboration requires creating new roles and recruiting new talent; it is not just a matter of implementing AI technology. We should also remember that there is no evolution without change.

Robotics and AI will replace some jobs, liberating humans for other kinds of tasks, many of which do not yet exist, just as many of today's positions did not exist a few decades ago. Since 2000, the United States has lost five million manufacturing jobs. However, Daugherty and Wilson think that things are not as clear-cut as they might seem.

In the United States alone, around 3.4 million job openings in the manufacturing sector will need to be filled. One reason is the wave of Baby Boomer retirements.

Re-skilling is now paramount and applies to everyone who wishes to remain relevant. Paul Daugherty recommends that enterprises help existing employees develop what he calls fusion skills.

In their book Human + Machine: Reimagining Work in the Age of AI, a must-read for business leaders looking for a practical guide on adopting AI into their organization, Paul Daugherty and H. James Wilson identify eight fusion skills for the workplace:

Rehumanizing time: People will have more time to dedicate to more human activities, such as interpersonal interaction and creativity.

Responsible normalizing: It is time to normalize the purpose and perception of human and machine interaction as it relates to individuals, businesses, and society as a whole.

Judgment integration: A machine may be uncertain about something or lack the business or ethical context needed to make a decision. In such cases, humans must be prepared to sense where, how, and when to step in and provide input.

Intelligent interrogation: Humans simply can't probe massively complex systems or predict interactions between complex layers of data on their own. It is imperative to be able to ask machines the right questions across multiple levels.

Bot-based empowerment: A variety of bots are available to help people be more productive and better at their jobs. Harnessing the power of AI agents can extend humans' capabilities, reinvent business processes, and even boost a person's professional career.

Holistic (physical and mental) melding: In the age of human and machine fusion, holistic melding will become increasingly important. The full reimagination of business processes only becomes possible when humans create working mental models of how machines work and learn, and when machines capture user-behavior data to update their interactions.

Reciprocal apprenticing: In the past, technological education has gone in one direction: People have learned how to use machines. But with AI, machines are learning from humans, and humans, in turn, learn again from machines. In the future, humans will perform tasks alongside AI agents to learn new skills, and will receive on-the-job training to work well within AI-enhanced processes.

Relentless reimagining: This hybrid skill is the ability to reimagine how things currently are, and to keep reimagining how AI can transform and improve work, organizational processes, business models, and even entire industries.

In Human + Machine, the authors propose a continuous circle of learning, an exchange of knowledge between humans and machines. Humans can work better and more efficiently with the help of AI. According to the authors, in the long term, companies will start rethinking their business processes, and as they do, they will find new needs for humans in these new ways of doing business.

They believe that "before we rewrite the business processes, job descriptions, and business models, we need to answer these questions: What tasks do humans do best? And what do machines do best?" The transfer of jobs is not simply one-way. In many cases, AI is freeing up creativity and human capital, letting people work more like humans and less like robots.

Given these paramount questions and the concepts proposed by Daugherty and Wilson, giving them some thought may be crucial when deciding, as a business leader, on the best strategy for your organization to change and adapt in the age of AI.

The authors highlight how embracing the new rules of AI can benefit businesses as they reimagine processes around an exchange of knowledge between humans and machines.

Read more from the original source:

Human + Machine Collaboration: Work in the Age of AI - Interesting Engineering