Artificial intelligence and drones 'future of policing' – BBC News
Gwent Police's chief constable says technology will help fight crime amid cutbacks.
As the Masters Tournament kicks off on Thursday, nearly 100 golfers are vying to win the coveted green jacket. Collectively, they’ll perform more than 20,000 drives, chips, and putts over the course of the weekend. So which ones will you, the viewer sitting at home or at work or watching on your phone, get to see?
That’s what IBM’s Watson is here to determine. Beginning this year, the artificial intelligence system will help the Masters quickly decide which highlights to push out to fans. Watson will use a variety of factors to assign every single shot an “excitement level” score to determine which replays to roll out to viewers.
According to Golf.com, the A.I. system measures how exciting a particular shot is based on the sound of the crowd’s roar, the commentator’s analysis, and the players’ reactions. A chip that announcer Jim Nantz calls “nice” will get less of a bump than one he refers to as “outstanding,” for example, and a golfer’s polite wave to the crowd will be measured differently than an ecstatic fist pump.
Those factors then feed into an algorithm, which produces an “Overall Excitement Level” rating. The editorial team at Augusta National then uses those ratings to post the best highlights soon after they happen, so a viewer can catch up on the biggest moments he or she has missed that day or throughout the tournament.
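That pipeline (several signals in, one score out) can be pictured as a simple weighted combination. The sketch below is purely illustrative; the signal names, weights, and 0-100 scale are assumptions, not IBM's actual model:

```python
# Illustrative sketch of an "Overall Excitement Level" score built
# from the kinds of signals the article describes (crowd noise,
# commentator language, player reaction). The weights and the 0-1
# signal scale are assumptions, not Watson's actual model.

WEIGHTS = {
    "crowd_roar": 0.4,        # loudness of the gallery's reaction
    "commentary": 0.35,       # e.g. "outstanding" scores above "nice"
    "player_reaction": 0.25,  # a fist pump scores above a polite wave
}

def excitement_level(signals: dict) -> float:
    """Weighted average of normalized signals, scaled to 0-100."""
    score = sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    return round(100 * score, 1)

# A chip with a loud roar, "outstanding" commentary, and a fist pump:
print(excitement_level(
    {"crowd_roar": 0.9, "commentary": 0.95, "player_reaction": 1.0}))
```

An editorial team could then rank each day's shots by this score and push the top few as highlights.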
The system is currently being used on Masters.com and the tournament’s iPhone app. The plan is to eventually give fans more control, letting them filter the videos to show only highlights of their favorite golfers.
It’s the latest application for Watson, the system that first gained fame for handily beating Ken Jennings at Jeopardy in 2011. Watson is used to recommend treatments for patients at some medical facilities, including the Cleveland Clinic and New York’s Sloan-Kettering Cancer Center. And starting this year, H&R Block is using Watson’s A.I. to help with client tax preparation.
Artificial intelligence will take over. But it's not going to be an apocalyptic scenario, unless the latest U.S. military developments actually come up with a mind of their own.
AI will have control over our everyday lives, but only because we want it to. For some people, Siri or Cortana already play this role, as AI assistants.
A perfect example of AI automation is the US stock market, where around 70 percent of trades are executed by automated algorithms. Gartner predicts that by 2020 over 80 percent of all customer interactions will be handled by AI. We can already see automation taking over with services like Amazon Go, which is advertised as an AI-based shopping experience.
But how will AI change the landscape of online shopping and other forms of online commerce? How can business owners leverage this sprawling AI ecosystem to their advantage? Let's find out.
With services like CamFind, people can already leverage the power of artificial intelligence to facilitate their shopping. It's a mix of augmented reality and AI that has the potential to transform how businesses do their marketing, address user experience issues and create revenue streams. Given that, according to some studies, over 50 percent of young shoppers are interested in VR and AR products, AI will see even more implementations like this.
RankBrain is Google's own take on artificial intelligence: an AI-based search algorithm with a lot of practical implications for businesses. Algorithms like these will eventually remove any possibility of gaming search engines to get traffic and increase sales.
Businesses should instead focus the majority of their eCommerce website development efforts on better user experience and quality content. After all, this is what will matter to AI and to the companies that create machine-learning products. Google, for example, likes to stress the importance of user experience. With more information, instant product discovery and the growing pace of online shopping, the average attention span of an online user has decreased by 30 percent over the last 15 years.
Now you have even less time to capture users with exactly the right products they were looking for, delivered in a convenient way. You can also use content to give your brand an opportunity to build a relationship with the user. It's clear from these developments that artificial intelligence and app-based shopping assistants will erode the value of SEO and other technical marketing tricks. And this takes us to our next point.
Although there's a browsing-versus-buying gap when it comes to mobile users (only 16 percent of eCommerce dollars are spent via mobile), AI-based technologies are already here to close it. This is a huge window of opportunity for businesses. We can already see big companies pioneering machine learning in hopes of gaining a competitive advantage.
Macy's teamed up with IBM's Watson to simplify shopping for mobile users. There's a growing number of shopping assistance apps, like Mona, and AI apps from famous brands, like "My Starbucks Barista." All of these products share a single goal: to make mobile the default shopping domain and close that revenue gap, where desktop shoppers are currently at the top.
Companies that really want to exploit this trend have to include mobile app development or mobile UX efforts as part of their growth strategy.
Providing proper customer care is one of the most important aspects of today's business. For example, 73 percent of customers tend to like brands specifically for their support. And since customers prefer human interactions for a quality customer care experience, a growing business might find this specific chunk of its expenditures taxing on its bottom line. But there's no getting away from this important part of running a business — it's six times cheaper to keep a customer than to bring in a new one. And if a business wants to keep customers, adequate customer support is crucial.
Luckily, with the latest advances in AI and machine learning, customer care is getting cheaper every day. Conversational chatbots are very popular right now, and companies like DigitalGenius merge real customer care departments with AI-based solutions.
These products greatly extend the reach of any business and its ability to communicate. Customer care becomes more effortless. This means that companies can discover additional growth and marketing opportunities. There's no excuse for eCommerce businesses that don't have a proper customer care process in place.
Machine-learning tools have greatly simplified modeling and analysis for various business niches. For example, companies like BigML and DataRobot present amazing advances in the world of data science and automated machine learning.
Although these kinds of technologies seem better suited to FinTech industry players, like loan and car insurance companies, there's a window of opportunity for eCommerce businesses that are ready to fully embrace them.
AI is perfect for handling customer data, predicting visitors and their behaviors, analyzing purchasing patterns and doing all kinds of other manipulations with big sets of data.
As machine-learning tools are becoming more prevalent, eCommerce businesses find it easier to implement automation and AI solutions for their specific product or marketing needs. This is the next big thing in eCommerce that will change how businesses address planning and development.
As if businesses didn't have enough on their hands, competitors aren't going to wait for anybody. That's why there are already plenty of services that handle various elements of competitive analysis with the help of artificial intelligence. Price scraping, dynamic pricing and many other competitive-intelligence tasks are now handled by companies like Clavis, Indix or Quicklizard.
It is projected that AI will be responsible for an economic impact of up to $33 trillion in annual growth and cost reduction. Companies that fail to get on board and efficiently utilize machine-learning tools are going to get left behind in terms of revenue and expansion.
In general, AI and machine-learning platforms open a myriad of opportunities for eCommerce businesses. The biggest problem, at this point, is cost/benefit rationalization for many of these practices and products. Companies that adopt automated data science and AI tools early on may suffer from increased costs and the imperfections of many of the products offered in this niche. At the same time, companies that refuse to innovate may soon stagnate and end up on the curb.
Eighty-four percent of businesses see the use of artificial intelligence (AI) as essential to competitiveness, while half view the technology as transformative, according to Tata Consultancy Services' Global Trend Study released in March 2017. AI, once limited to experimentation within large enterprises and R&D labs, is becoming an accessible and cost-feasible tool for all market segments — including entrepreneurs and the businesses they run.
But, as was the case with cloud computing and big data, broader availability of AI does not by default translate into productivity gains, better customer service or operational efficiency. As vendors bake AI into existing workforce communications and collaboration solutions, they must keep an eye firmly fixed on areas where AI can improve productivity. Here are five ways AI can do just that.
OK, so it's not physical paper anymore, but employees are struggling to manage an avalanche of emails, messages, tasks, files and meetings. It's little wonder the typical worker spends nearly 20 percent of their week just searching for and gathering information. The good news: artificial intelligence and machine learning capabilities, delivered through bots, are helping to shift the burden away from workers struggling to manually filter the influx of content, communications and notifications.
AI helps reduce the amount of time workers must dedicate each day to the orchestration of work, leaving more time for the work itself. For example, the ability to search across all of a user's cloud applications to find the documents, messages, social profiles and any content relevant to the conversation or meeting they're having enables workers to collaborate effectively and leads to a more efficient workforce.
300,000 hours a year: That's how much manpower consultants from Bain & Company estimated one large firm was losing as a result of just one weekly executive meeting. Drilling down further, professionals attend more than 60 meetings per month, and consider more than half of these meetings a waste of time.
A key reason meetings are so unproductive is that employees spend an inordinate amount of time preparing for them. Trying to find the right files, notes and tasks associated with each meeting eats several additional hours every week, and if an employee misses a meeting, the problem becomes even worse. AI doesn't have the power to get rid of meetings altogether, but it can make them more productive by surfacing additional information, triggered by workflow demands, that is derived from a deeper understanding of relationships with people, entities and information. This includes past collaboration with meeting participants, additional stakeholders who should be included, and relevant information from cloud applications.
Consider this common scenario: An employee in your increasingly mobile workforce is asked at the last minute to hop on a conference call with an important client. She's at the airport, dialing in from an iPhone. Typically, in this scenario, it would be next to impossible to quickly locate past emails, messages and files pertinent to the client and the topic of the call. But AI flips that script, searching in real time to present the employee only with contextually relevant information for that particular call.
Most businesses provide workers with tools to communicate while working remotely, but that is not the same as tools that keep them engaged and fully productive. Entrepreneurs must extend beyond just hooking employees up with email, internet and network access, to technologies specifically designed to foster collaboration, recognizing that today's work teams are often virtual ones that want to engage and interact from any location, at any time, using any device.
Knowledge workers are interrupted every 3 minutes on average, and it takes up to 8 uninterrupted minutes to re-establish focus. With the average employee suffering through 56 interruptions per day, it is easy to see why these distractions are adding up big time for businesses in the form of lost productivity and efficiency.
Distractions come in many forms, and a major culprit is that employees are getting hit by communications coming from all sides, be it through apps, emails, chat, video, tasks and, yes, in person. From 2014 to 2016, the number of applications that employees use grew 25 percent. Equally problematic is the time spent toggling back and forth between each app and work task.
By reducing the time employees spend moving between apps to communicate, collaborate and retrieve information, AI can reduce these types of distractions and in turn, boost productivity.
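A rough back-of-the-envelope calculation using the figures above shows why this adds up; the 3-minute average recovery time is an assumption (the article only gives an 8-minute worst case):

```python
# Rough estimate of daily focus time lost to interruptions, using
# the figures cited above. The 3-minute average recovery time is an
# assumption; the article only gives "up to 8 minutes" as the worst case.
INTERRUPTIONS_PER_DAY = 56
RECOVERY_MINUTES = 3  # assumed average; worst case is 8

lost_minutes = INTERRUPTIONS_PER_DAY * RECOVERY_MINUTES
print(f"Lost per day: {lost_minutes} min (~{lost_minutes / 60:.1f} h)")
# At the 8-minute worst case, the loss would exceed 7 hours,
# nearly an entire workday.
```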
Taher Behbehani brings over 20 years of operational, product strategy and marketing expertise to BroadSoft. Behbehani frequently writes and speaks on the increasingly millennial, mobile and dispersed workforce, and the technology,…
When we complain, feel lonely or are going through a hard time, it's often said that all we really need is someone to listen to us, not to try to fix things for us. Something like "I hear what you're saying" or "I support you 100 percent" will often work wonders, despite how robotic the supportive listener might feel the offering to be.
Well, then, what if an actual robot were offering this kind of emotional support? Could it be as effective as a human listener?
According to an Israeli research study completed last year, the answer is yes.
Study participants were asked to tell a personal story to a small desktop robot. Half the participants spoke to a robot that was unresponsive, while the other half spoke to a robot who responded with supportive comments and common gestures of understanding and sympathy, like nodding and turning to look the participant in the eye. Researchers found that people can develop attachments to responsive robots, and they have the same feelings and response behaviors they would have had if the listener had been human.
Rapid advances in artificial intelligence (AI) and robotics technology are creating all kinds of possibilities, and raising all kinds of questions, too. However, researchers are discovering that this new technology could help most those who understand it the least: older adults.
"Technologies like Siri and Alexa already exist that can help provide a natural language interface to online resources and that don't require keyboard skills or computer literacy," said Richard Adler, a distinguished research fellow at the Institute for the Future in Palo Alto, California, and a nationally recognized expert on the relationship between technology and aging. "As this kind of technology becomes more powerful, it will become easier to use and more helpful."
In other words, Mom and Dad can interact with technology the same way they would with family and friends.
"There are also interesting experiments underway to use AI for predictive monitoring that can do things like detect changes in gait that could signal a greater risk of falling," Adler said.
At Stanford University, there's a special Artificial Intelligence Assisted Care Research Program in which researchers are developing AI technology that can monitor seniors in their homes, using multiple sensors to detect lifestyle patterns, physical movements, vital signs, even emotions, and then use that data to accurately assess a senior's health, safety and well-being.
Indeed, while AI technology is being developed that can help older adults directly, much of the research is focused on providing support for caregivers, doctors and other health care professionals. The Stanford program is even working on an AI-powered ICU hospital unit that can monitor patients.
"I see AI helping doctors make better diagnoses, managing patients remotely and helping to coordinate caregiving teams that could include both doctors and family members," Adler said.
And while AI like this is being developed to provide practical assistance, it is also being developed to provide human-like companionship, which can help reduce the isolation that often comes with living alone with limited mobility.
That's the theory behind ElliQ (which takes its cue from the aforementioned Israeli research study), a new device that's being called an "autonomous active-aging companion" and that is currently being tested with older adults in San Francisco. ElliQ, which looks more like a friendly extension lamp than a humanoid robot, can speak and respond with a combination of movements, sounds and light displays to convey shyness, assertiveness, and even sympathy and understanding.
For example, ElliQ might prompt you to take a walk if it's a nice day outside, either with a gentle reminder or something more forceful, depending on what it has learned about its owner. Some family photos might arrive on the tablet screen beside the robot, and ElliQ might tilt its little abstract head and say what a beautiful family you have. ElliQ can also provide reminders about taking medications, upcoming doctor's appointments and caregiving schedules, all with a human touch.
If all this sounds like scary, Brave New World-type stuff, well, it is.
While Adler said there are many benefits to using AI technology, like better connecting older adults with caregivers, family members and health professionals, and reducing senior isolation, he has a few warnings.
"I worry that AI and other media will be used to provide pseudo-social interactions rather than actual human interactions," he said, adding that there's also a danger in taking agency and privacy away from older adults in the name of better, more intrusive monitoring by others. "My hope is that AI can be used to facilitate and orchestrate more and better human-to-human interactions. But the jury is still out on which way we'll go."
David McNair handles publicity, marketing, media relations and social media efforts for the Jefferson Area Board for Aging.
Crain’s Cleveland Business (blog)
The future is now: Artificial intelligence in the workplace
Crain’s Cleveland Business (blog)
… to work in flying cars or teleport to our company's lunar outpost, a concept once thought to be outside the realm of possibility is now on the verge of transforming the modern workplace — working side-by-side with robotics capable of artificial …
When Yoky Matsuoka was a small child in Tokyo, she dreamed that one day she would be the next Serena Williams. But while she never made it to Wimbledon, Matsuoka's pioneering work studying robotics and the human brain earned her the prestigious MacArthur "genius" fellowship.
Speaking at Fortune's Brainstorm Tech dinner in San Francisco on Wednesday, Matsuoka discussed her work in the red-hot field of artificial intelligence and her career at Apple (AAPL), Google's (GOOGL) experimental research group, and Nest, the home technology arm of Google's parent company.
Matsuoka first became enamored with robotics when she attended the University of California at Berkeley. At the time, she wanted to merge her love of tennis with robotics, and thought it would be cool to build a tennis buddy with legs and arms and eyes, with computer vision that could track a tennis ball and hit with her like a human.
But building such a robot is very complex. Even with the leaps in artificial intelligence technology in recent years, which have made advances like semi-autonomous driving possible, Matsuoka says that her tennis wonder robot is still 10 to 15 years away.
In the meantime, she’s concentrating on building cutting-edge products that impact people daily. At Nest, for example, she is working on more than just Internet-connected thermostats that adjust temperatures based on peoples habits.
Matsuoka wants the company to build so-called smart appliances that use artificial intelligence "so the home is doing the work for you," she explained. That would include, presumably, every home appliance you can think of, all sucking in data and learning your habits so that you don't have to tell them what to do.
The relationship between humans and machines is becoming ever more intertwined. Already we can see how artificial intelligence (AI) is invading our lives. But soon, everything we think of as human could be intimately tied to computer intelligence, including our sexual and romantic lives.
Computer science pioneer J.C.R. Licklider wrote about the cooperative interaction between men and electronic computers in his 1960 paper, "Man-Computer Symbiosis." And futurist Ray Kurzweil, who has made a number of correct predictions, said that by 2045 we will be able to multiply our intelligence a billionfold by wirelessly connecting our neocortex to an artificial neocortex in the cloud.
It's hard to imagine. It sounds way too far-fetched. But it's based on the exponential growth of computer processing power.
A sex doll company unveiled a robot that can speak and be programmed to have a personality. Created by Realbotix and called Harmony 2.0, it is considered to be the world's first AI sex robot. Its creators say it has a persistent memory, so it can bring up information from previous conversations. The company's website states that Harmony 2.0 has been created with the aim of:
alleviating loneliness and helping individuals to conquer social anxiety and intimacy phobia.
But soon it may become quite common to develop intimate relationships with robots, something previously the preserve of fiction. For example, in the film Her (2013), heartbroken Theodore Twombly (Joaquin Phoenix) develops a relationship with Samantha (Scarlett Johansson), a computer operating system. And in Charlie Brooker's series Black Mirror, the Season 2 episode "Be Right Back" tells the story of a woman who loses her boyfriend in a car accident. She replaces him with an AI duplicate.
Michael Harre, a lecturer in Complex Systems at the University of Sydney, recently said that workplaces will soon include AI. Indeed, as The Canary previously reported, Japanese insurance firm Fukoku Mutual Life Insurance is replacing its employees with an AI system.
Harre believes that the introduction of AI in the workplace will raise some interesting questions about our sense of self. He said:
"The fact that we will be interacting with the appearance of consciousness in things that are clearly not biological will be enough for us to at least unconsciously revise what we think consciousness is."
There are worries about AI making all of us unemployed. But Harre isn't so pessimistic. He believes that people who are flexible and open to learning will still be very much in demand. Also, it's not clear whether unique human traits such as intuition, empathy, and creativity can be computed.
But a study completed by Israeli researchers last year did suggest that a robot could be as effective a listener as a person.
In the experiment, half of the participants told a personal story to a robot that was unresponsive. The other half spoke to a robot that responded with supportive comments, as well as gestures that indicated understanding and sympathy, like nodding and looking the participant in the eyes. The study found that people can develop an attachment to sympathetic robots. They also showed the same feelings and responses they would have if the listener were an actual person.
The hope is that this human-like companion support will offer relief to people who suffer from isolation, such as elderly people. ElliQ is a robot being tested with elderly people in San Francisco. It can speak and respond with a variation of movements, sounds, and lights which offer emotional support. It can also offer suggestions and reminders, like a caregiver.
We don't know if these interactions with robots will improve human-to-human interactions or replace them to an extent. Right now, for many of us, our smartphones are more or less another appendage. We can't go anywhere without them. One paper supports the widely held notion that they're ruining real-life conversations. So as AI starts to invade more aspects of our lives, what we regard as human, and how we communicate, is likely to be radically altered.
Artificial Intelligence Still Needs a Human Touch
Wall Street Journal (subscription)
Artificial intelligence has been flexing its creative muscles recently, making images, music, logos and other designs. In most cases, though, humans are still very much a part of the design process. When left to its own devices, AI software can create …
Artificial intelligence could enhance the decision-making capacities of human beings and make us much better than we are. Or, it could destroy the human race entirely. We could soon find out.
In an engrossing lecture Friday morning, political scientist and software developer Clifton van der Linden said the world may be on the brink of a super machine intelligence that has the full range of human intelligence, as well as autonomous decision-making. And that emerging reality has many of the great human minds worried about our future.
Van der Linden is the co-founder and CEO of Vox Pop Labs, a software company that developed Vote Compass, a civic engagement application that shows voters how their views align with those of candidates running for election. Over two million people have used it to gauge where they stand with candidates in recent federal and provincial election campaigns.
He was the keynote speaker at the inaugural Graduate Conference of the University of Guelph's Political Science and International Development Studies departments, which had as its theme "Politics in the Age of Artificial Intelligence."
The conference was held all day Friday at The Arboretum Centre, and attracted political science graduate students from across the province.
Van der Linden has his finger on the pulse of current AI development. It is a rapid, frenetic pulse, changing so quickly that few are able to fathom its implications or consequences for political systems and society in general. And those consequences could be disastrous.
"Technology, and especially AI technology, is evolving at an unprecedented rate," he said. Last year, Google's Go computer beat the world's most dominant Go master, a feat previously believed to be impossible. There are currently self-driving cars in Pittsburgh, and weapons that can target and strike without human intervention.
AI is emerging in the medical and legal fields, and some believe it could one day replace judges in courtrooms, delivering better trial decisions than fallible human judges. Some even envision a time when sex workers will be replaced by robots.
"AI is changing the landscape in extraordinary ways," he said. Many see it as our biggest existential threat.
One area where artificial intelligence is exploding is in the world of Big Data. And one highly influential branch of that is in the gathering of personal information based on Facebook, Twitter and Google activity.
Information is formulated by machine algorithms into profiles for the purpose of strategically targeting so-called programmatic advertising campaigns. Our profiles are then auctioned off in milliseconds to advertisers using AI bidding technology.
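In most real programmatic systems, those millisecond auctions follow second-price rules: the highest bidder wins the impression but pays the runner-up's bid. A minimal sketch, with invented bidder names and prices:

```python
# Minimal sketch of the second-price auction commonly used in
# programmatic advertising: the highest bid wins the ad slot, but
# the winner pays the second-highest bid. The bidders and amounts
# below are invented for illustration.

def run_auction(bids: dict) -> tuple:
    """Return (winner, price_paid) under second-price rules."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Three hypothetical ad networks bidding (dollars CPM) on one
# user profile as the page loads:
print(run_auction({"ad_net_a": 2.40, "ad_net_b": 3.10, "ad_net_c": 1.75}))
```

The whole exchange, from profile lookup to winner selection, completes before the page finishes rendering.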
We are all being tracked throughout the Internet, he said. Wherever we visit online, we leave evidence of our visit.
It is now believed that such technology was used during the recent American election that brought Donald Trump to power, whereby swing voters were specifically targeted for election advertisements based on their Facebook likes and other online activity, van der Linden said.
This type of microtargeted advertising could become a staple of future election campaigns, focusing specifically on swing voters who are likely to go out and vote.
On the bright side, while human beings are believed to be incapable of perfectly rational choices, that is what intelligent machines do best. AI has great potential as a supplement to our decision-making processes, enabling us to optimize our preferences and make more effective choices.
It is difficult to know where AI technology is leading us, but it is clear that it is now being used to amass power and influence among the elite of society, van der Linden concluded.
Government policy based on a strong understanding of the implications of the technology is necessary. Critical inquiry and robust research are a must.
Van der Linden ended his presentation with a call to action to those present to take on the mantle of investigation into AI's repercussions for the electoral system and democracy.
The conference explored a broad range of subjects throughout the day, including international development, food security, and populist politics.
Artificial Intelligence (AI) is hot. One breathless press release predicted that by 2025, 95% of all customer interactions will be powered by AI.
AI is not new, and it's not just about bots for self-service or self-driving cars. In general usage it refers to advanced analytics rather than rules-based process automation. It can include natural language processing (e.g. Alexa, Siri, Watson), decision-making using complex algorithms, and machine learning, where the algorithms improve over time.
Here's one definition from AlanTuring.net:
Artificial Intelligence (AI) is usually defined as the science of making computers do things that require intelligence when done by humans. AI has had some success in limited, or simplified, domains. However, the five decades since the inception of AI have brought only very slow progress, and early optimism concerning the attainment of human-level intelligence has given way to an appreciation of the profound difficulty of the problem.
And another from Wikipedia:
Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of intelligent agents: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term artificial intelligence is applied when a machine mimics cognitive functions that humans associate with other human minds, such as learning and problem solving (known as Machine Learning).
IBM has been pushing Watson (of Jeopardy fame), Salesforce.com launched Einstein last year, and my inbox is full of press releases and briefing requests this year from vendors big and small, all touting AI.
My question is: Can AI improve the Customer Experience? Please answer yes or no and explain in the comments below. Examples appreciated!
According to the Office for Budget Responsibility, the NHS budget will need to increase by £88 billion over the next 50 years if it is to keep pace with the rising demand for healthcare in the UK. But with the 2017 Budget leaning heavily towards building up Brexit reserves, and allocating a mere £100 million for 100 onsite GP treatment centres in A&Es across England, the NHS is justifiably bracing itself for a painful future.
With £20 billion worth of cuts scheduled by 2020, combined with fierce warnings that the UK's health services are on the edge of an unprecedented crisis, the urgent call for solutions to be brought to the healthcare table has intensified.
With deep cuts looming, it's time to consider properly how artificial intelligence could answer this call, and how its technologies could provide the healthcare industry with some much-needed respite and real solutions to meet the ever-spiralling rise in demand for healthcare.
The issue of voluminous data that draws relentlessly on healthcare professionals' resources is one that could benefit significantly from the implementation of an AI-based system.
It has been estimated that it would take at least 160 hours of reading a week just to keep up with new medical knowledge as it’s published, let alone consider its relevance. It soon becomes apparent then, that it would be physically impossible for a doctor to be able to process all of the patient information as well as digest insight from new materials and medical journals, and still be able to treat patients.
Imagine a scenario in which supercomputers could process that information (and far more efficiently, too): making sense of the sheer quantity of data, flagging anything that might be pertinent to a patient's case, and giving doctors and nurses access to up-to-the-minute, highly applicable insight in the field.
Such an AI system would effectively unshackle medical professionals from these time-consuming processes, freeing them to focus on work that requires human skills. Contrary to the popular belief that AI will result in mass job losses, AI systems in this instance would augment the roles and skills of human workers, performing the tasks they don't have the time or capacity to do. Moreover, this rapid analysis and provision of data would make human decision-making more efficient. Rather than replacing jobs, the AI systems would empower human services.
This is exactly what IBM Watson has been working on in collaboration with Memorial Sloan Kettering Cancer Center. World-renowned oncologists have been training Watson to compare a patient's medical information against a vast array of treatment guidelines and research, providing recommendations to physicians on a patient-by-patient basis.
Supporting evidence accompanies each recommendation, to provide transparency and aid the doctor's decision-making, and Watson updates its suggestions as new data is added. Watson is being used to facilitate access to the best of oncology's collective knowledge, demonstrating how the approach could be applied across the entire medical profession.
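The pattern described above, matching a patient's record against guidelines and returning recommendations with their supporting evidence, can be sketched very simply. This is a hypothetical illustration only: the guideline entries, field names and matching rule are invented for the example, not drawn from Watson.

```python
# Hypothetical sketch of evidence-backed treatment recommendation.
# The guideline data below is invented purely for illustration.
GUIDELINES = [
    {"condition": "breast cancer", "stage": 2,
     "treatment": "chemotherapy + surgery", "evidence": "guideline v2, trial A"},
    {"condition": "breast cancer", "stage": 1,
     "treatment": "surgery + radiation", "evidence": "guideline v2, trial B"},
]

def recommend(patient):
    """Return all guideline entries matching the patient's condition and stage,
    each carrying the evidence that supports it."""
    return [g for g in GUIDELINES
            if g["condition"] == patient["condition"]
            and g["stage"] == patient["stage"]]

for rec in recommend({"condition": "breast cancer", "stage": 1}):
    print(rec["treatment"], "-- evidence:", rec["evidence"])
```

A real system would rank thousands of entries probabilistically and re-score them as new literature arrives; the point here is only the shape of the output, a recommendation paired with its evidence.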
Having recognised the potential that AI tech can bring to the wider industry, community healthcare service Fluid Motion has rolled out pilot trials in a bid to overcome the challenges they face in relation to cost, staffing, efficient decision-making processes and data crunching.
Born from the frustration of facing barriers presented by the current healthcare system, Fluid Motion's group aquatic therapy programme is a tailored rehabilitation concept designed to be both fun and beneficial for people with a range of musculoskeletal conditions, with the overall aim of treating, managing and preventing such conditions.
With one in five GP appointments related to musculoskeletal disorders (translating into a cost to the UK economy of £24.8 billion per year in sick leave), the need for fast and effective healthcare solutions is clear. But the challenge, as Ben Wilkins of Fluid Motion points out, is that while these programmes are successful, there simply aren't enough professionals to sustain the growing demand for the service. Additionally, the very nature of the programmes means they depend heavily on vast amounts of data input and analysis to determine the right solution.
Fluid Motion recognised that if it could generate these rehabilitation plans automatically, it could lower its staff costs and increase its reach: fitness instructors could quickly generate a high-quality tailored plan based on a model of the physiotherapists' and osteopaths' expertise, captured in Rainbird, an AI-powered cognitive reasoning platform.
Rainbird modelled the knowledge of Fluid Motion's qualified physiotherapists and osteopaths, including the suitability of numerous exercises for individual patient symptoms, and exposed it through an interface accessible to Fluid Motion's network of fitness instructors. The tool allowed them to create a tailored, illustrated rehabilitation plan for each patient, based on the results of an initial interaction with a virtual physiotherapist or osteopath.
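At its core, this kind of knowledge model encodes clinicians' judgements about which exercises suit, or should be avoided for, which symptoms, and then assembles a plan from a patient's reported symptoms. The rules and exercise names below are invented for illustration and are not Fluid Motion's or Rainbird's actual model.

```python
# Illustrative rule base mapping symptoms to exercise suitability.
# All symptom/exercise names are hypothetical.
RULES = {
    "knee pain":       {"suitable": ["water walking", "leg swings"],
                        "avoid": ["jump squats"]},
    "lower back pain": {"suitable": ["gentle floating", "water walking"],
                        "avoid": ["torso twists"]},
}

def build_plan(symptoms):
    """Combine the rules for each reported symptom into one plan,
    removing any exercise that another symptom contraindicates."""
    plan, avoid = set(), set()
    for symptom in symptoms:
        rule = RULES.get(symptom)
        if rule:
            plan.update(rule["suitable"])
            avoid.update(rule["avoid"])
    return sorted(plan - avoid), sorted(avoid)

exercises, excluded = build_plan(["knee pain", "lower back pain"])
```

In practice the reasoning is probabilistic and interactive (the virtual physiotherapist asks follow-up questions), but the principle of modelling expert judgement as reusable rules is the same.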
The next step will be to give patients direct access so that they can create their own rehabilitation plans. Patients will be able to give feedback so that Rainbird can learn and, where necessary, adapt their plan or recommend alternatives if specific exercises are uncomfortable.
Fluid Motion has since been able to track and reflect on participants progress in real-time, meaning the data can be utilised to improve clinical decision-making in rehabilitative healthcare. The application of AI helps patients get better sooner, and prevents pain and disability for longer.
The time and cost savings from implementing such a programme are substantial. According to Wilkins, the cumulative cost of a healthcare professional per session was £75 (£50 for hiring an osteopath or physiotherapist for the whole session and £25 to pay them to review feedback data and make recommendations). Fluid Motion sessions now cost the company only £35 (for a Fluid Motion fitness instructor) plus £25 (for pool hire). With this model, Fluid Motion can charge participants less than the average price of a swim to attend sessions.
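A quick back-of-envelope check of the session costs quoted above; the assumption that the £25 pool hire also applied under the old, professional-led model is mine, made so the two totals are comparable.

```python
# Per-session costs in GBP, from the figures quoted in the article.
old_professional = 50   # hiring an osteopath/physiotherapist for the session
old_review       = 25   # paying them to review feedback data afterwards
pool_hire        = 25   # assumed to apply under both models

old_total = old_professional + old_review + pool_hire  # professional-led session
new_total = 35 + pool_hire                             # instructor-led session

saving = old_total - new_total
saving_pct = 100 * saving / old_total
print(f"£{old_total} -> £{new_total}: saving £{saving} ({saving_pct:.0f}%)")
```

On these assumptions the saving is roughly 40 per cent per session, which is consistent with the article's report that sessions moved from grant-subsidised to break-even.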
Up to this point, Fluid Motion had been subsidising cost with grant payments, but now the company breaks even each session. Moreover, this is a model which is scalable. As a result of this initiative, Fluid Motion is now working to become an organisation that provides support and treatment for musculoskeletal health conditions alongside the NHS.
Indeed, the Fluid Motion case study clearly illustrates how challenges in healthcare can be overcome through the implementation of AI systems, and also highlights the potential time and cost saving benefits that the NHS could reap, if such an approach were adopted.
By mapping knowledge of some of the medical roles that are in high demand, there are many ways that the technology can help to streamline some of the more rudimentary elements of those roles. This would free up time to devote to face-to-face consultancy that would have the most impact for patients, reduce waiting times and even enable medical professionals to engage in a more personalised service.
This application of AI has the potential to address the rise in demand for NHS services while ensuring that doctors and nurses spend more time doing the work they are trained to do: treating patients to the best of their ability. Indeed, with the assistance of AI-powered technologies, the NHS may not only survive the crisis but, like the phoenix, rise from the ashes to achieve its original goal of bringing good healthcare to all.
Katie Gibbs, Head of Accelerated Consulting, Aigen Image Credit: John Williams RUS / Shutterstock
What's AI, and what's not
Artificial intelligence has become as meaningless a description of technology as "all natural" is when it refers to fresh eggs. At least, that's the conclusion reached by Devin Coldewey, a TechCrunch contributor.
AI is also often mentioned as a potential cybersecurity technology. At the recent RSA conference in San Francisco, RSA CTO Zulfikar Ramzan advised potential users to consider AI-based solutions carefully, in particular machine learning-based solutions, according to an article on CIO.
AI-based tools are not as new or productive as some vendors claim, he cautioned, explaining that machine learning-based cybersecurity has been available for over a decade via spam filters, antivirus software and online fraud detection systems. Plus, such tools suffer from marketing hype, he added.
Even so, AI tools can still benefit those with cybersecurity challenges, according to the article, which noted that IBM had announced its Watson supercomputer can now also help organizations enhance their cybersecurity defenses.
AI has become a popular buzzword, he said, precisely because it's so poorly defined. Marketers use it to create an impression of competence and to more easily promote intelligent capabilities as trends change.
The popularity of the AI buzzword, however, has to do at least partly with the conflation of neural networks with artificial intelligence, he said. Without getting too into the weeds, the two are not interchangeable — but marketers treat them as if they are.
AI vs. neural networks
By using the human brain and large digital databases as metaphors, developers have been able to show ways AI has at least mimicked, if not substituted for, human cognition.
The neural networks we hear so much about these days are "a novel way of processing large sets of data by teasing out patterns in that data through repeated, structured mathematical analysis," Coldewey wrote.
The method is inspired by the way the brain processes data, so in a way the term artificial intelligence is apropos, but in another, more important way it's misleading, he added. While these pieces of software are interesting, versatile and use human thought processes as inspiration, they're not intelligent.
AI analyst Maureen Caudill, meanwhile, described artificial neural networks (ANNs) as algorithms or actual hardware loosely modeled after the structure of the mammalian cerebral cortex but on much smaller scales.
A large neural network might have hundreds or thousands of processor units, whereas a brain has billions of neurons.
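Coldewey's "structured mathematical analysis" is not a figure of speech: a single layer of such a network is just a matrix multiply followed by a simple nonlinearity, repeated many times over the data. A minimal sketch (the sizes and random weights are arbitrary, chosen only to show the shape of the computation):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # weights: 4 inputs feeding 3 "processor units"
b = np.zeros(3)               # per-unit bias terms

def layer(x):
    """One neural-network layer: linear map, then ReLU nonlinearity."""
    return np.maximum(0, x @ W + b)

x = np.array([1.0, 0.5, -0.2, 0.3])  # a 4-value input pattern
out = layer(x)                        # 3 non-negative unit activations
```

"Learning" consists of nudging `W` and `b` to reduce an error measure over many examples; nothing in the arithmetic resembles understanding, which is Coldewey's point.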
Caudill, the author of Naturally Intelligent Systems, said that while researchers have generally not been concerned with whether their ANNs resemble actual neurological systems, they have built systems that have accurately simulated the function of the retina and modeled the eye rather well.
So what is AI?
There are about as many definitions of AI as there are researchers developing the technology.
The late MIT professor Marvin Minsky, often called the father of artificial intelligence, defined AI as the science of making machines do those things that would be considered intelligent if they were done by people.
Infosys CEO Vishal Sikka sums up AI as any activity that used to only be done via human intelligence that now can be executed by a computer, including speech recognition, machine learning and natural language processing.
When someone talks about AI, or machine learning, or deep convolutional networks, what theyre really talking about is a lot of carefully manicured math, Coldewey recently wrote.
In fact, he said, the cost of a bit of fancy supercomputing is mainly what stands in the way of using AI in devices like phones or sensors that now boast comparatively little brain power.
If the cost could be cut by a couple orders of magnitude, he said, AI would be unfettered from its banks of parallel processors and free to inhabit practically any device.
The federal government sketched out its own definition of AI last October. In a paper on preparing for the future of AI, the National Science and Technology Council surveyed the current state of AI and its existing and potential applications.
The panel reported progress made on "narrow AI," which addresses single-task applications, including playing strategic games, language translation, self-driving vehicles and image recognition.
Narrow AI now underpins many commercial services such as trip planning, shopper recommendation systems, and ad targeting, according to the paper.
The opposite end of the spectrum, sometimes called artificial general intelligence (AGI), refers to a future AI system that exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive tasks. NSTC said those capabilities will not be achieved for a decade or more.
In the meantime, the panel recommended that the federal government explore ways for agencies to apply AI to their missions by creating organizations to support high-risk, high-reward AI research. Models for such an organization include the Defense Advanced Research Projects Agency and the Department of Education's proposed ARPA-ED, which was designed to support research on whether AI could help significantly improve student learning.
Interventional radiologists at the University of California at Los Angeles (UCLA) are using technology found in self-driving cars to power a machine learning application that helps guide patients’ interventional radiology care, according to research presented today at the Society of Interventional Radiology’s 2017 Annual Scientific Meeting.
The researchers used cutting-edge artificial intelligence to create a “chatbot” interventional radiologist that can automatically communicate with referring clinicians and quickly provide evidence-based answers to frequently asked questions. This allows the referring physician to provide real-time information to the patient about the next phase of treatment, or basic information about an interventional radiology treatment.
“We theorized that artificial intelligence could be used in a low-cost, automated way in interventional radiology as a way to improve patient care,” said Edward W. Lee, M.D., Ph.D., assistant professor of radiology at UCLA’s David Geffen School of Medicine and one of the authors of the study. “Because artificial intelligence has already begun transforming many industries, it has great potential to also transform health care.”
In this research, deep learning was used to understand a wide range of clinical questions and respond appropriately in a conversational manner similar to text messaging. Deep learning is a technology inspired by the workings of the human brain, where networks of artificial neurons analyze large datasets to automatically discover patterns and “learn” without human intervention. Deep learning networks can analyze complex datasets and provide rich insights in areas such as early detection, treatment planning, and disease monitoring.
“This research will benefit many groups within the hospital setting. Patient care team members get faster, more convenient access to evidence-based information; interventional radiologists spend less time on the phone and more time caring for their patients; and, most importantly, patients have better-informed providers able to deliver higher-quality care,” said co-author Kevin Seals, MD, resident physician in radiology at UCLA and the programmer of the application.
The UCLA team enabled the application, which resembles online customer service chats, to develop a foundation of knowledge by feeding it more than 2,000 example data points simulating common inquiries interventional radiologists receive during a consultation. Through this type of learning, the application can instantly provide the best answer to the referring clinician’s question. The responses can include information in various forms, including websites, infographics, and custom programs. If the tool determines that an answer requires a human response, the program provides the contact information for a human interventional radiologist. As clinicians use the application, it learns from each scenario and progressively becomes smarter and more powerful.
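The core pattern, matching an incoming question against a bank of example inquiries and escalating to a human when no match is strong enough, can be sketched without any deep learning at all. This is a hypothetical illustration of that routing logic (the example questions, answers and similarity threshold are invented), not the UCLA team's actual Watson-based implementation.

```python
# Hypothetical Q&A routing: answer by nearest example, else hand off to a human.
EXAMPLES = [
    ("how should the patient prepare for a biopsy",
     "Fasting is not required; follow the anticoagulant protocol."),
    ("what are the risks of an ivc filter",
     "Risks include filter migration and thrombosis; see the infographic."),
]
FALLBACK = "Please contact the on-call interventional radiologist."

def answer(question, threshold=0.3):
    """Return the reply for the most similar example question,
    or the human-fallback message if nothing matches well enough."""
    q = set(question.lower().split())
    best_score, best_reply = 0.0, FALLBACK
    for example, reply in EXAMPLES:
        e = set(example.split())
        score = len(q & e) / len(q | e)   # Jaccard word overlap
        if score > best_score:
            best_score, best_reply = score, reply
    return best_reply if best_score >= threshold else FALLBACK
```

A deep-learning system replaces the word-overlap score with a learned text representation, which is what lets it generalise beyond the 2,000 seeded examples, but the answer-or-escalate structure is the same.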
The researchers used a technology called Natural Language Processing, implemented using IBM’s Watson artificial intelligence computer, which can answer questions posed in natural language and perform other machine learning functions. This prototype is currently being tested by a small team of hospitalists, radiation oncologists and interventional radiologists at UCLA.
“I believe this application will have phenomenal potential to change how physicians interact with each other to provide more efficient care,” said John Hegde, MD, resident physician in radiation oncology at UCLA. “A key point for me is that I think it will eventually be the most seamless way to share medical information. Although it feels as easy as chatting with a friend via text message, it is a really powerful tool for quickly obtaining the data you need to make better-informed decisions.”
As the application continues to improve, researchers aim to expand the work to assist general physicians in interfacing with other specialists, such as cardiologists and neurosurgeons. Implementing this tool across the health care spectrum, said Lee, has great potential in the quest to deliver the highest-quality patient care.
Abstract 354: “Utilization of Deep Learning Techniques to Assist Clinicians in Diagnostic and Interventional Radiology: Development of a Virtual Radiology Assistant.” K. Seals; D. Dubin; L. Leonards; E. Lee; J. McWilliams; S. Kee; R. Suh; David Geffen School of Medicine at UCLA, Los Angeles, CA. SIR Annual Scientific Meeting, March 4-9, 2017. This abstract can be found at sirmeeting.org.
Materials provided by Society of Interventional Radiology. Note: Content may be edited for style and length.
When IBM CEO Ginni Rometty delivered the opening keynote at HIMSS17, she effectively set the stage for artificial intelligence, cognitive computing and machine learning to be prevalent themes throughout the rest of the conference.
Other top trends buzzed about in Orlando: cloud computing and population health.
Healthcare IT News asked our readers where they stand in terms of these initiatives. And we threw in a bonus question to figure out what their favorite part of HIMSS17 was.
Some 70 percent of respondents are either actively planning or researching artificial intelligence, cognitive computing and machine learning technologies while 7 percent are rolling them out and 1 percent have already completed an implementation.
A Sunday afternoon session featuring AI startups demonstrated the big promise of such tools as well as the persistent questions, skepticism and even fear when it comes to these emerging technologies.
Whereas AI was considerably more prominent in the HIMSS17 discourse than in years past, population health management has been among the top trends for the last couple of conferences.
It's not entirely surprising, then, that more respondents, 30 percent, are either rolling out or have completed a rollout of population health technologies, while 50 percent are either researching or actively planning to do so.
One striking similarity between AI and population health is the 20 percent of participants responding that they have no interest in either. For cloud computing, meanwhile, only 7 percent indicated they are not interested.
Though cloud computing is not a new concept, it is widely seen as one in the HIPAA-sensitive world of personally identifiable and protected health information. Two overarching themes emerged at the pre-conference HIMSS and Healthcare IT News Cloud Computing Forum on Sunday. First, security is not a core competency of hospitals and health systems, so many cloud providers can protect health data better than they can. Second, the ability to spin up server, storage and compute resources on Amazon, Google or Microsoft is enabling a whole new era of innovation that simply is not possible when hospitals must invest in their own infrastructure to run proofs-of-concept and pilot programs. The Centers for Medicare and Medicaid Services, for instance, cut $5 million from its annual infrastructure budget by opting for infrastructure-as-a-service.
Here comes the bonus question: What was your favorite part of HIMSS17?
The show floor won hands-down, followed by education sessions, then networking events; keynotes and parties/nightlife finished in a neck-and-neck tie.
This article is part of our ongoing coverage of HIMSS17. Visit Destination HIMSS17 for previews, live reporting from the show floor and coverage after the conference.
As artificial intelligence tools become smarter and easier to use, the threat that they may take human jobs is real. They might also just make people much better at what they do, revolutionizing the workday for many.
"What a bulldozer was to physical labor, AI is to data and to thought labor," said Naveen Rao (pictured), Ph.D., vice president and general manager of artificial intelligence solutions at Intel.
Rao told John Furrier (@furrier), host of theCUBE, SiliconANGLE Media's mobile livestreaming studio, during South by Southwest in Austin, Texas, that there are many examples of how AI can help streamline processes; one would be an insurance firm needing to read millions of pages of text to assess risk.
"I can't do that very easily, right? I have to have a team of analysts run through, write summaries. These are the kinds of problems we can start to attack," he said. AI can turn a computer into a data inference machine, not just a way to automate compute tasks, he added.
Improved user interfaces are driving the democratization of AI for people doing regular jobs, Rao pointed out. A major example of how AI can bring a technology to the masses is the iPod, which in turn informed the smartphone.
"Storing music in a digital form in a small device was around before the iPod, but when they made it easy to use, that sort of gave rise to the smartphone," Rao said.
Rao sees fascinating advances in AI robot development, driven in part by 3D printing and the maker revolution lowering mechanical costs.
"That, combined with these techniques becoming mature, is going to come up with some really cool stuff. We're going to start seeing 'The Jetsons' kind of thing," he said.
Watch the complete video interview, and be sure to check out more of SiliconANGLE's and theCUBE's coverage of South by Southwest (SXSW). (Disclosure: Intel sponsors some SXSW segments on SiliconANGLE Media's theCUBE. Neither Intel nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
As robots continue to play a growing role in our daily lives, white-collar jobs in many sectors, including accounting and financial operations, are quickly becoming a thing of the past. Businesses are gravitating towards software to automate bookkeeping tasks, saving considerable amounts of both time and money. In fact, since 2004, the number of full-time finance employees at large companies has declined a staggering 40 percent, to roughly 71 employees for every $1 billion of revenue, down from 119 employees, according to a report by top consulting firm The Hackett Group.
These numbers show that instead of resisting change, companies are embracing the efficiencies of this new technology and exploring how individual businesses can leverage automation and, more importantly, artificial intelligence, a.k.a. robots. A quick aside on robots versus automation: as technology becomes more sophisticated, and particularly with the use of artificial intelligence (AI), we're able to automate multiple steps in a process. The concept of Robotic Process Automation (RPA), or robots for short, has emerged to capture this notion of more sophisticated automation of everyday tasks.
Today, there is more data available than ever, and computers are increasingly capable of leveraging these mountains of information. Accordingly, many technology providers are focusing on making it as easy as possible for businesses to implement and use their solutions. Whether it's by easing the support and management burden via Software as a Service (SaaS) delivery or through more turnkey offerings that embed best practices in the solution, one can see a transformation from simply providing tools to providing a level of robotic automation that feels more like a service offering than a technology.
Of course, the name of the game for any business is speed, efficiency and cost reduction. It is essential to embrace technologies that increase efficiency and savings because, like it or not, your competitors will. Some companies stick with old-school approaches, but they end up serving small niches of customers and seeing less overall growth.
As long as a technology-based solution is less expensive and performs at least as well as the alternatives, market forces will drive companies to implement automated technologies. In particular, robotic artificial intelligence is here to stay. In the modern work environment, automation means much more than compiling numbers; it means making intelligent observations and judgements based on the data reviewed.
If companies want to ensure future success, it's imperative to accept and embrace the capabilities provided by robots. Artificial intelligence won't always be perfect, but it can dramatically improve your work output and add to your bottom line. It's important to emphasize that the goal is not to curtail employees but to find ways to leverage the robots to automate everyday tasks and detail-oriented processes, and to focus the employees on higher-value activities.
Let's use an example: controlling spend on Travel & Expense (T&E) by auditing expense reports. When performing an audit, many companies randomly sample roughly 20% of expense reports to identify potential waste and fraud. If you process 500 expense reports in a month, 100 of those reports would be audited. The problem is that less than 1% of expense reports contain fraud or serious risks (cite SAR report), meaning the odds are that 99% of the reports reviewed were a waste of time and resources, and the primary abuser of company funds most likely went unnoticed.
By employing a robot to identify risky-looking expense reports and configuring the system to be hyper-vigilant, a sufficiently sophisticated AI system will flag about 7% of expense reports for fraud, waste and misuse (7% is the average Oversight Systems has seen across 20 million expense reports). Looking back at our example, this means that out of 500 expense reports, employees would only have to review 35 instead of the 100 that would have been audited. Though these are likely not all fraudulent, they may provide other valuable information, such as noting when an employee needs a reminder about company travel policy.
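The arithmetic in this example is worth making explicit, since it is the whole case for targeted auditing over random sampling:

```python
# Random 20% sampling versus AI-targeted flagging, using the article's figures.
reports_per_month = 500

random_sample = int(reports_per_month * 0.20)        # 100 reports audited at random
ai_flag_rate  = 0.07                                 # share flagged by the AI system
ai_flagged    = round(reports_per_month * ai_flag_rate)  # 35 reports to review

reduction = random_sample - ai_flagged               # 65 fewer manual reviews
print(f"Review {ai_flagged} targeted reports instead of {random_sample} random ones")
```

The saving is twofold: 65 fewer reviews per month, and the 35 reports that are reviewed are the ones the system judged risky, rather than a random slice of which roughly 99% is clean.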
While it may sound like robots are eliminating human jobs, it's important to note that they can also be extremely valuable working collaboratively with employees. Although the example above focused on fraud, the same productivity leverage is available for errors, waste and misuse in financial processes. With the help of robots, we can spend less time hunting for issues and more time addressing them. By working together with the technology, an employee has a higher chance of rooting out fraud and will have the bandwidth to work with company travelers to influence their future behavior.
It is clear that in order to ensure future profitability, it is crucial for businesses to understand and take advantage of the significant role that robots can play in dramatically enhancing financial operations.
Although China could initially only observe the advent of the Information-Technology Revolution in Military Affairs, the People's Liberation Army might presently have a unique opportunity to take advantage of the military applications of artificial intelligence to transform warfare. When the United States first demonstrated its superiority in network-centric warfare during the first Gulf War, the PLA was forced to confront the full extent of its relative backwardness in information technology. Consequently, the PLA embarked upon an ambitious agenda of informatization. To date, the PLA has advanced considerably in its capability to utilize information to enhance its combat capabilities, from long-range precision strike to operations in space and cyberspace. Currently, PLA thinkers anticipate the advent of an intelligentization Revolution in Military Affairs that will result in a transformation from informatized ways of warfare to future intelligentized warfare. For the PLA, this emerging trend heightens the imperative of keeping pace with the U.S. military's progress in artificial intelligence, after its failure to do so in information technology. Concurrently, the PLA seeks to capitalize upon the disruptive potential of artificial intelligence to leapfrog the United States through technological and conceptual innovation.
For the PLA, intelligentization is the culmination of decades of advances in informatization. Since the 1990s, the PLA has been transformed from a force that had not even completed the process of mechanization into a military power ever more confident in its capability to fight and win informatized wars. Despite continued challenges, the PLA appears to be on track to establish the system-of-systems operations capability integral to integrated joint operations. The recent restructuring of the PLA's Informatization Department further reflects the progression and evolution of its approach. These advances in informatization have established the foundation for the PLA's transition towards intelligentization. According to Maj. Gen. Wang Kebin, director of the former General Staff Department Informatization Department, China's information revolution has been progressing through three stages: first digitalization, then networkization and now intelligentization. The PLA has succeeded in introducing information technology into platforms and systems; progressed towards integration, especially of its C4ISR capabilities; and seeks to advance towards deeper fusion of systems and sensors across all services, theater commands and domains of warfare. This final stage could be enabled by advances in multiple emerging technologies, including big data, cloud computing, mobile networks, the Internet of Things and artificial intelligence. In particular, the complexity of warfare under conditions of intelligentization will necessitate a greater degree of reliance upon artificial intelligence. Looking forward, artificial intelligence is expected to replace information technology, which served as the initial foundation for its emergence, as the dominant technology for military development.
Although the PLA has traditionally sought to learn lessons from foreign conflicts, its current thinking on the implications of artificial intelligence has been informed not by a war but by a game. AlphaGo's defeat of Lee Sedol in the ancient Chinese game of Go has seemingly captured the PLA's imagination at the highest levels. From the perspective of influential PLA strategists, this "great war of man and machine" decisively demonstrated the immense potential of artificial intelligence to take on an integral role in command and control, and also decision-making, in future warfare. Indeed, the success of AlphaGo is considered a turning point that demonstrated the potential of artificial intelligence to engage in complex analyses and strategizing comparable to that required to wage war, not only equaling human cognitive capabilities but even contributing a distinctive advantage that may surpass the human mind. In fact, AlphaGo has even been able to invent its own, novel techniques that human players of this ancient game had never devised. This capacity to formulate unique, even superior strategies implies that the application of artificial intelligence to military decision-making could also reveal unimaginable ways of waging war. At the highest levels, the Central Military Commission Joint Staff Department has called for the PLA to progress towards intelligentized command and decision-making in its construction of a joint operations command system.
The next time you shop on fashion website Myntra, you might end up choosing a t-shirt designed completely by software: the pattern, colour and texture, without any intervention from a human designer. And you would not realise it. The first set of these t-shirts went on sale four days ago. This counts as a significant leap for Artificial Intelligence in ecommerce.
For customers, buying online might seem simple: click, pay and collect. But it's a different ballgame for e-tailers. Behind the scenes, from the warehouses to the websites, artificial intelligence plays a huge role in automating processes. Online retailers are employing AI to solve complex problems and make online shopping a smoother experience. This could involve getting software to understand and process voice queries, recommend products based on a person's buying history, or forecast demand.
SO WHAT ARE THE BIG NAMES DOING?
“In terms of industry trends, people are going towards fast fashion. (Moda) Rapido does fast fashion in an intelligent way,” said Ambarish Kenghe, chief product officer at Myntra, a Flipkart unit and India’s largest online fashion retailer.
The Moda Rapido clothing label began as a project in 2015, with Myntra using AI to process fashion data and predict trends. The company's human designers incorporated the inputs into their designs. The new AI-designed t-shirts are folded into this label unmarked, so Myntra can genuinely test how well they sell when pitted against shirts designed by humans.
“Till now, designers could look at statistics (for inputs). But you need to scale. We are limited by the bandwidth of designers. The next step is, how about the computer generating the design and us curating it,” Kenghe said. “It is a gold mine. Our machines will get better on designing and we will also get data.”
This is not a one-off experiment. Ecommerce, which has amassed a treasure trove of data over the last few years, is ripe for disruption from AI. Companies are betting big on AI and pouring in funds to push the boundaries of what can be done with data. “We are applying AI to a number of problems such as speech recognition, natural language understanding, question answering, dialogue systems, product recommendations, product search, forecasting future product demand, etc.,” said Rajeev Rastogi, director, machine learning, at Amazon.
An example of how AI is used in recommendations could be this: if you started your search on a retailer's website with, say, a white shirt with blue polka dots, and your next search is for a shirt with a similar collar and cuff style, the algorithm understands what is motivating you. “We start with personalization; it is key. If you have enough and more collection, clutter is an issue. How do you (a customer) get to the product that you want? We are trying to figure it out. We want to give you precisely what you are looking for,” said Ajit Narayanan, chief technology officer, Myntra.
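The attribute-matching idea described above can be sketched with a simple content-based recommender. This is an illustrative toy, not Myntra's actual system: the catalogue, item names and attribute lists are all hypothetical, and a production recommender would use learned embeddings rather than hand-written tags.

```python
from collections import Counter
from math import sqrt

def similarity(a, b):
    """Cosine similarity between two bags of product attributes."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[k] * cb[k] for k in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def recommend(query_attrs, catalogue, top_n=2):
    """Rank catalogue items by attribute similarity to the query item."""
    ranked = sorted(catalogue.items(),
                    key=lambda kv: similarity(query_attrs, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

# Hypothetical catalogue: a polka-dot query should surface shirts sharing
# its collar and cuff style before an unrelated tee.
catalogue = {
    "shirt_a": ["white", "polka-dot", "spread-collar", "french-cuff"],
    "shirt_b": ["blue", "striped", "spread-collar", "french-cuff"],
    "tee_c":   ["black", "plain", "crew-neck"],
}
query = ["white", "polka-dot", "spread-collar", "french-cuff"]
print(recommend(query, catalogue))  # → ['shirt_a', 'shirt_b']
```

The point of the sketch is that shared attributes (collar, cuff) pull items together even when colour and pattern differ, which is the behaviour the article attributes to the search algorithm.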
A related focus area for AI is recommending the right size, as sizing varies across brands. “We have pretty high return rates across many categories because people think that sizes are the same across brands and across geographies. So, trying to make recommendations with appropriate size is another problem that we are working on. Say, a size 6 in Reebok might be 7 in Nike, and so on,” Rastogi said in an earlier interview with ET.
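One simple way to frame the cross-brand size problem Rastogi describes is to map every (brand, size) pair to a canonical physical measurement and match on that. The table below is entirely hypothetical (invented measurements, for illustration only); a real system would learn such mappings from fit feedback and returns data.

```python
# Hypothetical size-conversion table: maps (brand, size) to a canonical
# foot length in centimetres. Values are invented for illustration.
SIZE_TO_CM = {
    ("reebok", 6): 24.5,
    ("reebok", 7): 25.0,
    ("nike", 6): 24.0,
    ("nike", 7): 24.5,
    ("nike", 8): 25.0,
}

def equivalent_size(brand_from, size, brand_to):
    """Return the size in brand_to whose canonical measurement is closest."""
    cm = SIZE_TO_CM[(brand_from, size)]
    candidates = {s: v for (b, s), v in SIZE_TO_CM.items() if b == brand_to}
    return min(candidates, key=lambda s: abs(candidates[s] - cm))

print(equivalent_size("reebok", 6, "nike"))  # → 7 (both map to ~24.5 cm)
```

Under these invented numbers, a Reebok 6 lands on a Nike 7, mirroring the example in the quote.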
Myntra also uses data intelligence to decide which payment gateway is the best for a transaction.
“Minute to minute there is a difference. If you are going from, say, an HDFC Bank card to a certain gateway at a certain time, the payment success rate may be different than for the same gateway and for the same card at a different time, based on the load. This is learning over a period of time,” said Kenghe. “Recently, during the Chennai cyclone, one of the gateways had an outage. The system realised this and auto-routed all transactions away from the gateway. Elsewhere, humans were trying to figure out what happened.”
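The routing behaviour Kenghe describes can be sketched as choosing the gateway with the best success rate over a recent window, so that an outage quickly drags a gateway's score down and traffic moves away from it. This is a minimal sketch under assumed gateway names ("gw_a", "gw_b"); a production router would also weigh fees, latency and card-type affinity.

```python
from collections import deque

class GatewayRouter:
    """Route transactions to the gateway with the best recent success rate."""

    def __init__(self, gateways, window=100):
        # A bounded history per gateway: old outcomes age out automatically.
        self.history = {g: deque(maxlen=window) for g in gateways}

    def record(self, gateway, success):
        self.history[gateway].append(1 if success else 0)

    def success_rate(self, gateway):
        h = self.history[gateway]
        return sum(h) / len(h) if h else 1.0  # optimistic prior when unseen

    def choose(self):
        return max(self.history, key=self.success_rate)

router = GatewayRouter(["gw_a", "gw_b"])
for _ in range(10):
    router.record("gw_a", True)
router.record("gw_b", True)
router.record("gw_b", False)  # gw_b starts failing, as in an outage
print(router.choose())  # → gw_a
```

Because the window is a rolling deque, a recovered gateway regains traffic as fresh successes displace the failures, matching the "learning over a period of time" Kenghe mentions.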
SUPPORT FROM AI SPECIALISTS
A number of independent AI-focused startups are also working on automating manually intensive tasks in ecommerce. Take cataloguing. If not done properly, searching for the right product becomes cumbersome and shoppers might log out.
“Catalogues are (usually) tagged manually. One person can tag 2,000 to 10,000 images. The problem is, it is inconsistent. This affects product discovery. We do automatic tagging (for ecommerce clients) and reduce 90% of human intervention,” said Ashwini Asokan, chief executive of Chennai-based AI startup Mad Street Den. “We can tag 30,000 images in, say, two hours.”
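The consistency problem Asokan raises is that two human taggers may label the same product "navy" and "dark blue". A machine tagger is consistent by construction, because the same input always maps to the same label. The sketch below stands in for the learned vision models a company like Mad Street Den would use: it tags a product's dominant colour by nearest named reference colour, with all colour values invented for illustration.

```python
# Hypothetical reference palette; a real tagger would learn this from data.
NAMED_COLOURS = {
    "red": (220, 20, 60),
    "navy": (0, 0, 128),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def colour_tag(rgb):
    """Tag a dominant colour with the nearest named colour (squared distance)."""
    return min(NAMED_COLOURS,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(rgb, NAMED_COLOURS[name])))

# Two slightly different photographs of the same navy shirt get the same tag,
# where two human taggers might disagree.
print(colour_tag((10, 12, 120)), colour_tag((0, 5, 135)))  # → navy navy
```

Real catalogue tagging covers far more than colour (neckline, sleeve, pattern, fabric), but the consistency argument is the same for every attribute.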
Mad Street Den also offers a host of other services such as sending personalised emails to their clients’ customers, automating warehouse operations and providing analysis and forecasting.
Gurugram-based Staqu works on generating digital tags that make searching for a product online easier. “We provide a software development kit that can be integrated into an affiliate partner’s website or app. Then the site or app will become empowered by image search. It will recognise the product and start making tags for that,” said Atul Rai, cofounder of Staqu, which counts Paytm and Yepme among clients. Staqu is a part of IBM’s Global Entrepreneurship Program.
The other big use of AI is to provide business intelligence. Bengaluru-based Stylumia informs its fashion-retailer clients about the latest design trends. “We deliver insights using computer vision, meaning visual intelligence,” said CEO Ganesh Subramanian. “Say, for example, (how do you exactly describe a) dark blue stripe shirt. Now, dark blue is subjective. You cannot translate dark blue, so we pull information from the Net and we show it visually.”
In product delivery, algorithms are being used to clean up and automate the process.
Bengaluru-based Locus uses AI to power logistics for companies. “We use machine learning to convert (vaguely described) addresses into valid (recognizable) addresses. There are pin code errors, spelling mistakes, missing localities. Machine learning is critical in logistics. We even do demand predictions and predict returns,” said Nishith Rastogi, chief executive of Locus, whose customers include Quikr, Delhivery, Lenskart and Urban Ladder.
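The address-cleaning task Rastogi describes — fixing spelling mistakes and extracting a valid pin code — can be sketched with fuzzy string matching against a gazetteer of known localities. The locality list and the sample address below are hypothetical, and Locus's actual models are far richer than this standard-library sketch.

```python
import difflib
import re

# Hypothetical gazetteer of known localities for one city.
KNOWN_LOCALITIES = ["Koramangala", "Indiranagar", "Whitefield", "HSR Layout"]
PINCODE_RE = re.compile(r"\b\d{6}\b")  # Indian pin codes are six digits

def clean_address(raw):
    """Fix a misspelled locality by fuzzy match and extract the pin code."""
    pin = PINCODE_RE.search(raw)
    best, score = None, 0.0
    for token in raw.replace(",", " ").split():
        for loc in KNOWN_LOCALITIES:
            s = difflib.SequenceMatcher(None, token.lower(), loc.lower()).ratio()
            if s > score:
                best, score = loc, s
    return {"locality": best if score > 0.7 else None,
            "pincode": pin.group() if pin else None}

# A typo ("Koramangla") still resolves to the canonical locality name.
print(clean_address("12 Main Rd, Koramangla, 560034"))
```

The 0.7 similarity threshold is an arbitrary illustration; a learned system would set such thresholds from labelled delivery outcomes.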
Myntra is trying to use AI to predict for customers the exact time of product delivery. “The exact time is very important to us. However, it is not straightforward. It depends on what time somebody placed an order, what was happening in the rest of the supply chain at that time, what was its capacity. It is a complicated thing to solve but we threw this (challenge) to the machine,” said Kenghe. “(The machine) learnt over a period of time. It learnt what happens on weekends, what happens on weekdays, and which warehouse to which pin code is (a product) going to, and what the product is and what size it is. It figured these out with some supervision and came up with (more accurate delivery) dates. I do not think we have perfected it, but it is a big deal for us.”
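Kenghe's description — the machine learning weekday-versus-weekend patterns and warehouse-to-pin-code routes — amounts to estimating delivery time per (warehouse, pin code, weekday) group, with a fallback when a combination has never been seen. The warehouse and pin code values below are made up; this is a minimal sketch, not Myntra's model.

```python
from collections import defaultdict
from statistics import mean

class DeliveryEstimator:
    """Estimate delivery days from historical (warehouse, pin code, weekday) data."""

    def __init__(self):
        self.history = defaultdict(list)

    def record(self, warehouse, pincode, weekday, days_taken):
        self.history[(warehouse, pincode, weekday)].append(days_taken)

    def estimate(self, warehouse, pincode, weekday):
        key = (warehouse, pincode, weekday)
        if self.history[key]:
            return mean(self.history[key])
        # Back off to the warehouse-wide average when the exact key is unseen.
        all_days = [d for k, v in self.history.items()
                    if k[0] == warehouse for d in v]
        return mean(all_days) if all_days else None

est = DeliveryEstimator()
est.record("blr_wh1", "560034", "sat", 3)
est.record("blr_wh1", "560034", "sat", 5)
est.record("blr_wh1", "560034", "mon", 2)
print(est.estimate("blr_wh1", "560034", "sat"))  # → 4
```

Group averages with back-off are the crudest possible learner here; the "some supervision" Kenghe mentions would in practice mean a trained regression model over these same features plus supply-chain load and product size.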
THE NEXT BIG CHALLENGE
One of Myntra’s AI projects is to come up with a fashion assistant that can talk in common language and recommend what to wear for various occasions. But “conversational flows are difficult to solve. This is very early. It will not see the light of day very soon. The assistant's first use would be for support, say (for a user to ask) where is my order, (or instruct) cancel order,” said Kenghe.
The world over, conversational bots are the next big thing. Technology giants like Google and Amazon are pushing forward research on artificial intelligence. “As we see (customer care) agents responding (to buyers), the machine can learn from it. The next stage is, a customer can say ‘I am going to Goa’ and the assistant will figure out that Goa means beach and give a list of things (to take along),” Kenghe said.
While speech is one crucial area in AI research, vision is another. Mad Street Den is trying to use AI in warehouses to monitor processes. “Using computer vision, there is no need for multiple photoshoots of products. This avoids duplication and you are saving money for the customer: almost 16-25% savings on the operational side. We can then start seeing who is walking into the warehouse, how many came in, efficiency, analytics, etc. We are opening up the scale of operations,” said Asokan.
Any opportunity to improve efficiency and cut cost is of supreme importance in ecommerce, said Partha Talukdar, assistant professor at Bengaluru’s Indian Institute of Science, where he heads the Machine and Language Learning Lab (MALL), whose mission is to give a “worldview” to machines.
“Companies like Amazon are doing automation wherever they can… right to the point of using robots for warehouse management and delivery through drones. AI and ML are extremely important because of the potential. There are a lot of diverse experiments going on (in ecommerce). We will certainly see a lot of innovative tech from this domain.”