Artificial Intelligence Still Needs a Human Touch – Wall Street Journal (subscription)

Artificial intelligence has been flexing its creative muscles recently, making images, music, logos and other designs. In most cases, though, humans are still very much a part of the design process. When left to its own devices, AI software can create

Here is the original post:

Artificial Intelligence Still Needs a Human Touch – Wall Street Journal (subscription)

Is artificial intelligence our doom? – GuelphToday

Artificial intelligence could enhance the decision-making capacities of human beings and make us much better than we are. Or, it could destroy the human race entirely. We could soon find out.

In an engrossing lecture Friday morning, political scientist and software developer Clifton van der Linden said the world may be on the brink of a super machine intelligence that has the full range of human intelligence, as well as autonomous decision-making. And that emerging reality has many of the great human minds worried about our future.

Van der Linden is the co-founder and CEO of Vox Pop Labs, a software company that developed Vote Compass, a civic engagement application that shows voters how their views align with those of candidates running for election. Over two million people have used it to gauge where they stand with candidates in recent federal and provincial election campaigns.

He was the keynote speaker at the inaugural University of Guelph Political Science and International Development Studies Departments’ Graduate Conference, which had as its theme Politics in the Age of Artificial Intelligence.

The conference was held all day Friday at The Arboretum Centre, and attracted political science graduate students from across the province.

Van der Linden has his finger on the pulse of current AI development. It is a rapid, frenetic pulse, changing so quickly that few are able to fathom its implications or consequences for political systems and society in general. But they could be disastrous.

Technology, and especially AI technology, is evolving at an unprecedented rate, he said. Last year, Google's Go-playing computer beat the world's most dominant Go master. It was believed to be an impossibility. There are currently self-driving cars in Pittsburgh, and weapons that can target and strike without human intervention.

AI is emerging in the medical and legal fields, and some believe it could one day replace judges in courtrooms, delivering better trial decisions than fallible human judges. Some even envision a time when sex workers will be replaced by robots.

AI is changing the landscape in extraordinary ways, he said. Many see it as our biggest existential threat.

One area where artificial intelligence is exploding is in the world of Big Data. And one highly influential branch of that is in the gathering of personal information based on Facebook, Twitter and Google activity.

Information is formulated by machine algorithms into profiles for the purpose of strategically targeting so-called programmatic advertising campaigns. Our profiles are then auctioned off in milliseconds to advertisers using AI bidding technology.
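
The auction mechanics described above can be sketched in a few lines. This is a hypothetical illustration of a second-price auction over a tracked user profile; the advertiser names, bidding models and profile fields are all invented for the example, not drawn from any real ad exchange.

```python
# Hypothetical sketch of a programmatic ad auction: advertisers bid on a
# user profile built from tracked activity, and the highest bidder wins
# at the second-highest price (a common real-time-bidding design).

def run_auction(profile, advertisers):
    """Collect bids from each advertiser's bidding model and settle a
    second-price auction. Returns (winner, price_paid)."""
    bids = [(adv["bid_model"](profile), adv["name"]) for adv in advertisers]
    bids.sort(reverse=True)
    (top_bid, winner), (second_bid, _) = bids[0], bids[1]
    return winner, second_bid  # winner pays the runner-up's bid

# Toy bidding models keyed on profile attributes (all invented).
advertisers = [
    {"name": "CampaignA", "bid_model": lambda p: 0.50 if p["swing_voter"] else 0.05},
    {"name": "CampaignB", "bid_model": lambda p: 0.20},
]
profile = {"swing_voter": True, "likes": ["politics", "news"]}
print(run_auction(profile, advertisers))  # ('CampaignA', 0.2)
```

The second-price rule means the winning campaign's own bid only determines whether it wins, not what it pays, which is why bidders can afford to bid their true valuation.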

We are all being tracked throughout the Internet, he said. Wherever we visit online, we leave evidence of our visit.

It is now believed that such technology was used during the recent American election that brought Donald Trump to power, whereby swing voters were specifically targeted for election advertisements based on their Facebook likes and other online activity, van der Linden said.

This type of microtargeted advertising could become a staple of future election campaigns, specifically aimed at swing voters who are likely to go out and vote.

On the bright side, while human beings are believed to be incapable of perfectly rational choices, that is what intelligent machines do best. AI has great potential as a supplement to our decision-making processes, enabling us to optimize our preferences and make more effective choices.

It is difficult to know where AI technology is leading us, but it is clear that it is now being used to amass power and influence among the elite of society, van der Linden concluded.

Government policy based on a strong understanding of the technology's implications is necessary. Critical inquiry and robust research are a must.

Van der Linden ended his presentation with a call to action to those present to take on the mantle of investigation into AI's repercussions for the electoral system and democracy.

The conference explored a broad range of subjects throughout the day, including international development, food security, and populist politics.

Continue reading here:

Is artificial intelligence our doom? – GuelphToday

Can Artificial Intelligence (AI) Improve the Customer Experience? – Customer Think

Artificial Intelligence (AI) is hot. One breathless press release predicted that by 2025, 95% of all customer interactions will be powered by AI.

AI is not new, and it's not just about bots for self-service, or self-driving cars. In general usage it refers to advanced analytics more than to rules-based process automation. It can include natural language processing (e.g. Alexa, Siri, Watson), decision making using complex algorithms, and machine learning, where the algorithms get better over time.

Here's one definition from AlanTuring.net:

Artificial Intelligence (AI) is usually defined as the science of making computers do things that require intelligence when done by humans. AI has had some success in limited, or simplified, domains. However, the five decades since the inception of AI have brought only very slow progress, and early optimism concerning the attainment of human-level intelligence has given way to an appreciation of the profound difficulty of the problem.

And another from Wikipedia:

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of intelligent agents: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term artificial intelligence is applied when a machine mimics cognitive functions that humans associate with other human minds, such as learning and problem solving (known as Machine Learning).
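
The Wikipedia definition above, a device that perceives its environment and takes actions that maximize its chance of success at a goal, can be made concrete with a toy sketch. The one-dimensional environment and the action set below are invented purely for illustration.

```python
# A minimal "intelligent agent" in the Wikipedia sense: it perceives the
# environment's state and picks the action that best advances its goal.
# The number-line world and its three actions are toy inventions.

def agent_policy(position, goal):
    """Perceive (position, goal) and choose the utility-maximizing action."""
    actions = {"left": position - 1, "right": position + 1, "stay": position}
    # Pick the action whose resulting state is closest to the goal.
    return min(actions, key=lambda a: abs(actions[a] - goal))

def run(position, goal, max_steps=20):
    """Let the agent act until it reaches the goal or runs out of steps."""
    for _ in range(max_steps):
        if position == goal:
            break
        move = agent_policy(position, goal)
        position += {"left": -1, "right": 1, "stay": 0}[move]
    return position

print(run(0, 5))  # 5
```

Everything "intelligent" here is the perceive-decide-act loop; swapping in a richer environment and utility function is what separates toy agents from the systems the article discusses.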

IBM has been pushing Watson (of Jeopardy fame), Salesforce.com launched Einstein last year, and my inbox is full of press releases and briefing requests this year from vendors big and small, all touting AI.

My question is: Can AI improve the Customer Experience? Please answer yes or no and explain in the comments below. Examples appreciated!

More here:

Can Artificial Intelligence (AI) Improve the Customer Experience? – Customer Think

Art By Artificial Intelligence: AI Expands Into Artistic Realm – Wall Street Journal (blog) (subscription)

Follow this link:

Art By Artificial Intelligence: AI Expands Into Artistic Realm – Wall Street Journal (blog) (subscription)

Can artificial intelligence save the NHS? – ITProPortal

According to the Office for Budget Responsibility, the NHS budget will need to increase by £88 billion over the next 50 years if it is to keep pace with the rising demand for healthcare in the UK. But with the 2017 Budget showcasing a massive leaning towards building up its Brexit reserves and allocating a mere £100 million for 100 onsite GP treatment centres in A&Es across England, the NHS is justifiably bracing itself for a painful future.

With £20 billion worth of cuts scheduled by 2020, combined with fierce warnings that the UK's health services are on the edge of an unprecedented crisis, the urgent call for solutions to be brought to the healthcare table has incontrovertibly intensified.

With deep cuts looming, it's time to properly consider how Artificial Intelligence can answer this call, and to shed light on how its technologies could provide the healthcare industry with some much-needed respite and real solutions to meet the ever-spiralling rise in demand for healthcare.

The issue of voluminous data that draws relentlessly on healthcare professionals' resources is something that could benefit significantly from the implementation of an AI-based system.

It has been estimated that it would take at least 160 hours of reading a week just to keep up with new medical knowledge as it’s published, let alone consider its relevance. It soon becomes apparent, then, that it would be physically impossible for a doctor to process all of the patient information as well as digest insight from new materials and medical journals, and still be able to treat patients.

Imagine a scenario wherein supercomputers could process the information, and far more efficiently too: making sense of the sheer quantity of data, flagging any information that might be pertinent to a patient's case for the doctors and nurses, and providing them with access to up-to-the-minute and highly applicable insight in the field.

Such an AI system would effectively unshackle medical professionals from these time-consuming processes, freeing them up to focus on work that requires human skills. Contrary to the popular belief that AI will result in mass job losses, the implementation of AI systems in this instance would actually augment the roles and skills of the human workers, performing the tasks they don't have the time or capacity to do. Moreover, this rapid analysis and provision of data would enhance the overall efficiency of human decision-making processes. And so, rather than replace jobs, the AI systems would empower human services.

This is exactly what IBM Watson has been working on in collaboration with Memorial Sloan-Kettering Cancer Center. World-renowned oncologists have been training Watson to compare a patient's medical information against a vast array of treatment guidelines and research to provide recommendations to physicians on a patient-by-patient basis.

Supporting evidence is provided for each recommendation in order to provide transparency and to aid the doctor's decision-making process, and Watson will update its suggestions as new data is added. Watson is being used to facilitate access to the best of oncology's collective knowledge, demonstrating how this approach can be applied across the entire medical profession.

Having recognised the potential that AI tech can bring to the wider industry, community healthcare service Fluid Motion has rolled out pilot trials in a bid to overcome the challenges they face in relation to cost, staffing, efficient decision-making processes and data crunching.

Born from the frustration of facing barriers presented by the current healthcare system, Fluid Motion's group aquatic therapy programme is a tailored rehabilitation concept that has been designed to be both fun and beneficial for people with a range of musculoskeletal conditions, with an overall aim to treat, manage and prevent such conditions.

With one in five GP appointments being related to musculoskeletal disorders, translating into a cost to the UK economy of £24.8 billion per year due to sick leave, the need for fast and effective healthcare solutions is clear. But the challenge, as indicated by Ben Wilkins of Fluid Motion, is that while these programmes are successful, there simply aren't enough professionals to sustain the growing levels of demand for the service. Additionally, the very nature of the programmes means that they depend heavily on vast amounts of data input and analysis to determine the right solution.

Fluid Motion recognised that, if it could generate these rehabilitation plans automatically, it could lower its staff costs and increase its reach. Fitness instructors could quickly generate a high-quality tailored plan based on a model of the physiotherapists' and osteopaths' expertise, built in the AI-powered cognitive reasoning platform Rainbird.

Rainbird modelled the knowledge of Fluid Motion's qualified physiotherapists and osteopaths, including the suitability of numerous exercises for individual patient symptoms, and added it to an interface that could be accessed by Fluid Motion's network of fitness instructors. The tool allowed them to create a tailored, illustrated rehabilitation plan for patients, based on the results of an initial interaction with a virtual physiotherapist or osteopath.
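
Rainbird's platform itself is proprietary, so the following is a rough illustration only of the general technique it embodies: expert rules that map patient symptoms to suitable exercises. The rule table, symptoms and exercises below are all invented for the sketch.

```python
# Illustrative only: a tiny rule base mapping symptoms to exercises, in the
# spirit of modelling a physiotherapist's expertise. Not Rainbird's actual
# representation; all rules here are invented.

RULES = [
    {"symptom": "knee pain",       "avoid": {"deep squat"},
     "suggest": {"leg raises", "water walking"}},
    {"symptom": "lower back pain", "avoid": {"twisting"},
     "suggest": {"gentle stretching", "water walking"}},
]

def build_plan(symptoms):
    """Combine suggestions from every matching rule, minus any exercise
    another matching rule says to avoid."""
    suggest, avoid = set(), set()
    for rule in RULES:
        if rule["symptom"] in symptoms:
            suggest |= rule["suggest"]
            avoid |= rule["avoid"]
    return sorted(suggest - avoid)

print(build_plan(["knee pain", "lower back pain"]))
# ['gentle stretching', 'leg raises', 'water walking']
```

A fitness instructor (or, eventually, a patient) would supply the symptom list from an initial questionnaire, and the rule base, not the instructor, carries the clinical expertise.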

The next step will be to provide access to patients directly so that they can create their own rehabilitation plans. Patients will have the facility to give feedback so that Rainbird can learn and, where necessary, adapt their plan or make alternative recommendations if specific exercises are uncomfortable.

Fluid Motion has since been able to track and reflect on participants progress in real-time, meaning the data can be utilised to improve clinical decision-making in rehabilitative healthcare. The application of AI helps patients get better sooner, and prevents pain and disability for longer.

The time and cost savings resulting from the implementation of such a programme are indubitable. According to Wilkins, the cumulative cost for a healthcare professional per session is £75 (£50 for hiring an osteopath or physiotherapist for the whole session and £25 to pay them to review feedback data and make recommendations). Fluid Motion sessions now cost the company only £35 (for a Fluid Motion fitness instructor) plus £25 (for pool hire), a saving of £15 per session. With this model, Fluid Motion can charge participants less than the average price of a swim to attend sessions.

Up to this point, Fluid Motion had been subsidising cost with grant payments, but now the company breaks even each session. Moreover, this is a model which is scalable. As a result of this initiative, Fluid Motion is now working to become an organisation that provides support and treatment for musculoskeletal health conditions alongside the NHS.

Indeed, the Fluid Motion case study clearly illustrates how challenges in healthcare can be overcome through the implementation of AI systems, and also highlights the potential time and cost saving benefits that the NHS could reap, if such an approach were adopted.

By mapping knowledge of some of the medical roles that are in high demand, there are many ways that the technology can help to streamline some of the more rudimentary elements of those roles. This would free up time to devote to face-to-face consultancy that would have the most impact for patients, reduce waiting times and even enable medical professionals to engage in a more personalised service.

This application of AI has the potential to address the rise in demand for NHS services, whilst ensuring that doctors and nurses spend more time doing the work that they are trained to do: treating patients to the best of their ability. Indeed, with the assistance of AI-powered technologies, the NHS may not only survive the crisis but, like the phoenix, rise from the ashes to achieve its original goal of bringing good healthcare to all.

Katie Gibbs, Head of Accelerated Consulting, Aigen

Image Credit: John Williams RUS / Shutterstock

More here:

Can artificial intelligence save the NHS? – ITProPortal

What’s AI, and what’s not – GCN.com

Artificial intelligence has become as meaningless a description of technology as "all natural" is when it refers to fresh eggs. At least, that's the conclusion reached by Devin Coldewey, a TechCrunch contributor.

AI is also often mentioned as a potential cybersecurity technology. At the recent RSA conference in San Francisco, RSA CTO Zulfikar Ramzan advised potential users to consider AI-based solutions carefully, in particular machine learning-based solutions, according to an article on CIO.

AI-based tools are not as new or productive as some vendors claim, he cautioned, explaining that machine learning-based cybersecurity has been available for over a decade via spam filters, antivirus software and online fraud detection systems. Plus, such tools suffer from marketing hype, he added.

Even so, AI tools can still benefit those with cybersecurity challenges, according to the article, which noted that IBM had announced its Watson supercomputer can now also help organizations enhance their cybersecurity defenses.

AI has become a popular buzzword, he said, precisely because it's so poorly defined. Marketers use it to create an impression of competence and to more easily promote intelligent capabilities as trends change.

The popularity of the AI buzzword, however, has to do at least partly with the conflation of neural networks with artificial intelligence, he said. Without getting too into the weeds, the two are not interchangeable — but marketers treat them as if they are.

AI vs. neural networks

By using the human brain and large digital databases as metaphors, developers have been able to show ways AI has at least mimicked, if not substituted for, human cognition.

The neural networks we hear so much about these days are a novel way of processing large sets of data by teasing out patterns in that data through repeated, structured mathematical analysis, Coldeway wrote.

The method is inspired by the way the brain processes data, so in a way the term artificial intelligence is apropos — but in another, more important way it's misleading, he added. While these pieces of software are interesting, versatile and use human thought processes as inspiration in their creation, they're not intelligent.
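
A minimal, self-contained example of the "repeated, structured mathematical analysis" Coldewey describes: a single artificial neuron trained by gradient descent to tease the pattern out of a tiny dataset (logical AND). This is a generic sketch, not code from any system mentioned in the article.

```python
# One artificial neuron learning the AND pattern: repeated passes over the
# data, with each pass nudging the weights by a structured gradient step.
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def train(data, epochs=5000, lr=0.5):
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):               # repeated...
        for (x1, x2), target in data:     # ...structured analysis of the data
            out = sigmoid(w1 * x1 + w2 * x2 + b)
            err = out - target
            grad = err * out * (1 - out)  # chain rule through the sigmoid
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
    return w1, w2, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train(data)
predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # [0, 0, 0, 1]
```

Nothing here resembles thought: the "learning" is thousands of small arithmetic corrections, which is exactly Coldewey's point about why "intelligent" is a misleading label.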

AI analyst Maureen Caudill, meanwhile, described artificial neural networks (ANNs) as algorithms or actual hardware loosely modeled after the structure of the mammalian cerebral cortex but on much smaller scales.

A large neural network might have hundreds or thousands of processor units, whereas a brain has billions of neurons.

Caudill, the author of Naturally Intelligent Systems, said that while researchers have generally not been concerned with whether their ANNs resemble actual neurological systems, they have built systems that have accurately simulated the function of the retina and modeled the eye rather well.

So what is AI?

There are about as many definitions of AI as there are researchers developing the technology.

The late MIT professor Marvin Minsky, often called the father of artificial intelligence, defined AI as the science of making machines do those things that would be considered intelligent if they were done by people.

Infosys CEO Vishal Sikka sums up AI as any activity that used to only be done via human intelligence that now can be executed by a computer, including speech recognition, machine learning and natural language processing.

When someone talks about AI, or machine learning, or deep convolutional networks, what they're really talking about is a lot of carefully manicured math, Coldewey recently wrote.

In fact, he said, the cost of a bit of fancy supercomputing is mainly what stands in the way of using AI in devices like phones or sensors that now boast comparatively little brain power.

If the cost could be cut by a couple orders of magnitude, he said, AI would be unfettered from its banks of parallel processors and free to inhabit practically any device.

The federal government sketched out its own definition of AI last October. In a paper on preparing for the future of AI, the National Science and Technology Council surveyed the current state of AI and its existing and potential applications.

The panel reported progress made on “narrow AI,” which addresses single-task applications, including playing strategic games, language translation, self-driving vehicles and image recognition.

Narrow AI now underpins many commercial services such as trip planning, shopper recommendation systems, and ad targeting, according to the paper.

The opposite end of the spectrum, sometimes called artificial general intelligence (AGI), refers to a future AI system that exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive tasks. NSTC said those capabilities will not be achieved for a decade or more.

In the meantime, the panel recommended the federal government explore ways for agencies to apply AI to their missions by creating organizations to support high-risk, high-reward AI research. Models for such an organization include the Defense Advanced Research Projects Agency and what the Department of Education has done with its proposal to create an ARPA-ED, which was designed to support research on whether AI could help significantly improve student learning.

Read more:

What’s AI, and what’s not – GCN.com

Artificial intelligence virtual consultant helps deliver better patient … – Science Daily

Interventional radiologists at the University of California at Los Angeles (UCLA) are using technology found in self-driving cars to power a machine learning application that helps guide patients’ interventional radiology care, according to research presented today at the Society of Interventional Radiology’s 2017 Annual Scientific Meeting.

The researchers used cutting-edge artificial intelligence to create a “chatbot” interventional radiologist that can automatically communicate with referring clinicians and quickly provide evidence-based answers to frequently asked questions. This allows the referring physician to provide real-time information to the patient about the next phase of treatment, or basic information about an interventional radiology treatment.

“We theorized that artificial intelligence could be used in a low-cost, automated way in interventional radiology as a way to improve patient care,” said Edward W. Lee, M.D., Ph.D., assistant professor of radiology at UCLA’s David Geffen School of Medicine and one of the authors of the study. “Because artificial intelligence has already begun transforming many industries, it has great potential to also transform health care.”

In this research, deep learning was used to understand a wide range of clinical questions and respond appropriately in a conversational manner similar to text messaging. Deep learning is a technology inspired by the workings of the human brain, where networks of artificial neurons analyze large datasets to automatically discover patterns and “learn” without human intervention. Deep learning networks can analyze complex datasets and provide rich insights in areas such as early detection, treatment planning, and disease monitoring.

“This research will benefit many groups within the hospital setting. Patient care team members get faster, more convenient access to evidence-based information; interventional radiologists spend less time on the phone and more time caring for their patients; and, most importantly, patients have better-informed providers able to deliver higher-quality care,” said co-author Kevin Seals, MD, resident physician in radiology at UCLA and the programmer of the application.

The UCLA team enabled the application, which resembles online customer service chats, to develop a foundation of knowledge by feeding it more than 2,000 example data points simulating common inquiries interventional radiologists receive during a consultation. Through this type of learning, the application can instantly provide the best answer to the referring clinician’s question. The responses can include information in various forms, including websites, infographics, and custom programs. If the tool determines that an answer requires a human response, the program provides the contact information for a human interventional radiologist. As clinicians use the application, it learns from each scenario and progressively becomes smarter and more powerful.
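
UCLA's actual implementation is built on IBM Watson, so the following is only a hedged sketch of the general technique described above: matching an incoming question against example inquiries and returning the associated answer, with a fallback to a human specialist when confidence is low. The example inquiries, answers and similarity threshold are all invented.

```python
# Hedged sketch (not UCLA's Watson system): route a clinician's question to
# the best-matching canned answer by word overlap, or escalate to a human.

TRAINING = [  # invented stand-ins for the ~2,000 example data points
    ("how should the patient prepare for a biopsy",
     "Fast for 6 hours; see prep sheet."),
    ("what are the risks of angioplasty",
     "Bleeding and bruising are most common."),
]

def answer(question, threshold=0.3):
    words = set(question.lower().split())
    def overlap(example):
        ex_words = set(example.lower().split())
        return len(words & ex_words) / len(words | ex_words)  # Jaccard similarity
    score, best = max((overlap(q), a) for q, a in TRAINING)
    if score < threshold:  # low confidence: hand off to a human
        return "Please contact the on-call interventional radiologist."
    return best

print(answer("how do I prepare a patient for biopsy"))
```

A production system would use learned language models rather than raw word overlap, but the routing logic, answer when confident and escalate to a human otherwise, mirrors the behavior the researchers describe.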

The researchers used a technology called Natural Language Processing, implemented using IBM’s Watson artificial intelligence computer, which can answer questions posed in natural language and perform other machine learning functions. This prototype is currently being tested by a small team of hospitalists, radiation oncologists and interventional radiologists at UCLA.

“I believe this application will have phenomenal potential to change how physicians interact with each other to provide more efficient care,” said John Hegde, MD, resident physician in radiation oncology at UCLA. “A key point for me is that I think it will eventually be the most seamless way to share medical information. Although it feels as easy as chatting with a friend via text message, it is a really powerful tool for quickly obtaining the data you need to make better-informed decisions.”

As the application continues to improve, researchers aim to expand the work to assist general physicians in interfacing with other specialists, such as cardiologists and neurosurgeons. Implementing this tool across the health care spectrum, said Lee, has great potential in the quest to deliver the highest-quality patient care.

Abstract 354: “Utilization of Deep Learning Techniques to Assist Clinicians in Diagnostic and Interventional Radiology: Development of a Virtual Radiology Assistant.” K. Seals; D. Dubin; L. Leonards; E. Lee; J. McWilliams; S. Kee; R. Suh; David Geffen School of Medicine at UCLA, Los Angeles, CA. SIR Annual Scientific Meeting, March 4-9, 2017. This abstract can be found at sirmeeting.org.

Story Source:

Materials provided by Society of Interventional Radiology. Note: Content may be edited for style and length.

See the original post here:

Artificial intelligence virtual consultant helps deliver better patient … – Science Daily

A Jetsons world: how artificial intelligence will revolutionize work and play – SiliconANGLE (blog)

As artificial intelligence tools become smarter and easier to use, the threat that they may take human jobs is real. They might also just make people much better at what they do, revolutionizing the workday for many.

What a bulldozer was to physical labor, AI is to data and to thought labor, said Naveen Rao (pictured), Ph.D., vice president and general manager of artificial intelligence solutions at Intel.

Rao told John Furrier (@furrier), host of theCUBE, SiliconANGLE Media's mobile live streaming studio, during South by Southwest in Austin, Texas, that there are many examples of how AI can help streamline processes; one would be an insurance firm needing to read millions of pages of text to assess risk.

I can't do that very easily, right? I have to have a team of analysts run through, write summaries. These are the kinds of problems we can start to attack, he said. AI can turn a computer into a data inference machine, not just a way to automate compute tasks, he added.

Improved user interfaces are driving the democratization of AI for people doing regular jobs, Rao pointed out. A major example of how AI can bring a technology to the masses is the iPod, which in turn informed the smartphone.

Storing music in a digital form in a small device was around before the iPod, but when they made it easy to use, that sort of gave rise to the smartphone, Rao said.

Rao sees fascinating advances in AI robot development, driven in part by 3D printing and the maker revolution lowering mechanical costs.

That, combined with these techniques becoming mature, is going to come up with some really cool stuff. We're going to start seeing The Jetsons kind of thing, he said.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE's and theCUBE's coverage of South by Southwest (SXSW). (*Disclosure: Intel sponsors some SXSW segments on SiliconANGLE Media's theCUBE. Neither Intel nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Read the original:

A Jetsons world: how artificial intelligence will revolutionize work and play – SiliconANGLE (blog)

Poll: Where readers stand on artificial intelligence, cloud computing and population health – Healthcare IT News

When IBM CEO Ginni Rometty delivered the opening keynote at HIMSS17, she effectively set the stage for artificial intelligence, cognitive computing and machine learning to be prevalent themes throughout the rest of the conference.

Other top trends buzzed about in Orlando: cloud computing and population health.

Healthcare IT News asked our readers where they stand in terms of these initiatives. And we threw in a bonus question to figure out what their favorite part of HIMSS17 was.

Some 70 percent of respondents are either actively planning or researching artificial intelligence, cognitive computing and machine learning technologies while 7 percent are rolling them out and 1 percent have already completed an implementation.

A Sunday afternoon session featuring AI startups demonstrated the big promise of such tools, as well as the persistent questions, skepticism and even fear when it comes to these emerging technologies.

Whereas AI was considerably more prominent in the HIMSS17 discourse than in years past, population health management has been among the top trends for the last couple conferences.

It's not entirely surprising that more respondents, 30 percent, are either rolling out or have completed a rollout of population health technologies, while 50 percent are either researching or actively planning to do so.

One striking similarity between AI and population health is the 20 percent of participants responding that they have no interest in either. For cloud computing, meanwhile, only 7 percent indicated they are not interested.

Though cloud computing is not a new concept, it is widely seen as such in the HIPAA-sensitive world of personally identifiable and protected health information. The overarching themes at the pre-conference HIMSS and Healthcare IT News Cloud Computing Forum on Sunday were that security is not a core competency of hospitals and health systems, so many cloud providers can better protect health data, and that the ability to spin up server, storage and compute resources on Amazon, Google or Microsoft is enabling a whole new era of innovation that simply is not possible when hospitals have to invest in their own infrastructure to run proofs-of-concept and pilot programs. The Centers for Medicare and Medicaid Services, for instance, cut $5 million from its annual infrastructure budget by opting for infrastructure-as-a-service.

Here comes the bonus question: What was your favorite part of HIMSS17?

The show floor won hands-down, followed by education sessions, then networking events, with keynotes and parties/nightlife in a neck-and-neck tie.

This article is part of our ongoing coverage of HIMSS17. Visit Destination HIMSS17 for previews, reporting live from the show floor and coverage after the conference.


See the rest here:

Poll: Where readers stand on artificial intelligence, cloud computing and population health – Healthcare IT News

How Artificial Intelligence Is Changing Financial Auditing – Daily Caller


As robots continue to play a growing role in our daily lives, white collar jobs in many sectors, including accounting and financial operations, are quickly becoming a thing of the past. Businesses are gravitating towards software to automate bookkeeping tasks, saving considerable amounts of both time and money. In fact, since 2004, the number of full-time finance employees at large companies has declined a staggering 40% to roughly 71 employees for every $1 billion of revenue, down from 119 employees, according to a report by top consulting firm The Hackett Group.

These numbers show that instead of resisting change, companies are embracing the efficiencies of this new technology and exploring how individual businesses can leverage automation and, more importantly, artificial intelligence (colloquially, "robots"). A quick aside on the idea of robots versus automation: as technology becomes more sophisticated, and particularly with the use of Artificial Intelligence (AI), we're able to automate multiple steps in a process. The concept of Robotic Process Automation (RPA), or "robots" for short, has emerged to capture this notion of more sophisticated automation of everyday tasks.

Today, there is more data available than ever, and computers are enhancing their capabilities to leverage these mountains of information. With that, many technology providers are focusing on making it as easy as possible for businesses to implement and utilize their solutions. Whether it's by easing the support and management burden via Software as a Service (SaaS) delivery or via more turn-key offerings that embed best practices in the solution, one can see a transformation from simply providing tools to providing a level of robotic automation that seems more like a service offering than a technology.

Of course, the name of the game for any business is speed, efficiency and cost reduction. It is essential to embrace technologies that increase efficiency and savings because, like it or not, your competitors will. Companies that stick with old-school approaches tend to end up serving small niches of customers and seeing less overall growth.

As long as a technology-based solution is less expensive and performs as well as, if not better than, the alternatives, market forces will drive companies to implement automated technologies. In particular, the impact of robotic artificial intelligence (AI) is here to stay. In the modern work environment, automation means much more than compiling numbers; it means making intelligent observations and judgments based on the data reviewed.

If companies want to ensure future success, it's imperative to accept and embrace the capabilities provided by robots. Artificial intelligence won't always be perfect, but it can dramatically improve your work output and add to your bottom line. It's important to emphasize that the goal is not to curtail employees but to find ways to leverage the robots to automate everyday tasks or detail-oriented processes, freeing employees to focus on higher-value activities.

Let's use an example: controlling spend in Travel & Expense (T&E) by auditing expense reports. When performing an audit, many companies randomly sample roughly 20% of expense reports to identify potential waste and fraud. If you process 500 expense reports in a month, then 100 of those reports would be audited. The problem is that less than 1% of these expense reports contain fraud or serious risks (cite SAR report), meaning the odds are that 99% of the reports reviewed were a waste of time and resources, and the primary abuser of company funds most likely went unnoticed.

By employing a robot to identify risky-looking expense reports and configuring the system to be hyper-vigilant, it has been shown that a sufficiently sophisticated AI system will flag 7% of expense reports for fraud, waste and misuse (7% is the average Oversight Systems has seen across 20 million expense reports). Looking back to our previous example, this means that out of 500 expense reports, employees would only have to review 35 instead of the 100 that would have been audited. Though these are likely not all fraudulent, they may provide other valuable information, such as noting when an employee needs to be reminded about company travel policy.
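The arithmetic above can be sketched in a few lines. This is a hypothetical illustration only: the 20% sample rate, 7% flag rate and 500-report volume are the figures quoted in the article, while the function names are our own.

```python
# Compare the human review workload under random sampling versus an AI
# screen that flags risky-looking reports. Rates come from the article;
# this is an illustration, not any vendor's actual system.

def random_sample_workload(total_reports, sample_rate=0.20):
    """Reports a human must review under random 20% sampling."""
    return int(total_reports * sample_rate)

def ai_flag_workload(total_reports, flag_rate=0.07):
    """Reports flagged for review by a risk-scoring system."""
    return int(total_reports * flag_rate)

reports = 500
print(random_sample_workload(reports))  # 100 reports audited at random
print(ai_flag_workload(reports))        # 35 flagged reports to review
```

The point of the comparison is that the flagged 35 are concentrated on risky-looking reports, whereas the random 100 are mostly clean.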

While it may sound like robots are eliminating human jobs, it's important to note that they can also be extremely valuable working collaboratively with employees. Although the example above focused on fraud, the same productivity leverage is available for errors, waste and misuse in financial processes. With the help of robots, we can spend less time hunting for issues and more time addressing them. By working together with technology, the employee has a higher chance of rooting out fraud and will have the bandwidth to work with company travelers to influence their future behavior.

It is clear that in order to ensure future profitability, it is crucial for businesses to understand and take advantage of the significant role that robots can play in dramatically enhancing financial operations.

Read this article:

How Artificial Intelligence Is Changing Financial Auditing – Daily Caller

The Next US-China Arms Race: Artificial Intelligence? – The National Interest Online

Although China could initially only observe the advent of the Information-Technology Revolution in Military Affairs, the People's Liberation Army might presently have a unique opportunity to take advantage of the military applications of artificial intelligence to transform warfare. When the United States first demonstrated its superiority in network-centric warfare during the first Gulf War, the PLA was forced to confront the full extent of its relative backwardness in information technology. Consequently, the PLA embarked upon an ambitious agenda of "informatization." To date, the PLA has advanced considerably in its capability to utilize information to enhance its combat capabilities, from long-range precision strike to operations in space and cyberspace. Currently, PLA thinkers anticipate the advent of an "intelligentization" Revolution in Military Affairs that will result in a transformation from informatized ways of warfare to future "intelligentized" warfare. For the PLA, this emerging trend heightens the imperative of keeping pace with the U.S. military's progress in artificial intelligence, after its failure to do so in information technology. Concurrently, the PLA seeks to capitalize upon the disruptive potential of artificial intelligence to leapfrog the United States through technological and conceptual innovation.

For the PLA, intelligentization is the culmination of decades of advances in informatization. Since the 1990s, the PLA has been transformed from a force that had not even completed the process of mechanization to a military power ever more confident in its capability to fight and win informatized wars. Despite continued challenges, the PLA appears to be on track to establish the "system of systems operations" capability integral to integrated joint operations. The recent restructuring of the PLA's Informatization Department further reflects the progression and evolution of its approach. These advances in informatization have established the foundation for the PLA's transition towards intelligentization. According to Maj. Gen. Wang Kebin, director of the former General Staff Department Informatization Department, China's information revolution has been progressing through three stages: first digitalization, then networkization and now intelligentization. The PLA has succeeded in the introduction of information technology into platforms and systems; progressed towards integration, especially of its C4ISR capabilities; and seeks to advance towards deeper fusion of systems and sensors across all services, theater commands and domains of warfare. This final stage could be enabled by advances in multiple emerging technologies, including big data, cloud computing, mobile networks, the Internet of Things and artificial intelligence. In particular, the complexity of warfare under conditions of intelligentization will necessitate a greater degree of reliance upon artificial intelligence. Looking forward, artificial intelligence is expected to replace information technology, which served as the initial foundation for its emergence, as the dominant technology for military development.

Although the PLA has traditionally sought to learn lessons from foreign conflicts, its current thinking on the implications of artificial intelligence has been informed not by a war but by a game. AlphaGo's defeat of Lee Sedol in the ancient Chinese game of Go has seemingly captured the PLA's imagination at the highest levels. From the perspective of influential PLA strategists, this "great war of man and machine" decisively demonstrated the immense potential of artificial intelligence to take on an integral role in command and control and also decisionmaking in future warfare. Indeed, the success of AlphaGo is considered a turning point that demonstrated the potential of artificial intelligence to engage in complex analyses and strategizing comparable to that required to wage war, not only equaling human cognitive capabilities but even contributing a distinctive advantage that may surpass the human mind. In fact, AlphaGo has even been able to invent its own novel techniques that human players of this ancient game had never devised. This capacity to formulate unique, even superior strategies implies that the application of artificial intelligence to military decisionmaking could also reveal unimaginable ways of waging war. At the highest levels, the Central Military Commission Joint Staff Department has called for the PLA to progress towards intelligentized command and decisionmaking in its construction of a joint operations command system.

View post:

The Next US-China Arms Race: Artificial Intelligence? – The National Interest Online

artificial intelligence: How online retailers are using artificial … – Economic Times

The next time you shop on fashion website Myntra, you might end up choosing a t-shirt designed completely by software: the pattern, colour and texture, without any intervention from a human designer. And you would not realise it. The first set of these t-shirts went on sale four days ago. This counts as a significant leap for Artificial Intelligence in ecommerce.

For customers, buying online might seem simple: click, pay and collect. But it's a different ballgame for e-tailers. Behind the scenes, from the warehouses to the websites, artificial intelligence plays a huge role in automating processes. Online retailers are employing AI to solve complex problems and make online shopping a smoother experience. This could involve getting software to understand and process voice queries, recommend products based on a person's buying history, or forecast demand.

SO WHAT ARE THE BIG NAMES DOING? “In terms of industry trends, people are going towards fast fashion. (Moda) Rapido does fast fashion in an intelligent way,” said Ambarish Kenghe, chief product officer at Myntra, a Flipkart unit and India’s largest online fashion retailer.

The Moda Rapido clothing label began as a project in 2015, with Myntra using AI to process fashion data and predict trends. The company's human designers incorporated the inputs into their designs. The new AI-designed t-shirts are folded into this label unmarked, so Myntra can genuinely test how well they sell when pitted against shirts designed by humans.


“Till now, designers could look at statistics (for inputs). But you need to scale. We are limited by the bandwidth of designers. The next step is, how about the computer generating the design and us curating it,” Kenghe said. “It is a gold mine. Our machines will get better on designing and we will also get data.”

This is not a one-off experiment. Ecommerce, which has amassed a treasure trove of data over the last few years, is ripe for disruption from AI. Companies are betting big on AI and pouring in funds to push the boundaries of what can be done with data. "We are applying AI to a number of problems such as speech recognition, natural language understanding, question answering, dialogue systems, product recommendations, product search, forecasting future product demand, etc.," said Rajeev Rastogi, director, machine learning, at Amazon.

An example of how AI is used in recommendations could be this: if you started your search on a retailer's website with, say, a white shirt with blue polka dots, and your next search is for a shirt with a similar collar and cuff style, the algorithm understands what is motivating you. "We start with personalization; it is key. If you have a large enough collection, clutter is an issue. How do you (a customer) get to the product that you want? We are trying to figure it out. We want to give you precisely what you are looking for," said Ajit Narayanan, chief technology officer, Myntra.

A related focus area for AI is recommending the right sizes as this can vary across brands. “We have pretty high return rates across many categories because people think that sizes are the same across brands and across geographies. So, trying to make recommendations with appropriate size is another problem that we are working on. Say, a size 6 in Reebok might be 7 in Nike, and so on,” Rastogi said in an earlier interview with ET.
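At its simplest, a cross-brand size recommendation is a lookup problem. The sketch below is purely illustrative: only the Reebok-6-to-Nike-7 pairing comes from the article, the second entry is hypothetical, and a production recommender would learn these equivalences from purchase and return data rather than hard-coding them.

```python
# Toy cross-brand size normalization via a lookup table.
# Mapping values (beyond the article's Reebok/Nike example) are invented.

SIZE_MAP = {
    ("reebok", 6): ("nike", 7),   # example cited by Rastogi
    ("reebok", 7): ("nike", 8),   # hypothetical extension
}

def equivalent_size(brand, size, target_brand):
    """Return the equivalent size in target_brand, or None if unknown."""
    match = SIZE_MAP.get((brand.lower(), size))
    if match and match[0] == target_brand.lower():
        return match[1]
    return None

print(equivalent_size("Reebok", 6, "Nike"))  # 7
```

In practice the table would be replaced by a model scoring (customer, brand, size) triples against historical returns, but the interface, size in, equivalent size out, stays the same.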

Myntra uses data intelligence to also decide which payment gateway is the best for a transaction.

"Minute to minute there is a difference. If you are going from, say, an HDFC Bank card to a certain gateway at a certain time, the payment success rate may be different than for the same gateway and the same card at a different time, based on the load. This is learning over a period of time," said Kenghe. "Recently, during the Chennai cyclone, one of the gateways had an outage. The system realised this and auto-routed all transactions away from the gateway. Elsewhere, humans were trying to figure out what happened."
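The routing behaviour Kenghe describes can be sketched as a sliding window of recent outcomes per gateway, with new transactions sent to the gateway showing the best recent success rate. This is a minimal sketch under assumed names and window size, not Myntra's actual system.

```python
# Success-rate-based payment gateway routing (illustrative sketch).
# Gateway names and the 100-transaction window are assumptions.

from collections import deque

class GatewayRouter:
    def __init__(self, gateways, window=100):
        # Sliding window of recent outcomes per gateway (1=success, 0=failure),
        # seeded with one success so every gateway starts eligible.
        self.history = {g: deque([1], maxlen=window) for g in gateways}

    def record(self, gateway, success):
        """Log the outcome of a completed transaction."""
        self.history[gateway].append(1 if success else 0)

    def best_gateway(self):
        """Route to the gateway with the highest recent success rate."""
        return max(self.history,
                   key=lambda g: sum(self.history[g]) / len(self.history[g]))

router = GatewayRouter(["gateway_a", "gateway_b"])
router.record("gateway_a", success=False)  # e.g. an outage begins
router.record("gateway_b", success=True)
print(router.best_gateway())  # gateway_b
```

The bounded deque means an outage drags a gateway's rate down within a window's worth of transactions, and lets it recover automatically once successes resume, which matches the auto-rerouting behaviour described above.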

SUPPORT FROM AI SPECIALISTS

A number of independent AI-focused startups are also working on automating manually intensive tasks in ecommerce. Take cataloging. If not done properly, searching for the right product becomes cumbersome and shoppers might log out.

“Catalogues are (usually) tagged manually. One person can tag 2,000 to 10,000 images. The problem is, it is inconsistent. This affects product discovery. We do automatic tagging (for ecommerce clients) and reduce 90% of human intervention,” said Ashwini Asokan, chief executive of Chennai-based AI startup Mad Street Den. “We can tag 30,000 images in, say, two hours.”

Mad Street Den also offers a host of other services such as sending personalised emails to their clients’ customers, automating warehouse operations and providing analysis and forecasting.

Gurugram-based Staqu works on generating digital tags that make searching for a product online easier. “We provide a software development kit that can be integrated into an affiliate partner’s website or app. Then the site or app will become empowered by image search. It will recognise the product and start making tags for that,” said Atul Rai, cofounder of Staqu, which counts Paytm and Yepme among clients. Staqu is a part of IBM’s Global Entrepreneurship Program.

The other big use of AI is to provide business intelligence. Bengaluru-based Stylumia informs their fashion retailer clients on the latest design trends. “We deliver insights using computer vision, meaning visual intelligence,” said CEO Ganesh Subramanian. “Say, for example, (how do you exactly describe a) dark blue stripe shirt. Now, dark blue is subjective. You cannot translate dark blue, so we pull information from the Net and we show it visually.”

In product delivery, algorithms are being used to clean up and automate the process.

Bengaluru-based Locus is enabling logistics for companies using AI. “We use machine learning to convert (vaguely described) addresses into valid (recognizable) addresses. There are pin code errors, spelling mistakes, missing localities. Machine learning is critical in logistics. We even do demand predictions and predict returns,” said Nishith Rastogi, chief executive of Locus, whose customers include Quikr, Delhivery, Lenskart and Urban Ladder.

Myntra is trying to use AI to predict for customers the exact time of product delivery. “The exact time is very important to us. However, it is not straightforward. It depends on what time somebody placed an order, what was happening in the rest of the supply chain at that time, what was its capacity. It is a complicated thing to solve but we threw this (challenge) to the machine,” said Kenghe. “(The machine) learnt over a period of time. It learnt what happens on weekends, what happens on weekdays, and which warehouse to which pin code is (a product) going to, and what the product is and what size it is. It figured these out with some supervision and came up with (more accurate delivery) dates. I do not think we have perfected it, but it is a big deal for us.”

THE NEXT BIG CHALLENGE

One of Myntra's AI projects is to come up with a fashion assistant that can talk in common language and recommend what to wear for various occasions. But "conversational flows are difficult to solve. This is very early. It will not see the light of day soon. The assistant's first use would be for support, say (for a user to ask) where is my order, (or instruct) cancel order," said Kenghe.

The world over, conversational bots are the next big thing. Technology giants like Google and Amazon are pushing forward research on artificial intelligence. “As we see (customer care) agents responding (to buyers), the machine can learn from it. The next stage is, a customer can say ‘I am going to Goa’ and the assistant will figure out that Goa means beach and give a list of things (to take along),” Kenghe said.

While speech is one crucial area in AI research, vision is another. Mad Street Den is trying to use AI in warehouses to monitor processes. "Using computer vision, there is no need for multiple photoshoots of products. This avoids duplication and you are saving money for the customer, almost 16-25% savings on the operational side. We can then start seeing who is walking into the warehouse, how many came in, efficiency, analytics, etc. We are opening up the scale of operations," said Asokan.

Any opportunity to improve efficiency and cut cost is of supreme importance in ecommerce, said Partha Talukdar, assistant professor at Bengaluru’s Indian Institute of Science, where he heads the Machine and Language Learning Lab (MALL), whose mission is to give a “worldview” to machines.

“Companies like Amazon are doing automation wherever they can… right to the point of using robots for warehouse management and delivery through drones. AI and ML are extremely important because of the potential. There are a lot of diverse experiments going on (in ecommerce). We will certainly see a lot of innovative tech from this domain.”

Read the original here:

artificial intelligence: How online retailers are using artificial … – Economic Times

Head of Uber’s Artificial Intelligence Labs Steps Down After Four Months – Fortune

Uber Technologies’ Gary Marcus said he is stepping down from his post as head of AI Labs, four months after the unit was created.

Gary Marcus, head of the recently launched AI Labs, said in a Facebook post on Wednesday that he is stepping down and will serve as a special advisor to AI Labs.

The ride-hailing app created AI Labs last year and also acquired Geometric Intelligence to form the initial AI Labs team.

Uber’s management practices have been called into question after a former employee had published a blog post last month describing a workplace where sexual harassment was common and remained unpunished, leading to an internal investigation.

Uber was not immediately available for comment.

Follow this link:

Head of Uber’s Artificial Intelligence Labs Steps Down After Four Months – Fortune

Indian Startups Bet on Artificial Intelligence in 2017: Report – News18


As data science gets set to drive the artificial intelligence (AI) market in 2017, a few Indian startups are initiating development of conversational bots, speech recognition tools, intelligent digital assistants and conversational services to be built over social media channels, a joint study by PwC-Assocham said.


Organisations are looking to leverage AI capabilities for predictive modelling.

“Online shopping portals have extensively been using predictive capabilities to gauge consumer interest in products by building a targeted understanding of preferences through collection of browsing and click-stream data, and effectively targeting and engaging customers using a multi-channel approach,” the report added.

To enable consumers to find better products at low prices, machine learning algorithms are being deployed for better matching of supply with consumer demand.

Some of the areas where AI can improve legal processes, said the findings, include improved discovery and analysis based on law case history and formulation of legal arguments based on identification of relevant evidence.

“Researchers and paralegals are increasingly being replaced by systems that can extract facts and conclusions from over a billion text documents a second. This has the potential to save lawyers around 30 per cent of their time,” the findings showed.

China is expected to have installed more industrial robots than any other country — 30 robots per 10,000 workers.

A few thousand workers have already been replaced by a robotic workforce in a single factory, the study added.

See the rest here:

Indian Startups Bet on Artificial Intelligence in 2017: Report – News18

Artificial intelligence? Only an idiot would think that – Irish Times

Prof Ian Bogost of the Georgia Institute of Technology: not every technological innovation merits being called AI

Not every technological innovation is artificial intelligence, and labelling it as such is making the term AI virtually meaningless, says Ian Bogost, a professor of interactive computing at the Georgia Institute of Technology in the US. Bogost gives the example of Google's latest algorithm, Perspective, which is designed to detect hate speech. While media coverage has been hailing it as an AI wonder, it turns out that simple typos can fool the system and allow abusive, harassing and toxic comments to slip through easily enough.

Researchers from the University of Washington, Seattle, put the algorithm through its paces by testing the phrase "Anyone who voted for Trump is a moron", which scored 79 per cent on the toxicity scale. Meanwhile, "Anyone who voted for Trump is a mo.ron" scored a tame 13 per cent. If you can game "artificial intelligence" that easily, was it really intelligent in the first place?
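A toy example shows why surface-level pattern matching is so easy to defeat with typos. This is emphatically not how Perspective works internally; the keyword list and scoring below are invented purely to illustrate the failure mode the researchers exploited.

```python
# Naive keyword-based "toxicity" scorer, fooled by an inserted period.
# The term list and scoring scheme are invented for illustration only.

TOXIC_TERMS = {"moron", "idiot"}

def toxicity(text):
    """Fraction of words that exactly match the toxic-term list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = sum(1 for w in words if w in TOXIC_TERMS)
    return hits / max(len(words), 1)

print(toxicity("Anyone who voted for Trump is a moron"))   # nonzero: flagged
print(toxicity("Anyone who voted for Trump is a mo.ron"))  # 0.0: slips through
```

Because `str.strip` only removes leading and trailing punctuation, the internal period in "mo.ron" leaves the token unmatched, exactly the kind of surface perturbation that dropped Perspective's score from 79 to 13 per cent.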


Read more:

Artificial intelligence? Only an idiot would think that – Irish Times

Why not all forms of artificial intelligence are equally scary – Vox

How worried should we be about artificial intelligence?

Recently, I asked a number of AI researchers this question. The responses I received vary considerably; it turns out there is not much agreement about the risks or implications.

Non-experts are even more confused about AI and its attendant challenges. Part of the problem is that artificial intelligence is an ambiguous term. By AI one can mean a Roomba vacuum cleaner, a self-driving truck, or one of those death-dealing Terminator robots.

There are, generally speaking, three forms of AI: weak AI, strong AI, and superintelligence. At present, only weak AI exists. Strong AI and superintelligence are theoretically possible, even probable, but we're not there yet.

Understanding the differences between these forms of AI is essential to analyzing the potential risks and benefits of this technology. There are a whole range of concerns that correspond to different kinds of AI, some more worrisome than others.

To help make sense of this, here are some key distinctions you need to know.

Artificial Narrow Intelligence (often called weak AI) is an algorithmic or specialized intelligence. This has existed for several years. Think of the Deep Blue machine that beat world champion Garry Kasparov in chess. Or Siri on your iPhone. Or even speech recognition and processing software. These are forms of nonsentient intelligence with a relatively narrow focus.

It might be too much to call weak AI a form of intelligence at all. Weak AI is smart and can outperform humans at a single task, but that's all it can do. It's not self-aware or goal-driven, and so it doesn't present any apocalyptic threats. But to the extent that weak AI controls vital software that keeps our civilization humming along, our dependence upon it does create some vulnerabilities. George Dvorsky, a Canadian bioethicist and futurist, explores some of these issues here.

Then there's Artificial General Intelligence, or strong AI; this refers to a general-purpose system, or what you might call a "thinking machine." Artificial General Intelligence, in theory, would be as smart or smarter than a human being at a wide range of tasks; it would be able to think, reason, and solve complex problems in myriad ways.

It's debatable whether strong AI could be called conscious; at the very least, it would demonstrate behaviors typically associated with consciousness: commonsense reasoning, natural language understanding, creativity, strategizing, and generally intelligent action.

Artificial General Intelligence does not yet exist. A common estimate is that we're perhaps 20 years away from this breakthrough. But nearly everyone concedes that it's coming. Organizations like the Allen Institute for Artificial Intelligence (founded by Microsoft co-founder Paul Allen) and Google's DeepMind project, along with many others across the world, are making incremental progress.

There are surely more complications involved with this form of AI, but it's not the stuff of dystopian science fiction. Strong AI would aim at general-purpose, human-level intelligence; unless it undergoes rapid recursive self-improvement, it's unlikely to pose a catastrophic threat to human life.

The major challenges with strong AI are economic and cultural: job loss due to automation, economic displacement, privacy and data management, software vulnerabilities, and militarization.

Finally, there's Artificial Superintelligence. Oxford philosopher Nick Bostrom defined this form of AI in a 2014 interview with Vox as "any intellect that radically outperforms the best human minds in every field, including scientific creativity, general wisdom and social skills." When people fret about the hazards of AI, this is what they're talking about.

A truly superintelligent machine would, in Bostrom's words, become "extremely powerful to the point of being able to shape the future according to its preferences." As yet, we're nowhere near a fully developed superintelligence. But the research is underway, and the incentives for advancement are too great to constrain.

Economically, the incentives are obvious: The first company to produce artificial superintelligence will profit enormously. Politically and militarily, the potential applications of such technology are vast. Nations, if they don't already see this as a winner-take-all scenario, are at the very least eager to be first. In other words, the technological arms race is afoot.

The question, then, is how far away from this technology are we, and what are the implications for human life?

For his book Superintelligence, Bostrom surveyed the top experts in the field. One of the questions he asked was, "By what year do you think there is a 50 percent probability that we will have human-level machine intelligence?" The median answer was somewhere between 2040 and 2050. That, of course, is just a prediction, but it's an indication of how close we might be.

It's hard to know when an artificial superintelligence will emerge, but we can say with relative confidence that it will at some point. If, in fact, intelligence is a matter of information processing, and if we assume that we will continue to build computational systems at greater and greater processing speeds, then it seems inevitable that we will create an artificial superintelligence. Whether we're 50 or 100 or 300 years away, we are likely to cross the threshold eventually.

When it does happen, our world will change in ways we can't possibly predict.

We cannot assume that a vastly superior intelligence is containable; it would likely work to improve itself, to enhance its capabilities. (This is what Bostrom calls the "control problem.") A hyper-intelligent machine might also achieve self-awareness, in which case it would begin to develop its own ends, its own ambitions. The hope that such machines will remain instruments of human production is just that: a hope.

If an artificial superintelligence does become goal-driven, it might develop goals incompatible with human well-being. Or, in the case of Artificial General Intelligence, it may pursue compatible goals via incompatible means. The canonical thought experiment here was developed by Bostrom. Let's call it the paperclip scenario.

Here's the short version: Humans create an AI designed to produce paperclips. It has one utility function: to maximize the number of paperclips in the universe. Now, if that machine were to undergo an intelligence explosion, it would likely work to optimize its single function, producing paperclips. Such a machine would continually innovate new ways to make more paperclips. Eventually, Bostrom says, that machine might decide that converting all of the matter it can, including people, into paperclips is the best way to achieve its singular goal.

Admittedly, this sounds a bit stupid. But it's not, and it only appears so when you think about it from the perspective of a moral agent. Human behavior is guided and constrained by values: self-interest, compassion, greed, love, fear, etc. An Artificial General Intelligence, presumably, would be driven only by its original goal, and that could lead to dangerous, and unanticipated, consequences.

Again, the paperclip scenario applies to strong AI, not superintelligence. The behavior of a superintelligent machine would be even less predictable. We have no idea what such a being would want, or why it would want it, or how it would pursue the things it wants. What we can be reasonably sure of is that it will find human needs less important than its own needs.

Perhaps it's better to say that it will be indifferent to human needs, just as human beings are indifferent to the needs of chimps or alligators. It's not that human beings are committed to destroying chimps and alligators; we just happen to do so when the pursuit of our goals conflicts with the wellbeing of less intelligent creatures.

And this is the real fear that people like Bostrom have of superintelligence. "We have to prepare for the inevitable," he told me recently, "and take seriously the possibility that things could go radically wrong."

See the original post here:

Why not all forms of artificial intelligence are equally scary – Vox

IBM Rated Buy On ‘Upside Potential,’ Artificial Intelligence Move – Investor’s Business Daily

IBM CEO Ginni Rometty told investors that her company is emerging as a leader in cognitive computing. (IBM)

IBM (IBM) is an attractive turnaround story with improved fundamental trends, says a Drexel Burnham analyst who reiterated a buy rating and raised his price target on the computer giant.

The buy rating by Drexel Burnham analyst Brian White follows a day of briefings that IBM presented to investors at its annual Investor Briefing conference that ended Tuesday.

“We believe IBM has further upside potential as the fruits of the company’s labor around its strategic imperatives are better appreciated and more investors warm up to the stock,” White wrote in a research note. Along with his buy rating, White raised his price target on IBM to 215, from 186.

IBM stock ended the regular trading session at 179.45, down fractionally on the stock market today. It's currently trading near a 29-month high.

The investor’s day events included a presentation by IBM Chief Executive Ginni Rometty, who said the company has reached an important moment with a solid foundation and is emerging as a leader in cognitive computing with its Watson computing platform and cloud services.

Announcements from the investor briefing included IBM and Salesforce.com (CRM) agreeing to a strategic partnership focused on artificial intelligence and supported by IBM’s Watson computer and the Einstein computing platform by Salesforce.com.

Salesforce and IBM will combine their two AI offerings but will also continue to sell the combined offering under two brands. Salesforce and IBM said they would “seamlessly connect” their AI offerings “to enable an entirely new level of intelligent customer engagement across sales, service, marketing, commerce and more.”

Salesforce stock finished at 83.48, up 0.6%.

Decades of research and billions of dollars have poured into developing artificial intelligence, which has crossed over from science fiction to game-show novelty to the cusp of widespread business applications. IBM has said Watson represents a new era of computing.

IBD'S TAKE: After six consecutive quarters of declining quarterly earnings at IBM, growth may be on the mend. IBM reported fourth-quarter earnings after the market close Jan. 19 that beat on the top and bottom lines for the fifth straight quarter.

“We believe IBM is furthest ahead in the cognitive computing movement and we believe the Salesforce partnership is only the beginning of more deals in the coming years,” White wrote.

Other companies investing heavily in AI include Google parent Alphabet (GOOGL) and graphics chip company Nvidia (NVDA).

Alphabet has used AI to enhance Google search abilities, improve voice recognition and to derive more data from images and video.

Nvidia has developed chip technology for AI platforms used in autonomous driving features, and to enhance how a driver and car communicate.

Not everyone is a bull on the IBM train. Credit Suisse analyst Kulbinder Garcha has an underperform rating on IBM and a price target of 110. Garcha, in a research note, said IBM remains in a multiyear turnaround.

“We believe it will take multiple years for faster growing segments such as the Cognitive Solutions segment and Cloud to offset the decline in the core business,” Garcha wrote.




Artificial Intelligence for Cars May Drive Future of Healthcare – Healthline

The same artificial intelligence that may soon drive your new car is being adapted to help drive interventional radiology care for patients.

Researchers at the University of California, Los Angeles (UCLA), have used advanced artificial intelligence, also called machine learning, to create a chatbot or Virtual Interventional Radiologist (VIR).

This device communicates automatically with a patient's physicians and can quickly offer evidence-based answers to frequently asked questions.

The scientists will present their research today at the Society of Interventional Radiology's 2017 annual scientific meeting in Washington, D.C.

This breakthrough will allow clinicians to give patients real-time information on interventional radiology procedures, as well as to plan the next step of their treatment.

Dr. Edward W. Lee, assistant professor of radiology at UCLA's David Geffen School of Medicine and one of the authors of the study, said he and his colleagues theorized they could use artificial intelligence in low-cost, automated ways to improve patient care.

"The fundamental technology that has made self-driving cars possible is deep learning, a type of artificial intelligence modeled after the connections in the human brain," Dr. Kevin Seals, resident physician in diagnostic radiology at UCLA Health and a study co-author, explained in a Healthline interview.

Seals, who programmed the VIR, said advanced computers and the human brain have a number of similarities.

Using deep learning, computers are now essentially as good as humans at identifying particular objects, making it possible for self-driving cars to see and appropriately navigate their environment, he said.

This same technology can allow computers to understand complex text inputs such as medical questions from healthcare professionals, he added. By implementing deep learning using the IBM Watson cognitive technology and Natural Language Processing, we are able to make our virtual interventional radiologist smart enough to understand questions from physicians and respond in a smart, useful way.


Think of it as an initial, superfast layer of information gathering that can be used prior to taking the time to contact an actual human diagnostic or interventional radiologist, Seals said.

The user simply texts a question to the virtual radiologist, which in many cases provides an excellent, evidence-based response more or less instantaneously, he said.

He noted that if the patient doesn't receive a helpful response, they are rapidly referred to a human radiologist.

Tools such as our chatbot are particularly important in the current clinical environment, which focuses on quality metrics and follows evidence-based clinical guidelines that are proven to help patients, he said.

Seals said a team of academic radiologists curated the information provided in the application from the radiology literature, and it is rigorously scientific and evidence-based.

We hope that using the application will encourage cutting-edge patient management that results in improved patient care and significantly benefits our patients, he added.

It can be thought of as texting with a virtual representation of a human radiologist that offers a significant chunk of the functionality of speaking with an actual human radiologist, Seals said.

When the non-radiologist clinician texts a question to the VIR, deep learning is used to understand that message and respond in an intelligent manner.

"We get a lot of questions that are fairly readily automated," Seals said, "such as: 'I am worried that my patient has a blood clot in their lungs. What is the best type of imaging to perform to make the diagnosis?' The chatbot can respond to questions like this in a supersmart, evidence-based way."

Sample responses, he said, can include instructive images (for example, a flowchart that shows a clinical algorithm), response text messages, and subprograms within the application, such as a calculator to determine a patient's Wells score, a metric doctors use to guide clinical management.
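For concreteness, here is what such a Wells score subprogram might look like. The point values are the published Wells criteria for pulmonary embolism; the function names, criterion keys, and three-tier cutoffs shown here are my own sketch for illustration, not the VIR's actual implementation.

```python
# Wells criteria for pulmonary embolism (published point values);
# the code structure itself is an illustrative sketch, not the VIR's code.
WELLS_PE_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,
    "malignancy": 1.0,
}

def wells_pe_score(findings):
    """Sum the points for each positive finding (a set of criterion names)."""
    return sum(points for name, points in WELLS_PE_CRITERIA.items()
               if name in findings)

def risk_category(score):
    """Traditional three-tier interpretation: >6 high, 2-6 moderate, <2 low."""
    if score > 6:
        return "high"
    return "moderate" if score >= 2 else "low"

score = wells_pe_score({"heart_rate_over_100", "previous_dvt_or_pe"})
print(score, risk_category(score))  # 3.0 moderate
```

A chatbot could compute this from a clinician's answers and fold the result into its evidence-based reply.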

The VIR application resembles an online customer service chat.

To create a crucial foundation of knowledge, the researchers fed the app more than 2,000 data points that simulated the common inquiries interventional radiologists receive when they meet with patients.


When a referring clinician asks a question, the extensive knowledge base of the app allows it to respond instantly with the best answer.

The various forms of responses can include websites, infographics, and custom programs.

If the VIR determines that an answer requires a human response, the program will provide contact information for a human interventional radiologist.
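The retrieval-with-fallback pattern described above can be sketched in a few lines. The real VIR uses IBM Watson's natural language processing; in this toy version (entirely my own, including the sample FAQ entries), a naive word-overlap score stands in for it, purely to illustrate the control flow of answering when confident and referring to a human otherwise.

```python
# Toy knowledge base: question patterns mapped to curated answers.
FAQ = {
    "best imaging for suspected pulmonary embolism":
        "CT pulmonary angiography is the first-line study.",
    "preparation for an IR biopsy":
        "Patients typically fast and pause anticoagulants per protocol.",
}

HUMAN_FALLBACK = "Please contact the on-call interventional radiologist."

def answer(question, threshold=2):
    """Return the best-matching FAQ answer, or refer to a human."""
    words = set(question.lower().split())
    best_entry, best_overlap = None, 0
    for key, response in FAQ.items():
        overlap = len(words & set(key.split()))
        if overlap > best_overlap:
            best_entry, best_overlap = response, overlap
    # Below the confidence threshold, route the clinician to a human.
    return best_entry if best_overlap >= threshold else HUMAN_FALLBACK
```

A production system would replace the overlap score with a trained language model, but the fallback logic, answer when confident, escalate when not, is the same.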

The app learns as clinicians use it, and each scenario teaches the VIR to become increasingly smarter and more powerful, Seals said.

The nature of chatbot communications should protect patient privacy.

Confidentiality is critically important in the world of modern technology and something we take very seriously, Seals said.

He added that the application was created and programmed by physicians with extensive HIPAA (Health Insurance Portability and Accountability Act of 1996) training.

We are able to avoid these issues because users ask questions in a general and anonymous manner, Seals said. Protected health information is never needed to use the application, nor is it relevant to its function.

All users – professional healthcare providers such as physicians and nurses – must agree to not include any specific protected patient information in their texts to the chatbot, he added.

None of the diverse functionality within the application requires specific patient information, Seals said.


This new technology represents the fastest and easiest way for clinicians to get the information they need in the hospital, starting with radiology and eventually expanding to other specialties such as neurosurgery and cardiology, Seals said.

Our technology can power any type of physician chatbot, he explained. Currently, there are information silos of sorts that exist between various specialists in the hospital, and there is no good tool for rapidly sharing information between these silos. It is often slow and difficult to get a busy radiologist on the phone, which inconveniences clinicians and delays patient care.

Other clinicians at the UCLA David Geffen School of Medicine are testing the chatbot, and Seals and Lee say their technology is fully functional now.

We are refining it and perfecting it so it can thrive in a wide release, Seals said.

Seals' engineering and software background allowed him to perform the necessary programming for the as-yet-unfunded research project. He said he and his colleagues will seek funding as they expand.

This breakthrough technology will debut soon.

The VIR will be made available in about one month to all clinicians at the UCLA Ronald Reagan Medical Center. Further use at UCLA will help the team to refine the chatbot for wider release.

The VIR could also become a free app.

We are exploring potential models for releasing the application, Seals said. It may very well be a free tool we release to assist our clinician colleagues, as we are academic radiologists focused on sharing knowledge and improving clinical medicine.

The researchers described the importance of the VIR in a summary of their findings: Improved artificial intelligence through deep learning has the potential to fundamentally transform our society, from automated image analysis to the creation of self-driving cars.


The Architecture of Artificial Intelligence – Archinect

Behnaz Farahi Breathing Wall II

"Let us consider an augmented architect at work. He sits at a working station that has a visual display screen some three feet on a side; this is his working surface, controlled by a computer with which he can communicate by means of small keyboards and various other devices." – Douglas Engelbart

This vision of the future architect was imagined by engineer and inventor Douglas Engelbart during his research into emerging computer systems at Stanford in 1962. At the dawn of personal computing, he imagined the creative mind overlapping symbiotically with the intelligent machine to co-create designs. This dual mode of production, he envisaged, would hold the potential to generate new realities which could not be realized by either entity operating alone. Today, self-learning systems, otherwise known as artificial intelligence or AI, are changing the way architecture is practiced, as they do our daily lives, whether or not we realize it. If you are reading this on a laptop or tablet, then you are directly engaging with a number of integrated AI systems, now so embedded in the way we use technology that they often go unnoticed.

As an industry, AI is growing at an exponential rate, now understood to be on track to be worth $70bn globally by 2020. This is in part due to constant innovation in the speed of microprocessors, which in turn increases the volume of data that can be gathered and stored. But don't panic – the artificial architect with enhanced Revit proficiency is not coming to steal your job. The human vs. robot debate, while compelling, is not so much the focus here; instead, the focus is how AI is augmenting design and how architects are responding to and working with these technological developments. What kind of innovation is artificial intelligence generating in the construction industry?

Assuming you read this as a non-expert, it is likely that much of the AI you have encountered to this point has been weak AI, otherwise known as ANI (Artificial Narrow Intelligence). ANI follows pre-programmed rules so that it appears intelligent but is in effect a simulation of a human-like thought process. With recent innovations such as that of Nvidia's microchip in April 2016, a shift is now being seen towards what we might understand as deep learning, where a system can, in effect, train and adapt itself. The interest for designers is that AI is therefore starting to apply itself to more creative tasks, such as writing books, making art, web design, or self-generating design solutions, due to its increased proficiency in recognizing speech and images. Significant 'AI winters', or periods where funding has been hard to source for the industry, have occurred over the last twenty years, but commentators such as philosopher Nick Bostrom now suggest we are on the cusp of an explosion in AI, and that this will not only shape but drive the design industry in the next century. AI therefore has the potential to influence the architectural design process at a series of different construction stages, from site research to the realization and operation of the building.

1. Site and social research

"By already knowing everything about us, our hobbies, likes, dislikes, activities, friends, our yearly income, etc., AI software can calculate population growth, prioritize projects, categorize streets according to usage and so on, and thus predict a virtual future and automatically draft urban plans that best represent and suit everyone." – Rron Beqiri on Future Architecture Platform

Gathering information about a project and its constraints is often the first stage of an architectural design process, traditionally involving traveling to a site, perhaps measuring, sketching and taking photographs. In the online and connected world, there is already a swarm-like abundance of data for the architect to tap into, already linked and referenced against other sources, allowing the designer to, in effect, simulate the surrounding site without ever having to engage with it physically. This information fabric has been referred to as the internet of things. BIM tools currently on the market already tap into these data constellations, allowing an architect to evaluate site conditions with minute precision. Software such as EcoDesigner Star or open-source plugins for Google SketchUp allows architects to immediately calculate necessary building and environmental analyses without ever having to leave their office. This phenomenon is already enabling many practices to take on large projects abroad that might have been logistically unachievable just a decade ago.

The information gathered by our devices and stored in the Cloud amounts to much more than the material conditions of the world around us. Globally, we are amassing ever-expanding records of human behavior and interactions in real-time. Personal, soft data might, in the most optimistic sense, work towards the socially focused design that has been widely publicized in recent years by its ability to integrate the needs of users. This approach, if only in the first stages of the design process, would impact the twentieth-century ideals of mass production and standardization in design. Could the internet of things create a socially adaptable and responsive architecture? One could speculate that, for example, when the population of children in a city crosses a maximum threshold in relation to the number of schools, a notification might be sent to the district council that it is time to commission a new school. AI could, therefore, in effect, write the brief for and commission architects by generating new projects where they are most needed.

Autodesk. Bicycle design generated by Dreamcatcher AI software.

2. Design decision-making

Now that we have located live-updating intelligence for our site, it is time to harness AI to develop a design proposal. Rather than a program, this technology is better understood as an interconnected, self-designing system that can upgrade itself. It is possible to harness a huge amount of computing power and experience by working with these tools, even as an individual; as Autodesk president Pete Baxter told the Guardian, "now a one-man designer, a graduate designer, can get access to the same amount of computing power as these big multinational companies." The architect must input project parameters, in effect an edited design brief, and the computer system will then suggest a range of solutions which fulfill these criteria. This innovation has the potential to revolutionize not only how architecture is imagined but how it is fundamentally expressed for designers who choose to adopt these new methods.
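In the abstract, the workflow just described – the architect supplies parameters, the system enumerates solutions that satisfy them – is a generate-and-filter loop. The sketch below is a deliberately naive illustration of that loop with made-up massing parameters; the actual generative algorithms behind tools like Dreamcatcher are far more sophisticated and are not detailed in this article.

```python
import itertools

def generate_candidates(widths, depths, heights):
    """Enumerate simple box-shaped massing options from parameter ranges."""
    for w, d, h in itertools.product(widths, depths, heights):
        yield {"w": w, "d": d, "h": h, "floor_area": w * d, "volume": w * d * h}

def meets_brief(c, min_area, max_volume):
    """Keep only candidates satisfying the architect's stated criteria."""
    return c["floor_area"] >= min_area and c["volume"] <= max_volume

# The "edited design brief": parameter ranges plus constraints.
candidates = generate_candidates([10, 20], [10, 20], [5, 10])
viable = [c for c in candidates if meets_brief(c, min_area=200, max_volume=3000)]
```

Real systems replace brute-force enumeration with optimization over learned parameters, but the division of labor is the same: the human specifies requirements, the machine proposes the range of solutions.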

I spoke with Michael Bergin, a researcher at Project Dreamcatcher at Autodesk's Research Lab, to get a better understanding of how AI systems are influencing the development of design software for architects. While their work was initially aimed at the automotive and industrial design industries, Dreamcatcher is now beginning to filter into architecture projects. It was used recently to develop The Living's generative design for Autodesk's new office in Toronto and MX3D's steel bridge in Amsterdam. The basic concept is that CAD models of the surrounding site and other data, such as client databases and environmental information, are fed into the processor. Moments later, the system outputs a series of optimized 3D design solutions ready to render. These processes effectively rely on cloud computing to create a multitude of options based on self-learning algorithmic parameters. Lattice-like and fluid forms are often the aesthetic result, perhaps unsurprisingly, as the software imitates structural rules found in nature.

The Dreamcatcher software has been designed to optimize parametric design and to link into and extend existing software designed by Autodesk, such as Revit and Dynamo. Interestingly, Dreamcatcher can make use of a wide and increasing spectrum of design input data, such as formulas, engineering requirements, CAD geometry, and sensor information, and the research team is now experimenting with Dreamcatcher's ability to recognize sketches and text as input data. Bergin imagines the future of design tools as systems that accept any type of input that a designer can produce "[to enable] a collaboration with the computer to iteratively target a high-performing design that meets all the varied needs of the design team." This would mean future architects would be less in the business of drawing and more into specifying requirements of the problem, making them more in sync with their machine counterparts in a project. Bergin suggests architects who adopt AI tools would have the ability to synthesize a broad set of high-level requirements from the design stakeholders, including clients and engineers, and produce design documentation as output, in line with Engelbart's vision of AI augmenting the skills of designers.

AI is also being used directly in software such as Space Syntax's depthmapX, designed at The Bartlett in London, to analyze the spatial network of a city with an aim to understand and utilize social interactions in the design process. Another tool, Unity 3D, is built from software developed for game engines to enable designers to analyze their plans, such as the shortest distances to fire exits. This information would then allow the architect to re-arrange or generate spaces in plan, or even to organize entire future buildings. Examples of architects who are adopting these methods include Zaha Hadid Architects with the Beijing tower project (designed before her death) and MAD Architects in China, among others.
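The fire-exit analysis mentioned above is, at its core, a shortest-path computation. Here is a minimal sketch (my own illustration, not code from Unity 3D or depthmapX) using breadth-first search over a floor-plan grid to find each cell's walking distance to the nearest exit:

```python
from collections import deque

def distances_to_exits(grid, exits):
    """grid: 2D list, 0 = walkable, 1 = wall. exits: list of (row, col).
    Returns a dict mapping each reachable cell to its step distance
    to the nearest exit (multi-source breadth-first search)."""
    dist = {e: 0 for e in exits}
    queue = deque(exits)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

plan = [
    [0, 0, 0],
    [1, 1, 0],  # a wall forces a detour around the corridor
    [0, 0, 0],
]
d = distances_to_exits(plan, exits=[(0, 0)])
```

An architect's tool would run this on a much finer grid and feed the distances back into plan generation, flagging rooms whose egress distance exceeds code limits.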

Computational Architecture Digital Grotesque Project

3. Client and user engagement

As so much of the technology built into AI has been developed from the gaming industry, its ability to produce forms of augmented reality has interesting potential to change the perception of and engagement with architectural designs, for both the architects and non-architects involved in a project. Through the use of additional hardware, augmented reality has the ability to capture and enhance real-world experience. It would enable people to engage with a design prior to construction, for example, to select the most appealing proposal from their experiences within its simulation. It is possible that many architecture projects will also remain in this unbuilt zone, in a parallel digital reality, which the majority of future world citizens will simultaneously inhabit.

Augmented reality would, therefore, allow a client to move through and sense different design proposals before they are built. Lights, sounds, even the smells of a building can be simulated, which could reorder the emphasis architects currently give to specific elements of their design. Such a change in representational method has the potential to shift what is possible within the field of architecture, as CAD drafting did at the beginning of this century. Additionally, the feedback generated by augmented reality can feed directly back into the design, allowing models to directly interact and adapt to future users. Smart design tools such as Materiable by Tangible Media are beginning to experiment with how AI can begin to engage with and learn from human behavior.

Computational Architecture Digital Grotesque Project

4. Realizing designs and rise of robot craftsmen

AI systems are already being integrated into the construction industry. Innovative practices such as Computational Architecture are working with robotic craftsmen to explore AI in construction technology and fabrication. Michael Hansmeyer and Benjamin Dillenburger, founders of Computational Architecture, are investigating the new aesthetic language these developments are starting to generate. "Architecture stands at an inflection point," they suggest on their website; "the confluence of advances in both computation and fabrication technologies lets us create an architecture of hitherto unimaginable forms, with an unseen level of detail, producing entirely new spatial sensations."

3D printing technology developed from AI software has the potential to offer twenty-first-century architects a significantly different aesthetic language, perhaps catalyzing a resurgence of detail and ornamentation, now rare due to the decline in traditional crafts. Hansmeyer and Dillenburger's Grotto Prototype for the Super Material exhibition in London was a complex architectural grotto 3D-printed from sandstone. The form of the sand grains was arranged by a series of algorithms custom designed by the practice. The technique allowed forms to be developed which were significantly different from those of traditional stonemasonry. The aim of the project was to show that it is now possible to print building-scale rooms from sandstone, and that 3D printing can also be used for heritage applications, such as repairs to statues.

Robotics are also becoming more common on construction job sites, mostly dealing with human resources and logistics. According to AEM, their applications will soon expand to bricklaying, concrete dispensing, welding, and demolition. Another example of their future use could include working with BIM to identify missing elements in the snagging process and update the AI in real time. Large-scale projects, for example government-led infrastructure initiatives, might be the first to apply this technology, followed by mid-scale projects in the private sector, such as cultural buildings. The challenges of the construction site will bring AI robotics out of the indoor, sanitized environment of the lab into a less scripted reality. Robert Saunders, a researcher into AI and fabrication at the University of Sydney, told New Atlas that "robots are great at repetitive tasks and working with materials that react reliably. What we're interested in doing is trying to develop robots that are capable of learning how to work with materials that work in non-linear ways, like working with hot wax or expanding foam or, more practically, with low-grade building materials like low-grade timber." Saunders foresees robot stonemasons and other craftsbots working in yet unforeseen ways, such as developing the architect's skeleton plans, in effect spontaneously generating a building on-site from a sketch.

Ori System by Ori

5. Integrating AI systems

This innovation involves either integrating developing artificial technologies with existing infrastructure or designing architecture around AI systems. There is a lot of excitement in this field, influenced in part by Mark Zuckerberg's personal project to develop networked AI systems within his home, which he announced in his New Year's Facebook post in 2016. His wish is to develop simple AI systems to run his home and help with his day-to-day work. This technology would have the ability to recognize the voices of members of the household and respond to their requests. Designers are taking on the challenge of designing home-integrated systems, such as the Ori System of responsive furniture, or gadgets such as Eliq for energy monitoring. Other innovations, such as driverless cars that run on an integrated system of self-learning AI, have the potential to shape how our cities are laid out and planned – in the most basic sense, limiting our need for more roads and parking areas.

Behnaz Farahi is a young architect who is employing her research into AI and adaptive surfaces to develop interactive designs, such as in her Aurora and Breathing Wall projects. She creates immersive and engaging indoor environments which adapt to and learn from their occupants. Her approach is one of many – different practices with different goals will adapt AI at different stages of their process, creating a multitude of architectural languages.

Researchers and designers working in the field of AI are attempting to understand the potential of computational intelligence to improve or even upgrade parts of the design process, with an aim to create a more functional and user-optimized built environment. It has always been the architect's task to make decisions based on complex, interwoven and sometimes contradictory sets of information. As AI gradually improves in making useful judgments in real-world situations, it is not hard to imagine these processes overlapping and engaging with each other. While these developments have the potential to raise questions in terms of ownership, agency and, of course, privacy in data gathering and use, the upsurge in self-learning technologies is already altering the power and scope of architects in design and construction. As architect and design theorist Christopher Alexander said back in 1964, "We must face the fact that we are on the brink of times when man may be able to magnify his intellectual and inventive capacity, just as in the nineteenth century he used machines to magnify his physical capacity."

In our interview, Bergin gave some insights into how he sees this technology impacting designers in the next twenty years. "The architectural language of projects in the future may be more expressive of the design team's intent," he stated. "Generative design tools will allow teams to evaluate every possible alternative strategy to preserve design intent, instead of compromising on a sub-optimal solution because of limitations in time and/or resources." Bergin believes AI and machine learning will be able to support a dynamic and expanding community of practice for design knowledge. He can also foresee implications of this in the democratization of design work, suggesting the expertise embodied by a professional of 30 years may be more readily utilized by a more junior architect. Overall, he believes architectural practice over the next 20 years will likely become far more inclusive with respect to client and occupant needs, and orders of magnitude more efficient when considering environmental impact, energy use, material selection and client satisfaction.

On the other hand, Pete Baxter suggests architects have little to fear from artificial intelligence: "Yes, you can automate. But what does a design look like that's fully automated and fully rationalized by a computer program? Probably not the most exciting piece of architecture you've ever seen." At the time of writing, many AI algorithms are still relatively uniform and relatively ignorant of context, and it is proving difficult to automate decision-making that would at first glance seem simple for a human. A number of research labs, such as the MIT Media Lab, are working to solve this. However, architectural language and diagramming have been part of programming complex systems and software from the start, and the two have had a significant influence on one another. To think architecturally is to imagine and construct new worlds, integrate systems and organize information, which lends itself to the front line of technical development. As far back as the 1960s, architects were experimenting with computer interfaces to aid their design work, and their thinking has inspired much of the technology we now engage with each day.

Behnaz Farahi Aurora


SXSW Interactive 2017: Artificial intelligence, smart cities will be major themes this year – Salon

When it was founded 31 years ago, South by Southwest was easier to define: It was an annual musical showcase linking up-and-coming recording artists with industry executives in Austin, Texas, a city known for its vibrant music scene, cultural eccentricity and barbecue.

But over the years, the South by Southwest Conference and Festivals has grown into a massive annual series of citywide events touching on music, film, media and technology. SXSW, as it's known, now includes a trade show, a job fair and an education-themed conference, and, throughout, innovators will have opportunities to pitch their ideas to potential financial backers.

The annual 10-day event, which begins Friday with a keynote address from Sen. Cory Booker, D-N.J., has ballooned into a gathering so large that in recent years city officials have curbed the number of special musical events. And some music journalists have criticized the annual event for becoming too big and commercialized to be a place for musical discovery.

Criticisms aside, city officials and local businesses love the annual revenue that SXSW generates (about $325 million last year, including year-round planning operations). But the music part of the gathering is slowly turning into more of a sideshow than the main act, and the main act is increasingly focused on media and technology (through SXSW Interactive).

Last year SXSW Music attracted about 30,300 people to 2,200 acts, about the same as the prior year, compared with the nearly 37,600 people who flocked to listen to about 3,100 speakers at SXSW Interactive. That represented a considerable spike from the roughly 34,000 who gathered for 2015's 2,700 speakers, according to figures provided by SXSW event planners. That level of traffic isn't bad, considering an all-access ticket to any one of the main attractions (SXSW Interactive, SXSW Music or SXSW Film) costs $1,325 apiece. (The truly ambitious can buy a single all-access ticket affording entry to all three for $1,650.)

As SXSW Interactive gradually becomes a bigger attraction, it can be a challenge to pick, from the dozens of daily sessions, which ones will truly address the next major leap in technology. Here are a few of the themes that have emerged from a review of the dozens of SXSW Interactive sessions taking place this year:

Improving artificial intelligence and human interaction

Many of last year's SXSW Interactive sessions focused on virtual and augmented reality technology, but several of this year's will touch on the rapidly evolving technology that underpins machine learning, deep analytics and the cognitive human-like interactions needed to make artificial intelligence more consumer-friendly.

Among 2017's presenters is Inmar Givoni, director of machine learning at Kindred, which develops algorithms to help robots better interact with humans. She will offer a primer on the technology that's increasingly entering our daily lives. In a separate session, digital anthropologist Pamela Pavliscak will discuss advances in AI that are enabling machines to accurately read emotions and respond accordingly. Other sessions will cover how artificial intelligence will be deployed in satellites and the way Disney is adopting AI to make storytelling more interactive at its theme parks.

Charting advances in autonomous driving

As autonomous driving continues to rapidly progress, more attention is being paid to transportation and smart-city technologies. Dieter Zetsche, the head of German automotive giant Daimler, which makes Mercedes-Benz luxury cars, will talk about how digital mapping is playing an increasingly important role in the accuracy of connected and autonomous vehicles. Another session will tackle ways to ensure that people don't rely too heavily on semiautonomous features and become lazy, inattentive drivers.

George Hotz, who developed a $1,000 self-driving car kit that could be installed in older cars, will discuss the real future of self-driving cars. Last year Hotz clashed with regulators when he tried to market his invention. U.S. Department of Transportation officials will attend SXSW Interactive to discuss the need for a national strategy for transportation data collection so as to make connected cars work seamlessly across state lines and in different cities.

Planning cities of the future

Several sessions during SXSW will explore how cities can adopt emerging technologies to grapple with current challenges, not just helping people move through crowded urban areas but also showing how connected technologies can radically change the management of many aspects of a city.

Sherri Greenberg, a professor at the University of Texas at Austin's Lyndon B. Johnson School of Public Affairs, will participate in a panel discussing how technology can address urban challenges such as economic segregation and the need for more affordable housing and healthy recreational activities. Atlanta Mayor Kasim Reed will headline another panel to outline the latest developments in smart-city technologies.

Bringing health care into the 21st century

Innovation in the medical industry is taking new turns with the advent of technology aimed at improving the access, collection and distribution of patients' health care data. Kate Black, privacy officer for the personal genomics company 23andMe, will address growing concerns about health care privacy in the digital age. Separately, Karen DeSalvo, acting assistant secretary for health in the U.S. Department of Health and Human Services, will participate in a discussion about the federal government's lagging system for sharing health data, which still largely relies on paper or on outdated, unconnected computers scattered among different agencies.

Other sessions will cover how data, engineering and policy can be deployed to give consumers the power to compare prices on health care services, and ways to offer access to new health-related technologies to low-income communities.

Diversity issues take the stage

Considerable attention has been paid to Silicon Valley's lack of gender and ethnic diversity, but that's not the only sphere of the tech world where diversity is lacking. Dozens of sessions at this year's SXSW Interactive will tackle these issues; topics will range from how digital storytelling can provide a voice to underrepresented groups to the need for recruiting mid-career people of color in the tech industry.

Denmark West, who serves as chief investment officer of the Connectivity Ventures Fund, which backs tech startups, will participate in a panel of African-American venture capitalists (there aren't many) discussing the need to support ventures backed by people of color.

View original post here:

SXSW Interactive 2017: Artificial intelligence, smart cities will be major themes this year – Salon