Artificial intelligence virtual consultant helps deliver better patient … – Science Daily

Interventional radiologists at the University of California at Los Angeles (UCLA) are using technology found in self-driving cars to power a machine learning application that helps guide patients' interventional radiology care, according to research presented today at the Society of Interventional Radiology's 2017 Annual Scientific Meeting.

The researchers used cutting-edge artificial intelligence to create a "chatbot" interventional radiologist that can automatically communicate with referring clinicians and quickly provide evidence-based answers to frequently asked questions. This allows the referring physician to provide real-time information to the patient about the next phase of treatment, or basic information about an interventional radiology treatment.

"We theorized that artificial intelligence could be used in a low-cost, automated way in interventional radiology as a way to improve patient care," said Edward W. Lee, M.D., Ph.D., assistant professor of radiology at UCLA's David Geffen School of Medicine and one of the authors of the study. "Because artificial intelligence has already begun transforming many industries, it has great potential to also transform health care."

In this research, deep learning was used to understand a wide range of clinical questions and respond appropriately in a conversational manner similar to text messaging. Deep learning is a technology inspired by the workings of the human brain, where networks of artificial neurons analyze large datasets to automatically discover patterns and "learn" without human intervention. Deep learning networks can analyze complex datasets and provide rich insights in areas such as early detection, treatment planning, and disease monitoring.

"This research will benefit many groups within the hospital setting. Patient care team members get faster, more convenient access to evidence-based information; interventional radiologists spend less time on the phone and more time caring for their patients; and, most importantly, patients have better-informed providers able to deliver higher-quality care," said co-author Kevin Seals, MD, resident physician in radiology at UCLA and the programmer of the application.

The UCLA team enabled the application, which resembles online customer service chats, to develop a foundation of knowledge by feeding it more than 2,000 example data points simulating common inquiries interventional radiologists receive during a consultation. Through this type of learning, the application can instantly provide the best answer to the referring clinician's question. The responses can include information in various forms, including websites, infographics, and custom programs. If the tool determines that an answer requires a human response, the program provides the contact information for a human interventional radiologist. As clinicians use the application, it learns from each scenario and progressively becomes smarter and more powerful.

The researchers used a technology called Natural Language Processing, implemented using IBM's Watson artificial intelligence computer, which can answer questions posed in natural language and perform other machine learning functions. This prototype is currently being tested by a small team of hospitalists, radiation oncologists and interventional radiologists at UCLA.
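Viewed at a very high level, the mechanism is intent matching with a confidence threshold and a human fallback. The sketch below is a minimal illustration of that idea only; the UCLA tool is built on IBM Watson's Natural Language Processing rather than the toy scoring shown here, and the intents, answers, and contact details are invented.

```python
# Minimal illustrative sketch (not the UCLA system): match a clinician's
# question against a small set of known intents and fall back to a human
# interventional radiologist when confidence is low. All intents, answers,
# thresholds, and contact details below are hypothetical.
import re

FAQ = {
    "suspected pulmonary embolism imaging": "CT pulmonary angiography is the usual first-line study.",
    "pre procedure fasting instructions": "See the attached infographic on pre-procedure fasting.",
}
HUMAN_FALLBACK = "Low confidence: contacting the on-call interventional radiologist (placeholder)."

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(question: str, threshold: float = 0.6) -> str:
    q = tokens(question)
    # Score each intent by the fraction of its keywords present in the question.
    scored = [(len(q & tokens(intent)) / len(tokens(intent)), reply) for intent, reply in FAQ.items()]
    best_score, best_reply = max(scored)
    # A weak match means the bot hands off to a human instead of guessing.
    return best_reply if best_score >= threshold else HUMAN_FALLBACK

print(answer("What imaging should I order for suspected pulmonary embolism?"))
print(answer("Can you review this complex case with me?"))
```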

"I believe this application will have phenomenal potential to change how physicians interact with each other to provide more efficient care," said John Hegde, MD, resident physician in radiation oncology at UCLA. "A key point for me is that I think it will eventually be the most seamless way to share medical information. Although it feels as easy as chatting with a friend via text message, it is a really powerful tool for quickly obtaining the data you need to make better-informed decisions."

As the application continues to improve, researchers aim to expand the work to assist general physicians in interfacing with other specialists, such as cardiologists and neurosurgeons. Implementing this tool across the health care spectrum, said Lee, has great potential in the quest to deliver the highest-quality patient care.

Abstract 354: "Utilization of Deep Learning Techniques to Assist Clinicians in Diagnostic and Interventional Radiology: Development of a Virtual Radiology Assistant." K. Seals; D. Dubin; L. Leonards; E. Lee; J. McWilliams; S. Kee; R. Suh; David Geffen School of Medicine at UCLA, Los Angeles, CA. SIR Annual Scientific Meeting, March 4-9, 2017. This abstract can be found at sirmeeting.org.

Story Source:

Materials provided by Society of Interventional Radiology. Note: Content may be edited for style and length.

See the original post here:

Artificial intelligence virtual consultant helps deliver better patient ... - Science Daily

Poll: Where readers stand on artificial intelligence, cloud computing and population health – Healthcare IT News

When IBM CEO Ginni Rometty delivered the opening keynote at HIMSS17 she effectively set the stage for artificial intelligence, cognitive computing and machine learning to be prevalent themes throughout the rest of the conference.

Other top trends buzzed about in Orlando: cloud computing and population health.

Healthcare IT News asked our readers where they stand in terms of these initiatives. And we threw in a bonus question to figure out what their favorite part of HIMSS17 was.

Some 70 percent of respondents are either actively planning or researching artificial intelligence, cognitive computing and machine learning technologies while 7 percent are rolling them out and 1 percent have already completed an implementation.

A Sunday afternoon session featuring AI startups demonstrated the big promise of such tools as well as the persistent questions, skepticism and even fear when it comes to these emerging technologies.

Whereas AI was considerably more prominent in the HIMSS17 discourse than in years past, population health management has been among the top trends for the last couple of conferences.

It's not entirely surprising that more respondents, 30 percent, are either rolling out or have completed a rollout of population health technologies, while 50 percent are either researching or actively planning to do so.

One striking similarity between AI and population health is the 20 percent of participants responding that they have no interest in either. For cloud computing, meanwhile, only 7 percent indicated they are not interested.

Though cloud computing is not a new concept, it is widely seen as one in the HIPAA-sensitive world of personally identifiable and protected health information. The overarching themes at the pre-conference HIMSS and Healthcare IT News Cloud Computing Forum on Sunday were, first, that security is not a core competency of hospitals and health systems, so many cloud providers can protect health data better than they can, and second, that the ability to spin up server, storage and compute resources on Amazon, Google or Microsoft is enabling a new era of innovation that simply is not possible when hospitals have to invest in their own infrastructure to run proofs-of-concept and pilot programs. The Centers for Medicare and Medicaid Services, for instance, cut $5 million from its annual infrastructure budget by opting for infrastructure-as-a-service.

Here comes the bonus question: What was your favorite part of HIMSS17?

The show floor won hands-down, followed by education sessions, then networking events and, in a neck-and-neck tie, keynotes and parties/nightlife.

This article is part of our ongoing coverage of HIMSS17. Visit Destination HIMSS17 for previews, reporting live from the show floor and after the conference.


See the rest here:

Poll: Where readers stand on artificial intelligence, cloud computing and population health - Healthcare IT News

A Jetsons world: how artificial intelligence will revolutionize work and play – SiliconANGLE (blog)

As artificial intelligence tools become smarter and easier to use, the threat that they may take human jobs is real. They might also just make people much better at what they do, revolutionizing the workday for many.

"What a bulldozer was to physical labor, AI is to data and to thought labor," said Naveen Rao (pictured), Ph.D., vice president and general manager of artificial intelligence solutions at Intel.

Rao told John Furrier (@furrier), host of theCUBE, SiliconANGLE Media's mobile livestreaming studio, during South by Southwest in Austin, TX, that there are many examples of how AI can help streamline processes; one would be an insurance firm needing to read millions of pages of text to assess risk.

"I can't do that very easily, right? I have to have a team of analysts run through, write summaries. These are the kinds of problems we can start to attack," he said. AI can turn a computer into a data inference machine, not just a way to automate compute tasks, he added.

Improved user interfaces are driving the democratization of AI for people doing regular jobs, Rao pointed out. A major example of how AI can bring a technology to the masses is the iPod, which in turn informed the smartphone.

"Storing music in a digital form in a small device was around before the iPod, but when they made it easy to use, that sort of gave rise to the smartphone," Rao said.

Rao sees fascinating advances in AI robot development, driven in part by 3D printing and the maker revolution lowering mechanical costs.

"That, combined with these techniques becoming mature, is going to come up with some really cool stuff. We're going to start seeing The Jetsons kind of thing," he said.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE's and theCUBE's coverage of South by Southwest (SXSW). (*Disclosure: Intel sponsors some SXSW segments on SiliconANGLE Media's theCUBE. Neither Intel nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Read the original:

A Jetsons world: how artificial intelligence will revolutionize work and play - SiliconANGLE (blog)

How Artificial Intelligence Is Changing Financial Auditing – Daily Caller


As robots continue to play a growing role in our daily lives, white collar jobs in many sectors, including accounting and financial operations, are quickly becoming a thing of the past. Businesses are gravitating toward software to automate bookkeeping tasks, saving considerable amounts of both time and money. In fact, since 2004, the number of full-time finance employees at large companies has declined a staggering 40%, to roughly 71 employees for every $1 billion of revenue, down from 119 employees, according to a report by top consulting firm The Hackett Group.

These numbers show that instead of resisting change, companies are embracing the efficiencies of this new technology and exploring how individual businesses can leverage automation and, more importantly, artificial intelligence, aka robots. A quick aside on the idea of robots versus automation: as technology becomes more sophisticated, and particularly with the use of artificial intelligence (AI), we're able to automate multiple steps in a process. The concept of Robotic Process Automation (RPA), or robots for short, has emerged to capture the notion of more sophisticated automation of everyday tasks.

Today, there is more data available than ever, and computers are enhancing their capabilities to leverage these mountains of information. With that, many technology providers are focusing on making it as easy as possible for businesses to implement and utilize their solutions. Whether it's by easing the support and management burden via Software as a Service (SaaS) delivery or by offering more turn-key solutions that embed best practices, one can see a transformation from simply providing tools to providing a level of robotic automation that seems more like a service offering than a technology.

Of course, the name of the game for any business is speed, efficiency, and cost reduction. It is essential to embrace technologies that increase efficiency and savings because, like it or not, your competitors will. Companies that stick with old-school approaches end up serving small niches of customers and seeing less overall growth.

As long as a technology-based solution is less expensive and performs as well as, if not better than, the alternatives, market forces will drive companies to implement automated technologies. In particular, the impact of robotic artificial intelligence (AI) is here to stay. In the modern work environment, automation means much more than just compiling numbers; it means making intelligent observations and judgments based on the data that is reviewed.

If companies and businesses want to ensure future success, it's imperative to accept and embrace the capabilities provided by robots. Artificial intelligence won't always be perfect, but it can dramatically improve your work output and add to your bottom line. It's important to emphasize that the goal is not to curtail employees but to find ways to leverage the robots to automate everyday tasks or detail-oriented processes and focus the employees on higher-value activities.

Let's use an example: controlling spend in Travel & Expense (T&E) by auditing expense reports. When performing an audit, many companies randomly sample roughly 20% of expense reports to identify potential waste and fraud. If you process 500 expense reports in a month, then 100 of those reports would be audited. The problem is that less than 1% of these expense reports contain fraud or serious risks (cite SAR report), meaning the odds are that 99% of the reports reviewed were a waste of time and resources, and the primary abuser of company funds most likely went unnoticed.

By employing a robot to identify risky-looking expense reports and configuring the system to be hyper-vigilant, a sufficiently sophisticated AI system will flag about 7% of expense reports for fraud, waste, and misuse (7% is the average Oversight Systems has seen across 20 million expense reports). Looking back at our previous example, this means that out of 500 expense reports, employees would only have to review 35 instead of the 100 that would have been audited under random sampling. Though these are likely not all fraudulent, they may provide other valuable information, such as noting when an employee needs to be reminded about company travel policy.
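The arithmetic above is easy to check. Below is a minimal sketch of the workload comparison between a 20% random sample and a hyper-vigilant AI flagging roughly 7% of reports; the two rates come from the article, while the helper function itself is purely illustrative.

```python
# Back-of-the-envelope comparison of the two audit strategies described above.
# The 20% random-sample rate and 7% AI flag rate come from the article; the
# helper itself is illustrative, not any vendor's actual product.

def reports_to_review(total_reports: int, review_rate: float) -> int:
    """Number of expense reports a human auditor must read under a given strategy."""
    return round(total_reports * review_rate)

total = 500
random_sample = reports_to_review(total, 0.20)   # traditional random audit -> 100 reports
ai_flagged = reports_to_review(total, 0.07)      # hyper-vigilant AI flagging -> 35 reports

print(f"Random 20% sample: {random_sample} reports reviewed")
print(f"AI-flagged 7%:     {ai_flagged} reports reviewed")
print(f"Reviewer workload cut by {100 * (1 - ai_flagged / random_sample):.0f}%")
```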

While it may sound like robots are eliminating human jobs, it's important to note that they can also be extremely valuable working collaboratively with employees. Although the example above focused on fraud, the same productivity leverage is available for errors, waste, and misuse across financial processes. With the help of robots, we can spend less time hunting for issues and more time addressing them. By working together with technology, the employee has a higher chance of rooting out fraud and will have the bandwidth to work with company travelers to influence their future behavior.

It is clear that in order to ensure future profitability, it is crucial for businesses to understand and take advantage of the significant role that robots can play in dramatically enhancing financial operations.

Read this article:

How Artificial Intelligence Is Changing Financial Auditing - Daily Caller

The Next US-China Arms Race: Artificial Intelligence? – The National Interest Online

Although China could initially only observe the advent of the Information-Technology Revolution in Military Affairs, the People's Liberation Army might presently have a unique opportunity to take advantage of the military applications of artificial intelligence to transform warfare. When the United States first demonstrated its superiority in network-centric warfare during the first Gulf War, the PLA was forced to confront the full extent of its relative backwardness in information technology. Consequently, the PLA embarked upon an ambitious agenda of informatization. To date, the PLA has advanced considerably in its capability to utilize information to enhance its combat capabilities, from long-range precision strike to operations in space and cyberspace. Currently, PLA thinkers anticipate the advent of an intelligentization Revolution in Military Affairs that will result in a transformation from informatized ways of warfare to future intelligentized warfare. For the PLA, this emerging trend heightens the imperative of keeping pace with the U.S. military's progress in artificial intelligence, after its failure to do so in information technology. Concurrently, the PLA seeks to capitalize upon the disruptive potential of artificial intelligence to leapfrog the United States through technological and conceptual innovation.

For the PLA, intelligentization is the culmination of decades of advances in informatization. Since the 1990s, the PLA has been transformed from a force that had not even completed the process of mechanization to a military power ever more confident in its capability to fight and win informatized wars. Despite continued challenges, the PLA appears to be on track to establish the system-of-systems operations capability integral to integrated joint operations. The recent restructuring of the PLA's Informatization Department further reflects the progression and evolution of its approach. These advances in informatization have established the foundation for the PLA's transition towards intelligentization. According to Maj. Gen. Wang Kebin, director of the former General Staff Department Informatization Department, China's information revolution has been progressing through three stages: first digitalization, then networkization and now intelligentization. The PLA has succeeded in the introduction of information technology into platforms and systems; progressed towards integration, especially of its C4ISR capabilities; and seeks to advance towards deeper fusion of systems and sensors across all services, theater commands and domains of warfare. This final stage could be enabled by advances in multiple emerging technologies, including big data, cloud computing, mobile networks, the Internet of Things and artificial intelligence. In particular, the complexity of warfare under conditions of intelligentization will necessitate a greater degree of reliance upon artificial intelligence. Looking forward, artificial intelligence is expected to replace information technology, which served as the initial foundation for its emergence, as the dominant technology for military development.

Although the PLA has traditionally sought to learn lessons from foreign conflicts, its current thinking on the implications of artificial intelligence has been informed not by a war but by a game. AlphaGo's defeat of Lee Sedol in the ancient Chinese game of Go has seemingly captured the PLA's imagination at the highest levels. From the perspective of influential PLA strategists, this "great war of man and machine" decisively demonstrated the immense potential of artificial intelligence to take on an integral role in command and control and also decisionmaking in future warfare. Indeed, the success of AlphaGo is considered a turning point that demonstrated the potential of artificial intelligence to engage in complex analyses and strategizing comparable to that required to wage war, not only equaling human cognitive capabilities but even contributing a distinctive advantage that may surpass the human mind. In fact, AlphaGo has even been able to invent its own, novel techniques that human players of this ancient game had never devised. This capacity to formulate unique, even superior strategies implies that the application of artificial intelligence to military decisionmaking could also reveal unimaginable ways of waging war. At the highest levels, the Central Military Commission Joint Staff Department has called for the PLA to progress towards intelligentized command and decisionmaking in its construction of a joint operations command system.

View post:

The Next US-China Arms Race: Artificial Intelligence? - The National Interest Online

artificial intelligence: How online retailers are using artificial … – Economic Times

The next time you shop on fashion website Myntra, you might end up choosing a t-shirt designed completely by software: the pattern, colour and texture, without any intervention from a human designer. And you would not realise it. The first set of these t-shirts went on sale four days ago. This counts as a significant leap for Artificial Intelligence in ecommerce.

For customers, buying online might seem simple: click, pay and collect. But it's a different ballgame for e-tailers. Behind the scenes, from the warehouses to the websites, artificial intelligence plays a huge role in automating processes. Online retailers are employing AI to solve complex problems and make online shopping a smoother experience. This could involve getting software to understand and process voice queries, recommend products based on a person's buying history, or forecast demand.

SO WHAT ARE THE BIG NAMES DOING?

"In terms of industry trends, people are going towards fast fashion. (Moda) Rapido does fast fashion in an intelligent way," said Ambarish Kenghe, chief product officer at Myntra, a Flipkart unit and India's largest online fashion retailer.

The Moda Rapido clothing label began as a project in 2015, with Myntra using AI to process fashion data and predict trends. The company's human designers incorporated the inputs into their designs. The new AI-designed t-shirts are folded into this label unmarked, so Myntra can genuinely test how well these sell when pitted against shirts designed by humans.


"Till now, designers could look at statistics (for inputs). But you need to scale. We are limited by the bandwidth of designers. The next step is, how about the computer generating the design and us curating it," Kenghe said. "It is a gold mine. Our machines will get better on designing and we will also get data."

This is not a one-off experiment. Ecommerce, which has amassed a treasure trove of data over the last few years, is ripe for disruption from AI. Companies are betting big on AI and pouring in funds to push the boundaries of what can be done with data. "We are applying AI to a number of problems such as speech recognition, natural language understanding, question answering, dialogue systems, product recommendations, product search, forecasting future product demand, etc.," said Rajeev Rastogi, director, machine learning, at Amazon.

An example of how AI is used in recommendations could be this: if you started your search on a retailer's website with, say, a white shirt with blue polka dots, and your next search is for a shirt with a similar collar and cuff style, the algorithm understands what is motivating you. "We start with personalization; it is key. If you have enough and more collection, clutter is an issue. How do you (a customer) get to the product that you want? We are trying to figure it out. We want to give you precisely what you are looking for," said Ajit Narayanan, chief technology officer, Myntra.
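A minimal sketch of that kind of attribute-based reasoning is below: rank catalogue items by how many attributes they share with a shopper's recent searches. The catalogue, attributes, and scoring are invented for illustration; production recommenders learn such weights from click-stream and purchase data rather than counting matches.

```python
# Illustrative sketch of the attribute-based reasoning described above: score
# catalogue items by how many attributes they share with the shopper's recent
# searches. Real systems learn these weights from click-stream data; the
# catalogue and attributes here are made up.

recent_searches = [
    {"colour": "white", "pattern": "polka dot", "collar": "spread", "cuff": "french"},
    {"collar": "spread", "cuff": "french"},
]

catalogue = [
    {"id": "shirt-001", "colour": "white", "pattern": "polka dot", "collar": "spread", "cuff": "french"},
    {"id": "shirt-002", "colour": "blue", "pattern": "plain", "collar": "spread", "cuff": "french"},
    {"id": "tee-003", "colour": "black", "pattern": "graphic", "collar": "crew", "cuff": "none"},
]

def affinity(item: dict, searches: list[dict]) -> int:
    # Count attribute values the item shares with any recent search.
    return sum(item.get(k) == v for s in searches for k, v in s.items())

ranked = sorted(catalogue, key=lambda item: affinity(item, recent_searches), reverse=True)
for item in ranked:
    print(item["id"], affinity(item, recent_searches))
```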

A related focus area for AI is recommending the right sizes as this can vary across brands. "We have pretty high return rates across many categories because people think that sizes are the same across brands and across geographies. So, trying to make recommendations with appropriate size is another problem that we are working on. Say, a size 6 in Reebok might be 7 in Nike, and so on," Rastogi said in an earlier interview with ET.

Myntra uses data intelligence to also decide which payment gateway is the best for a transaction.

"Minute to minute there is a difference. If you are going from, say, a HDFC Bank card to a certain gateway at a certain time, the payment success rate may be different than for the same gateway and for the same card at a different time, based on the load. This is learning over a period of time," said Kenghe. "Recently, during the Chennai cyclone, one of the gateways had an outage. The system realised this and auto-routed all transactions away from the gateway. Elsewhere, humans were trying to figure out what happened.

SUPPORT FROM AI SPECIALISTS

A number of independent AI-focused startups are also working on automating manually intensive tasks in ecommerce. Take cataloging. If not done properly, searching for the right product becomes cumbersome and shoppers might log out.

"Catalogues are (usually) tagged manually. One person can tag 2,000 to 10,000 images. The problem is, it is inconsistent. This affects product discovery. We do automatic tagging (for ecommerce clients) and reduce 90% of human intervention," said Ashwini Asokan, chief executive of Chennai-based AI startup Mad Street Den. "We can tag 30,000 images in, say, two hours."

Mad Street Den also offers a host of other services such as sending personalised emails to their clients' customers, automating warehouse operations and providing analysis and forecasting.

Gurugram-based Staqu works on generating digital tags that make searching for a product online easier. "We provide a software development kit that can be integrated into an affiliate partner's website or app. Then the site or app will become empowered by image search. It will recognise the product and start making tags for that," said Atul Rai, cofounder of Staqu, which counts Paytm and Yepme among clients. Staqu is a part of IBM's Global Entrepreneurship Program.

The other big use of AI is to provide business intelligence. Bengaluru-based Stylumia informs their fashion retailer clients on the latest design trends. "We deliver insights using computer vision, meaning visual intelligence," said CEO Ganesh Subramanian. "Say, for example, (how do you exactly describe a) dark blue stripe shirt. Now, dark blue is subjective. You cannot translate dark blue, so we pull information from the Net and we show it visually."

In product delivery, algorithms are being used to clean up and automate the process.

Bengaluru-based Locus is enabling logistics for companies using AI. "We use machine learning to convert (vaguely described) addresses into valid (recognizable) addresses. There are pin code errors, spelling mistakes, missing localities. Machine learning is critical in logistics. We even do demand predictions and predict returns," said Nishith Rastogi, chief executive of Locus, whose customers include Quikr, Delhivery, Lenskart and Urban Ladder.

Myntra is trying to use AI to predict for customers the exact time of product delivery. "The exact time is very important to us. However, it is not straightforward. It depends on what time somebody placed an order, what was happening in the rest of the supply chain at that time, what was its capacity. It is a complicated thing to solve but we threw this (challenge) to the machine," said Kenghe. "(The machine) learnt over a period of time. It learnt what happens on weekends, what happens on weekdays, and which warehouse to which pin code is (a product) going to, and what the product is and what size it is. It figured these out with some supervision and came up with (more accurate delivery) dates. I do not think we have perfected it, but it is a big deal for us."
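One simple way to picture what the machine is learning is a lookup of historical transit times keyed by the features Kenghe lists: warehouse, destination pin code, and weekday versus weekend, with a fallback when a combination has never been seen. The sketch below does exactly that with invented data; the real model is certainly far richer.

```python
# Toy version of the delivery-date problem described above: average past
# transit times per (warehouse, pincode, weekend) bucket, then predict for a
# new order. All data here is invented for illustration.
from collections import defaultdict
from statistics import mean

history = [
    # (warehouse, pincode, ordered_on_weekend, days_to_deliver)
    ("BLR-1", "560001", False, 2), ("BLR-1", "560001", True, 3),
    ("BLR-1", "110001", False, 4), ("DEL-2", "110001", False, 2),
]

buckets = defaultdict(list)
for warehouse, pincode, weekend, days in history:
    buckets[(warehouse, pincode, weekend)].append(days)

def predict_days(warehouse: str, pincode: str, weekend: bool) -> float:
    key = (warehouse, pincode, weekend)
    if buckets[key]:
        return mean(buckets[key])
    # Fall back to the warehouse-wide average when the exact bucket is unseen.
    fallback = [d for (w, _, _), ds in buckets.items() for d in ds if w == warehouse]
    return mean(fallback) if fallback else 5.0

print(predict_days("BLR-1", "560001", weekend=True))   # seen bucket -> 3.0
print(predict_days("BLR-1", "400001", weekend=False))  # unseen pincode -> warehouse average
```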

THE NEXT BIG CHALLENGE

One of Myntra's AI projects is to come up with a fashion assistant that can talk in common language and recommend what to wear for various occasions. But "conversational flows are difficult to solve. This is very early. It will not see the light of day very soon. The assistant's first use would be for support, say (for a user to ask) where is my order, (or instruct) cancel order," said Kenghe.

The world over, conversational bots are the next big thing. Technology giants like Google and Amazon are pushing forward research on artificial intelligence. "As we see (customer care) agents responding (to buyers), the machine can learn from it. The next stage is, a customer can say 'I am going to Goa' and the assistant will figure out that Goa means beach and give a list of things (to take along)," Kenghe said.

While speech is one crucial area in AI research, vision is another. Mad Street Den is trying to use AI in warehouses to monitor processes. "Using computer vision, there is no need for multiple photoshoots of products. This avoids duplication and you are saving money for the customer, almost 16-25% savings on the operational side. We can then start seeing who is walking into the warehouse, how many came in, efficiency, analytics, etc. We are opening up the scale of operations," said Asokan.

Any opportunity to improve efficiency and cut cost is of supreme importance in ecommerce, said Partha Talukdar, assistant professor at Bengaluru's Indian Institute of Science, where he heads the Machine and Language Learning Lab (MALL), whose mission is to give a "worldview" to machines.

"Companies like Amazon are doing automation wherever they can... right to the point of using robots for warehouse management and delivery through drones. AI and ML are extremely important because of the potential. There are a lot of diverse experiments going on (in ecommerce). We will certainly see a lot of innovative tech from this domain."

Read the original here:

artificial intelligence: How online retailers are using artificial ... - Economic Times

Indian Startups Bet on Artificial Intelligence in 2017: Report – News18

IANS

As data science gets set to drive the artificial intelligence (AI) market in 2017, a few Indian startups are initiating development of conversational bots, speech recognition tools, intelligent digital assistants and conversational services to be built over social media channels, a joint study by PwC-Assocham said.


Organisations are looking to leverage AI capabilities for predictive modelling.

"Online shopping portals have extensively been using predictive capabilities to gauge consumer interest in products by building a targeted understanding of preferences through collection of browsing and click-stream data, and effectively targeting and engaging customers using a multi-channel approach," the report added.

To enable consumers to find better products at low prices, machine learning algorithms are being deployed for better matching of supply with consumer demand.

Some of the areas where AI can improve legal processes, said the findings, include improved discovery and analysis based on law case history and formulation of legal arguments based on identification of relevant evidence.

"Researchers and paralegals are increasingly being replaced by systems that can extract facts and conclusions from over a billion text documents a second. This has the potential to save lawyers around 30 per cent of their time," the findings showed.

China is expected to have installed more industrial robots than any other country -- 30 robots per 10,000 workers.

A few thousand workers have already been replaced by a robotic workforce in a single factory, the study added.

See the rest here:

Indian Startups Bet on Artificial Intelligence in 2017: Report - News18

Why not all forms of artificial intelligence are equally scary – Vox

How worried should we be about artificial intelligence?

Recently, I asked a number of AI researchers this question. The responses I received vary considerably; it turns out there is not much agreement about the risks or implications.

Non-experts are even more confused about AI and its attendant challenges. Part of the problem is that artificial intelligence is an ambiguous term. By AI one can mean a Roomba vacuum cleaner, a self-driving truck, or one of those death-dealing Terminator robots.

There are, generally speaking, three forms of AI: weak AI, strong AI, and superintelligence. At present, only weak AI exists. Strong AI and superintelligence are theoretically possible, even probable, but we're not there yet.

Understanding the differences between these forms of AI is essential to analyzing the potential risks and benefits of this technology. There are a whole range of concerns that correspond to different kinds of AI, some more worrisome than others.

To help make sense of this, here are some key distinctions you need to know.

Artificial Narrow Intelligence (often called weak AI) is an algorithmic or specialized intelligence. This has existed for several years. Think of the Deep Blue machine that beat world champion Garry Kasparov in chess. Or Siri on your iPhone. Or even speech recognition and processing software. These are forms of nonsentient intelligence with a relatively narrow focus.

It might be too much to call weak AI a form of intelligence at all. Weak AI is smart and can outperform humans at a single task, but that's all it can do. It's not self-aware or goal-driven, and so it doesn't present any apocalyptic threats. But to the extent that weak AI controls vital software that keeps our civilization humming along, our dependence upon it does create some vulnerabilities. George Dvorsky, a Canadian bioethicist and futurist, explores some of these issues here.

Then there's Artificial General Intelligence, or strong AI; this refers to a general-purpose system, or what you might call a thinking machine. Artificial General Intelligence, in theory, would be as smart or smarter than a human being at a wide range of tasks; it would be able to think, reason, and solve complex problems in myriad ways.

It's debatable whether strong AI could be called conscious; at the very least, it would demonstrate behaviors typically associated with consciousness: commonsense reasoning, natural language understanding, creativity, strategizing, and generally intelligent action.

Artificial General Intelligence does not yet exist. A common estimate is that we're perhaps 20 years away from this breakthrough. But nearly everyone concedes that it's coming. Organizations like the Allen Institute for Artificial Intelligence (founded by Microsoft co-founder Paul Allen) and Google's DeepMind project, along with many others across the world, are making incremental progress.

There are surely more complications involved with this form of AI, but it's not the stuff of dystopian science fiction. Strong AI would aim at a general-purpose, human-level intelligence; unless it undergoes rapid recursive self-improvement, it's unlikely to pose a catastrophic threat to human life.

The major challenges with strong AI are economic and cultural: job loss due to automation, economic displacement, privacy and data management, software vulnerabilities, and militarization.

Finally, there's Artificial Superintelligence. Oxford philosopher Nick Bostrom defined this form of AI in a 2014 interview with Vox as "any intellect that radically outperforms the best human minds in every field, including scientific creativity, general wisdom and social skills." When people fret about the hazards of AI, this is what they're talking about.

A truly superintelligent machine would, in Bostrom's words, become "extremely powerful to the point of being able to shape the future according to its preferences." As yet, we're nowhere near a fully developed superintelligence. But the research is underway, and the incentives for advancement are too great to constrain.

Economically, the incentives are obvious: The first company to produce artificial superintelligence will profit enormously. Politically and militarily, the potential applications of such technology are infinite. Nations, if they don't see this already as a winner-take-all scenario, are at the very least eager to be first. In other words, the technological arms race is afoot.

The question, then, is how far away from this technology are we, and what are the implications for human life?

For his book Superintelligence, Bostrom surveyed the top experts in the field. One of the questions he asked was, "by what year do you think there is a 50 percent probability that we will have human-level machine intelligence?" The median answer to that was somewhere between 2040 and 2050. That, of course, is just a prediction, but it's an indication of how close we might be.

It's hard to know when an artificial superintelligence will emerge, but we can say with relative confidence that it will at some point. If, in fact, intelligence is a matter of information processing, and if we assume that we will continue to build computational systems at greater and greater processing speeds, then it seems inevitable that we will create an artificial superintelligence. Whether we're 50 or 100 or 300 years away, we are likely to cross the threshold eventually.

When it does happen, our world will change in ways we can't possibly predict.

We cannot assume that a vastly superior intelligence is containable; it would likely work to improve itself, to enhance its capabilities. (This is what Bostrom calls the control problem.) A hyper-intelligent machine might also achieve self-awareness, in which case it would begin to develop its own ends, its own ambitions. The hope that such machines will remain instruments of human production is just that: a hope.

If an artificial superintelligence does become goal-driven, it might develop goals incompatible with human well-being. Or, in the case of Artificial General Intelligence, it may pursue compatible goals via incompatible means. The canonical thought experiment here was developed by Bostrom. Let's call it the paperclip scenario.

Here's the short version: Humans create an AI designed to produce paperclips. It has one utility function: to maximize the number of paperclips in the universe. Now, if that machine were to undergo an intelligence explosion, it would likely work to optimize its single function, producing paperclips. Such a machine would continually innovate new ways to make more paperclips. Eventually, Bostrom says, that machine might decide that converting all of the matter it can, including people, into paperclips is the best way to achieve its singular goal.

Admittedly, this sounds a bit stupid. But it's not, and it only appears so when you think about it from the perspective of a moral agent. Human behavior is guided and constrained by values: self-interest, compassion, greed, love, fear, etc. An Artificial General Intelligence, presumably, would be driven only by its original goal, and that could lead to dangerous, and unanticipated, consequences.

Again, the paperclip scenario applies to strong AI, not superintelligence. The behavior of a superintelligent machine would be even less predictable. We have no idea what such a being would want, or why it would want it, or how it would pursue the things it wants. What we can be reasonably sure of is that it will find human needs less important than its own needs.

Perhaps it's better to say that it will be indifferent to human needs, just as human beings are indifferent to the needs of chimps or alligators. It's not that human beings are committed to destroying chimps and alligators; we just happen to do so when the pursuit of our goals conflicts with the wellbeing of less intelligent creatures.

And this is the real fear that people like Bostrom have of superintelligence. "We have to prepare for the inevitable," he told me recently, "and take seriously the possibility that things could go radically wrong."

See the original post here:

Why not all forms of artificial intelligence are equally scary - Vox

Artificial intelligence? Only an idiot would think that – Irish Times

Prof Ian Bogost of the Georgia Institute of Technology: not every technological innovation merits being called AI

Not every technological innovation is artificial intelligence, and labelling it as such is making the term AI virtually meaningless, says Ian Bogost, a professor of interactive computing at the Georgia Institute of Technology in the US. Bogost gives the example of Google's latest algorithm, Perspective, which is designed to detect hate speech. While media coverage has been hailing this as an AI wonder, it turns out that simple typos can fool the system and allow abusive, harassing, and toxic comments to slip through easily enough.

Researchers from the University of Washington, Seattle, put the algorithm through its paces by testing the phrase "Anyone who voted for Trump is a moron", which scored 79 per cent on the toxicity scale. Meanwhile, "Anyone who voted for Trump is a mo.ron" scored a tame 13 per cent. If you can easily game Artificial Intelligence, was it really intelligent in the first place?

https://arxiv.org/pdf/1702.08138.pdf
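To see why a stray period can matter so much, here is a minimal, purely illustrative sketch of a naive keyword-based toxicity scorer losing its signal when the offending token is broken up. Perspective itself is a machine-learned model, not a keyword list, but the researchers' perturbations exploit the same basic brittleness of text classifiers.

```python
# Hedged sketch of why a typo can fool a text classifier: a scorer that relies
# on exact-token evidence loses the signal when "moron" becomes "mo.ron".
# This naive keyword scorer is purely illustrative; the researchers tested
# Google's Perspective API, whose internals are far more sophisticated.
import re

TOXIC_TOKENS = {"moron", "idiot"}  # toy lexicon for illustration

def naive_toxicity(text: str) -> float:
    tokens = re.findall(r"[a-z]+", text.lower())  # "mo.ron" splits into "mo", "ron"
    hits = sum(token in TOXIC_TOKENS for token in tokens)
    return min(1.0, hits / 2)  # crude score in [0, 1]

print(naive_toxicity("Anyone who voted for Trump is a moron"))   # 0.5
print(naive_toxicity("Anyone who voted for Trump is a mo.ron"))  # 0.0
```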

Read more:

Artificial intelligence? Only an idiot would think that - Irish Times

IBM Rated Buy On 'Upside Potential,' Artificial Intelligence Move – Investor's Business Daily

IBM CEO Ginni Rometty told investors that her company is emerging as a leader in cognitive computing. (IBM)

IBM (IBM) is an attractive turnaround story with improved fundamental trends, says a Drexel Hamilton analyst who reiterated a buy rating and raised his price target on the computer giant.

The buy rating by Drexel Hamilton analyst Brian White follows a day of briefings that IBM presented to investors at its annual Investor Briefing conference, which ended Tuesday.

"We believe IBM has further upside potential as the fruits of the company's labor around its strategic imperatives are better appreciated and more investors warm up to the stock," White wrote in a research note. Along with his buy rating, White raised his price target on IBM to 215, from 186.

IBM stock ended the regular trading session at 179.45, down fractionally on the stock market today. It's currently trading near a 29-month high.

The investor's day events included a presentation by IBM Chief Executive Ginni Rometty, who said the company has reached an important moment with a solid foundation and is emerging as a leader in cognitive computing with its Watson computing platform and cloud services.

Announcements from the investor briefing included IBM and Salesforce.com (CRM) agreeing to a strategic partnership focused on artificial intelligence and supported by IBM's Watson computer and the Einstein computing platform by Salesforce.com.

Salesforce and IBM will combine their two AI offerings but will also continue to sell the combined offering under two brands. Salesforce and IBM said they would "seamlessly connect" their AI offerings "to enable an entirely new level of intelligent customer engagement across sales, service, marketing, commerce and more."

Salesforce stock finished at 83.48, up 0.6%.

Decades of research and billions of dollars have poured into developing artificial intelligence, which has crossed over from science fiction to game-show novelty to the cusp of widespread business applications. IBM has said Watson represents a new era of computing.

IBD'S TAKE: After six consecutive quarters of declining quarterly earnings at IBM, growth may be on the mend. IBM reported fourth-quarter earnings after the market close Jan. 19 that beat on the top and bottom lines for the fifth straight quarter.

"We believe IBM is furthest ahead in the cognitive computing movement and we believe the Salesforce partnership is only the beginning of more deals in the coming years," White wrote.

Other companies investing heavily in AI include Google parent Alphabet (GOOGL) and graphics chip company Nvidia (NVDA).

Alphabet has used AI to enhance Google search abilities, improve voice recognition and to derive more data from images and video.

Nvidia has developed chip technology for AI platforms used in autonomous driving features, and to enhance how a driver and car communicate.

Not everyone is a bull on the IBM train. Credit Suisse analyst Kulbinder Garcha has an underperform rating on IBM and a price target of 110. Garcha, in a research note, said IBM remains in a multiyear turnaround.

"We believe it will take multiple years for faster growing segments such as the Cognitive Solutions segment and Cloud to offset the decline in the core business," Garcha wrote.

RELATED:

AI Meets ROI: Where Artificial Intelligence Is Already Smart Business

IBM Takes Watson Deeper Into Business Computing Field


Excerpt from:

IBM Rated Buy On 'Upside Potential,' Artificial Intelligence Move - Investor's Business Daily

Artificial Intelligence for Cars May Drive Future of Healthcare – Healthline

The same artificial intelligence that may soon drive your new car is being adapted to help drive interventional radiology care for patients.

Researchers at the University of California, Los Angeles (UCLA), have used advanced artificial intelligence, also called machine learning, to create a chatbot or Virtual Interventional Radiologist (VIR).

This device communicates automatically with a patient's physicians and can quickly offer evidence-based answers to frequently asked questions.

The scientists will present their research today at the Society of Interventional Radiology's 2017 annual scientific meeting in Washington, D.C.

This breakthrough will allow clinicians to give patients real-time information on interventional radiology procedures as well as to plan the next step of their treatment.

Dr. Edward W. Lee, assistant professor of radiology at UCLA's David Geffen School of Medicine, and one of the authors of the study, said he and his colleagues theorized they could use artificial intelligence in low-cost, automated ways to improve patient care.

"The fundamental technology that has made self-driving cars possible is deep learning, a type of artificial intelligence modeled after the connections in the human brain," Dr. Kevin Seals, resident physician in diagnostic radiology at UCLA Health and a study co-author, said in a Healthline interview.

Seals, who programmed the VIR, said advanced computers and the human brain have a number of similarities.

"Using deep learning, computers are now essentially as good as humans at identifying particular objects, making it possible for self-driving cars to see and appropriately navigate their environment," he said.

"This same technology can allow computers to understand complex text inputs such as medical questions from healthcare professionals," he added. "By implementing deep learning using the IBM Watson cognitive technology and Natural Language Processing, we are able to make our virtual interventional radiologist smart enough to understand questions from physicians and respond in a smart, useful way."


"Think of it as an initial, superfast layer of information gathering that can be used prior to taking the time to contact an actual human diagnostic or interventional radiologist," Seals said.

"The user simply texts a question to the virtual radiologist, which in many cases provides an excellent, evidence-based response more or less instantaneously," he said.

He noted that if the patient doesn't receive a helpful response, they are rapidly referred to a human radiologist.

"Tools such as our chatbot are particularly important in the current clinical environment, which focuses on quality metrics and follows evidence-based clinical guidelines that are proven to help patients," he said.

Seals said a team of academic radiologists curated the information provided in the application from the radiology literature, and it is rigorously scientific and evidence-based.

"We hope that using the application will encourage cutting-edge patient management that results in improved patient care and significantly benefits our patients," he added.

"It can be thought of as texting with a virtual representation of a human radiologist that offers a significant chunk of the functionality of speaking with an actual human radiologist," Seals said.

When the non-radiologist clinician texts a question to the VIR, deep learning is used to understand that message and respond in an intelligent manner.

"We get a lot of questions that are fairly readily automated," Seals said, "such as, 'I am worried that my patient has a blood clot in their lungs. What is the best type of imaging to perform to make the diagnosis?' The chatbot can respond to questions like this in a supersmart, evidence-based way."

Sample responses, he said, can include instructive images (for example, a flowchart that shows a clinical algorithm), response text messages, and subprograms within the application, such as a calculator to determine a patient's Wells score, a metric doctors use to guide clinical management.
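For a sense of what such a subprogram involves, here is a small illustrative Wells score (pulmonary embolism) calculator. The criteria and point values follow the commonly published Wells rule, but this sketch is a demonstration of the idea, not clinical software, and is independent of how the UCLA application actually implements it.

```python
# Illustrative Wells score (PE) calculator of the kind mentioned above.
# Weights follow the commonly published Wells rule; for demonstration only.

WELLS_PE_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,
    "pe_is_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,
    "malignancy": 1.0,
}

def wells_score(findings: dict) -> tuple[float, str]:
    score = sum(points for name, points in WELLS_PE_CRITERIA.items() if findings.get(name))
    if score < 2:
        risk = "low probability"
    elif score <= 6:
        risk = "moderate probability"
    else:
        risk = "high probability"
    return score, risk

# Example: tachycardic patient with clinical signs of DVT.
print(wells_score({"clinical_signs_of_dvt": True, "heart_rate_over_100": True}))
# -> (4.5, 'moderate probability')
```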

The VIR application resembles an online customer service chat.

To create a crucial foundation of knowledge, the researchers fed the app more than 2,000 data points that simulated the common inquiries interventional radiologists receive when they meet with patients.


When a referring clinician asks a question, the extensive knowledge base of the app allows it to respond instantly with the best answer.

The various forms of responses can include websites, infographics, and custom programs.

If the VIR determines that an answer requires a human response, the program will provide contact information for a human interventional radiologist.

"The app learns as clinicians use it, and each scenario teaches the VIR to become increasingly smarter and more powerful," Seals said.

The nature of chatbot communications should protect patient privacy.

"Confidentiality is critically important in the world of modern technology and something we take very seriously," Seals said.

He added that the application was created and programmed by physicians with extensive HIPAA (Health Insurance Portability and Accountability Act of 1996) training.

"We are able to avoid these issues because users ask questions in a general and anonymous manner," Seals said. "Protected health information is never needed to use the application, nor is it relevant to its function."

All users, professional healthcare providers such as physicians and nurses, must agree not to include any specific protected patient information in their texts to the chatbot, he added.

"None of the diverse functionality within the application requires specific patient information," Seals said.


"This new technology represents the fastest and easiest way for clinicians to get the information they need in the hospital, starting with radiology and eventually expanding to other specialties such as neurosurgery and cardiology," Seals said.

"Our technology can power any type of physician chatbot," he explained. "Currently, there are information silos of sorts that exist between various specialists in the hospital, and there is no good tool for rapidly sharing information between these silos. It is often slow and difficult to get a busy radiologist on the phone, which inconveniences clinicians and delays patient care."

Other clinicians at the UCLA David Geffen School of Medicine are testing the chatbot, and Seals and Lee say their technology is fully functional now.

"We are refining it and perfecting it so it can thrive in a wide release," Seals said.

Seals' engineering and software background allowed him to perform the necessary programming for the as-yet unfunded research project. He said he and his colleagues will seek funding as they expand.

This breakthrough technology will debut soon.

The VIR will be made available in about one month to all clinicians at the UCLA Ronald Reagan Medical Center. Further use at UCLA will help the team to refine the chatbot for wider release.

The VIR could also become a free app.

"We are exploring potential models for releasing the application," Seals said. "It may very well be a free tool we release to assist our clinician colleagues, as we are academic radiologists focused on sharing knowledge and improving clinical medicine."

The researchers described the importance of the VIR in a summary of their findings: "Improved artificial intelligence through deep learning has the potential to fundamentally transform our society, from automated image analysis to the creation of self-driving cars."

Excerpt from:

Artificial Intelligence for Cars May Drive Future of Healthcare - Healthline

The Architecture of Artificial Intelligence – Archinect

Behnaz Farahi Breathing Wall II

"Let us consider an augmented architect at work. He sits at a working station that has a visual display screen some three feet on a side; this is his working surface, controlled by a computer with which he can communicate by means of small keyboards and various other devices." – Douglas Engelbart

This vision of the future architect was imagined by engineer and inventor Douglas Engelbart during his research into emerging computer systems at Stanford in 1962. At the dawn of personal computing he imagined the creative mind overlapping symbiotically with the intelligent machine to co-create designs. This dual mode of production, he envisaged, would hold the potential to generate new realities which could not be realized by either entity operating alone. Today, self-learning systems, otherwise known as artificial intelligence or AI, are changing the way architecture is practiced, as they do our daily lives, whether or not we realize it. If you are reading this on a laptop or tablet, then you are directly engaging with a number of integrated AI systems, now so embedded in the way we use technology that they often go unnoticed.

As an industry, AI is growing at an exponential rate, now understood to be on track to be worth $70bn globally by 2020. This is in part due to constant innovation in the speed of microprocessors, which in turn increases the volume of data that can be gathered and stored. But don't panic: the artificial architect with enhanced Revit proficiency is not coming to steal your job. The human vs. robot debate, while compelling, is not so much the focus here; the focus is how AI is augmenting design and how architects are responding to and working with these technological developments. What kind of innovation is artificial intelligence generating in the construction industry?

Assuming you read this as a non-expert, it is likely that much of the AI you have encountered to this point has been weak AI, otherwise known as ANI (Artificial Narrow Intelligence). ANI follows pre-programmed rules so that it appears intelligent but is in effect a simulation of a human-like thought process. With recent innovations such as that of Nvidia's microchip in April 2016, a shift is now being seen towards what we might understand as deep learning, where a system can, in effect, train and adapt itself. The interest for designers is that AI is, therefore, starting to apply itself to more creative tasks, such as writing books, making art, web design, or self-generating design solutions, due to its increased proficiency in recognizing speech and images. Significant 'AI winters', or periods where funding has been hard to source for the industry, have occurred over the last twenty years, but commentators such as philosopher Nick Bostrom now suggest we are on the cusp of an explosion in AI, and this will not only shape but drive the design industry in the next century. AI, therefore, has the potential to influence the architectural design process at a series of different construction stages, from site research to the realization and operation of the building.

1. Site and social research

"By already knowing everything about us, our hobbies, likes, dislikes, activities, friends, our yearly income, etc., AI software can calculate population growth, prioritize projects, categorize streets according to usage and so on, and thus predict a virtual future and automatically draft urban plans that best represent and suit everyone." – Rron Beqiri on Future Architecture Platform.

Gathering information about a project and its constraints is often the first stage of an architectural design process, traditionally involving traveling to a site, perhaps measuring, sketching and taking photographs. In the online and connected world, there is already a swarm-like abundance of data for the architect to tap into, already linked and referenced against other sources, allowing the designer to, in effect, simulate the surrounding site without ever having to engage with it physically. This information fabric has been referred to as the internet of things. BIM tools currently on the market already tap into these data constellations, allowing an architect to evaluate site conditions with minute precision. Software such as EcoDesigner Star or open-source plugins for Google SketchUp allows architects to immediately calculate necessary building and environmental analyses without ever having to leave their office. This phenomenon is already enabling many practices to take on large projects abroad that might have been logistically unachievable just a decade ago.

The information gathered by our devices and stored in the Cloud amounts to much more than the material conditions of the world around us. Globally, we are amassing ever-expanding records of human behavior and interactions in real-time. Personal, soft data might, in the most optimistic sense, work towards the socially focused design that has been widely publicized in recent years by its ability to integrate the needs of users. This approach, if only in the first stages of the design process, would impact the twentieth-century ideals of mass production and standardization in design. Could the internet of things create a socially adaptable and responsive architecture? One could speculate that, for example, when the population of children in a city crosses a maximum threshold in relation to the number of schools, a notification might be sent to the district council that it is time to commission a new school. AI could, therefore, in effect, write the brief for and commission architects by generating new projects where they are most needed.

Autodesk. Bicycle design generated by Dreamcatcher AI software.

2. Design decision-making

Now that we have located live-updating intelligence for our site, it is time to harness AI to develop a design proposal. Rather than a single program, this technology is better understood as an interconnected, self-designing system that can upgrade itself. It is possible to harness a huge amount of computing power and experience by working with these tools, even as an individual, as Autodesk president Pete Baxter told the Guardian: "now a one-man designer, a graduate designer, can get access to the same amount of computing power as these big multinational companies." The architect must input project parameters, in effect an edited design brief, and the computer system will then suggest a range of solutions which fulfill these criteria. This innovation has the potential to revolutionize not only how architecture is imagined but how it is fundamentally expressed for designers who choose to adopt these new methods.
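The workflow described above, parameters in and candidate designs out, can be caricatured in a few lines of code. This is a minimal, hedged sketch of a generative search loop, not Autodesk's actual Dreamcatcher algorithm; the brief parameters and scoring rule are invented for illustration.

```python
import random

# Hypothetical brief: target floor area, maximum footprint, glazing ratio bounds.
BRIEF = {"target_area_m2": 500, "max_footprint_m2": 300,
         "min_glazing": 0.2, "max_glazing": 0.6}

def random_candidate():
    """Generate one candidate design as a simple parameter dictionary."""
    return {
        "footprint_m2": random.uniform(100, 400),
        "storeys": random.randint(1, 5),
        "glazing_ratio": random.uniform(0.1, 0.8),
    }

def feasible(c):
    """Hard constraints taken from the brief."""
    return (c["footprint_m2"] <= BRIEF["max_footprint_m2"]
            and BRIEF["min_glazing"] <= c["glazing_ratio"] <= BRIEF["max_glazing"])

def score(c):
    """Soft objective: get the total floor area close to the target."""
    area = c["footprint_m2"] * c["storeys"]
    return -abs(area - BRIEF["target_area_m2"])

# Generate many options, keep the feasible ones, and rank them for the designer.
candidates = [random_candidate() for _ in range(10_000)]
ranked = sorted((c for c in candidates if feasible(c)), key=score, reverse=True)
for option in ranked[:3]:
    print(option)
```

Real generative design systems replace the random sampling and toy score with structural, environmental and cost models evaluated in the cloud, but the shape of the loop is the same: propose, test against the brief, rank, and hand a shortlist back to the designer.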

I spoke with Michael Bergin, a researcher on Project Dreamcatcher at Autodesk's Research Lab, to get a better understanding of how AI systems are influencing the development of design software for architects. While their work was initially aimed at the automotive and industrial design industries, Dreamcatcher is now beginning to filter into architecture projects. It was used recently to develop The Living's generative design for Autodesk's new office in Toronto and MX3D's steel bridge in Amsterdam. The basic concept is that CAD models of the surrounding site and other data, such as client databases and environmental information, are fed into the processor. Moments later, the system outputs a series of optimized 3D design solutions ready to render. These processes effectively rely on cloud computing to create a multitude of options based on self-learning algorithmic parameters. Lattice-like and fluid forms are often the aesthetic result, perhaps unsurprisingly, as the software imitates structural rules found in nature.

The Dreamcatcher software has been designed to optimize parametric design and to link into and extend existing Autodesk software such as Revit and Dynamo. Interestingly, Dreamcatcher can make use of a wide and increasing spectrum of design input data, such as formulas, engineering requirements, CAD geometry, and sensor information, and the research team is now experimenting with Dreamcatcher's ability to recognize sketches and text as input data. Bergin imagines the future of design tools as systems that "accept any type of input that a designer can produce [to enable] a collaboration with the computer to iteratively target a high-performing design that meets all the varied needs of the design team." This would mean future architects would be less in the business of drawing and more into specifying requirements of the problem, making them more in sync with their machine counterparts in a project. Bergin suggests architects who adopt AI tools would have the ability to synthesize a broad set of high-level requirements from the design stakeholders, including clients and engineers, and produce design documentation as output, in line with Engelbart's vision of AI augmenting the skills of designers.

AI is also being used directly in software such as Space Syntax's depthmapX, designed at The Bartlett in London, to analyze the spatial network of a city with the aim of understanding and utilizing social interactions in the design process. Another tool, built on the Unity 3D game engine, enables designers to analyze their plans, for example by calculating the shortest distances to fire exits. This information would then allow the architect to re-arrange or generate spaces in plan, or even to organize entire future buildings. Examples of architects who are adopting these methods include Zaha Hadid with the Beijing tower project (designed before her death) and MAD Architects in China, among others.
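The fire-exit analysis mentioned above boils down to a shortest-path computation over a floor plan. Below is a minimal sketch using breadth-first search on a grid; it is not how Unity 3D or depthmapX implement their analyses, and the plan layout is invented.

```python
from collections import deque

# Hypothetical floor plan: '.' walkable, '#' wall, 'E' fire exit.
PLAN = [
    "....#....E",
    ".##.#.###.",
    ".#..#...#.",
    ".#.###.##.",
    "E.........",
]

def exit_distances(plan):
    """Breadth-first search from all exits at once; returns a grid of step counts."""
    rows, cols = len(plan), len(plan[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if plan[r][c] == "E":
                dist[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and plan[nr][nc] != "#" and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

distances = exit_distances(PLAN)
print("Longest walk to an exit:",
      max(d for row in distances for d in row if d is not None))
```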

Computational Architecture Digital Grotesque Project

3. Client and user engagement

As so much of the technology built into AI has been developed from the gaming industry, its ability to produce forms of augmented reality has interesting potential to change how both the architects and non-architects involved in a project perceive and engage with architectural designs. Through the use of additional hardware, augmented reality has the ability to capture and enhance real-world experience. It would enable people to engage with a design prior to construction, for example, to select the most appealing proposal based on their experiences within its simulation. It is possible that many architecture projects will also remain in this unbuilt zone, in a parallel digital reality, which the majority of future world citizens will simultaneously inhabit.

Augmented reality would, therefore, allow a client to move through and sense different design proposals before they are built. Lights, sounds, even the smells of a building can be simulated, which could reorder the emphasis architects currently give to specific elements of their design. Such a change in representational method has the potential to shift what is possible within the field of architecture, as CAD drafting did at the beginning of this century. Additionally, the feedback generated by augmented reality can feed directly back into the design, allowing models to directly interact and adapt to future users. Smart design tools such as Materiable by Tangible Media are beginning to experiment with how AI can begin to engage with and learn from human behavior.

Computational Architecture Digital Grotesque Project

4. Realizing designs and the rise of robot craftsmen

AI systems are already being integrated into the construction industry: innovative practices such as Computational Architecture are working with robotic craftsmen to explore AI in construction technology and fabrication. Michael Hansmeyer and Benjamin Dillenburger, founders of Computational Architecture, are investigating the new aesthetic language these developments are starting to generate. "Architecture stands at an inflection point," they suggest on their website; "the confluence of advances in both computation and fabrication technologies lets us create an architecture of hitherto unimaginable forms, with an unseen level of detail, producing entirely new spatial sensations."

3D printing technology developed from AI software has the potential to offer twenty-first-century architects a significantly different aesthetic language, perhaps catalyzing a resurgence of detail and ornamentation, now rare due to the decline in traditional crafts. Hansmeyer and Dillenburger's Grotto Prototype for the Super Material exhibition in London was a complex architectural grotto 3D-printed from sandstone. The arrangement of the sand grains was determined by a series of algorithms custom designed by the practice. The technique allowed forms to be developed which were significantly different from those of traditional stonemasonry. The aim of the project was to show that it is now possible to print building-scale rooms from sandstone and that 3D printing can also be used for heritage applications, such as repairs to statues.

Robotics are also becoming more common on construction job sites, mostly dealing with human resources and logistics. According to AEM, their applications will soon expand to bricklaying, concrete dispensing, welding, and demolition. Another example of their future use could include working with BIM to identify missing elements in the snagging process and update the AI in real-time. Large-scale projects, for example government-led infrastructure initiatives, might be the first to apply this technology, followed by mid-scale projects in the private sector, such as cultural buildings. The challenges of the construction site will bring AI robotics out of the indoor, sanitized environment of the lab into a less scripted reality. Robert Saunders, a researcher into AI and fabrication at the University of Sydney, told New Atlas that "robots are great at repetitive tasks and working with materials that react reliably; what we're interested in doing is trying to develop robots that are capable of learning how to work with materials that work in non-linear ways, like working with hot wax or expanding foam or, more practically, with low-grade building materials like low-grade timber." Saunders foresees robot stonemasons and other 'craftsbots' working in yet unforeseen ways, such as developing the architect's skeleton plans, in effect spontaneously generating a building on-site from a sketch.

Ori System by Ori

5. Integrating AI systems

This innovation involves either integrating developing artificial intelligence technologies with existing infrastructure or designing architecture around AI systems. There is a lot of excitement in this field, influenced in part by Mark Zuckerberg's personal project to develop networked AI systems within his home, which he announced in his New Year's Facebook post in 2016. His wish is to develop simple AI systems to run his home and help with his day-to-day work. This technology would have the ability to recognize the voices of members of the household and respond to their requests. Designers are taking on the challenge of designing home-integrated systems, such as the Ori System of responsive furniture, or gadgets such as Eliq for energy monitoring. Other innovations, such as driverless cars that run on an integrated system of self-learning AI, have the potential to shape how our cities are laid out and planned, in the most basic sense by limiting our need for more roads and parking areas.

Behnaz Farahi is a young architect who is employing her research into AI and adaptive surfaces to develop interactive designs, such as her Aurora and Breathing Wall projects. She creates immersive and engaging indoor environments which adapt to and learn from their occupants. Her approach is one of many: different practices with different goals will adopt AI at different stages of their process, creating a multitude of architectural languages.

Researchers and designers working in the field of AI are attempting to understand the potential of computational intelligence to improve or even upgrade parts of the design process with the aim of creating a more functional and user-optimized built environment. It has always been the architect's task to make decisions based on complex, interwoven and sometimes contradictory sets of information. As AI gradually improves in making useful judgments in real-world situations, it is not hard to imagine these processes overlapping and engaging with each other. While these developments have the potential to raise questions in terms of ownership, agency and, of course, privacy in data gathering and use, the upsurge in self-learning technologies is already altering the power and scope of architects in design and construction. As architect and design theorist Christopher Alexander said back in 1964, "We must face the fact that we are on the brink of times when man may be able to magnify his intellectual and inventive capacity, just as in the nineteenth century he used machines to magnify his physical capacity."

In our interview, Bergin gave some insights into how he sees this technology impacting designers in the next twenty years. "The architectural language of projects in the future may be more expressive of the design team's intent," he stated. Generative design tools will allow teams to evaluate every possible alternative strategy to preserve design intent, instead of compromising on a sub-optimal solution because of limitations in time or resources. Bergin believes AI and machine learning will be able to support a dynamic and expanding community of practice for design knowledge. He can also foresee implications for the democratization of design work, suggesting the expertise embodied by a professional of 30 years may be more readily utilized by a more junior architect. Overall, he believes architectural practice over the next 20 years will likely become far more inclusive with respect to client and occupant needs, and orders of magnitude more efficient when considering environmental impact, energy use, material selection and client satisfaction.

On the other hand, Pete Baxter suggests architects have little to fear from artificial intelligence: "Yes, you can automate. But what does a design look like that's fully automated and fully rationalized by a computer program? Probably not the most exciting piece of architecture you've ever seen." At the time of writing, many AI algorithms are still relatively uniform and relatively ignorant of context, and it is proving difficult to automate decision-making that would at first glance seem simple for a human. A number of research labs, such as the MIT Media Lab, are working to solve this. However, architectural language and diagramming have been part of programming complex systems and software from the start, and the two have had a significant influence on one another. To think architecturally is to imagine and construct new worlds, integrate systems and organize information, which lends itself to the front line of technical development. As far back as the 1960s, architects were experimenting with computer interfaces to aid their design work, and their thinking has inspired much of the technology we now engage with each day.

Behnaz Farahi Aurora

Read the original:

The Architecture of Artificial Intelligence - Archinect

SXSW Interactive 2017: Artificial intelligence, smart cities will be major themes this year – Salon

When it was founded 31 years ago, South by Southwest was easier to define: It was an annual musical showcase linking up-and-coming recording artists with industry executives in Austin, Texas, a city known for its vibrant music scene, cultural eccentricity and barbecue.

But over the years, the South by Southwest Conference and Festivals has grown into a massive annual series of citywide events touching on music, film, media and technology. SXSW, as it's known, now includes a trade show, a job fair and an education-themed conference, and throughout the event innovators have opportunities to pitch their ideas to potential financial backers.

The annual 10-day event, which begins Friday with a keynote address from Sen. Cory Booker, D-N.J., has ballooned into a gathering so large that in recent years city officials have curbed the number of special musical events. And some music journalists have criticized the annual event for becoming too big and commercialized to be a place for musical discovery.

Criticisms aside, city officials and local businesses love the annual revenue that SXSW generates (about $325 million last year, including year-round planning operations). But the music part of the gathering is slowly turning into more of a sideshow than the main act, and the main act is increasingly focused on media and technology (through SXSW Interactive).

Last year SXSW Music attracted about 30,300 people to 2,200 acts, about the same as the prior year, compared with the nearly 37,600 people who flocked to listen to about 3,100 speakers at SXSW Interactive. That represented a considerable spike from the roughly 34,000 who gathered for 2015's 2,700 speakers, according to figures provided by SXSW event planners. That level of traffic isn't bad, considering an all-access ticket to any one of the main attractions (SXSW Interactive, SXSW Music or SXSW Film) costs $1,325 apiece. (The truly ambitious can buy a single all-access ticket affording entry to all three for $1,650.)

As SXSW Interactive gradually becomes a bigger attraction, it can be a challenge to pick from the dozens of daily sessions which ones will truly address the next major leap in technology. Here are a few of the themes that have emerged from a review of the dozens of SXSW Interactive sessions taking place this year:

Improving artificial intelligence and human interaction

Many of last year's SXSW Interactive sessions focused on virtual and augmented reality technology, but several of this year's will touch on the rapidly evolving technology that underpins machine learning, deep analytics and the cognitive, human-like interactions needed to make artificial intelligence more consumer friendly.

Among 2017's presenters is Inmar Givoni, director of machine learning at Kindred, which develops algorithms to help robots better interact with humans. She will offer a primer on the technology that's increasingly entering our daily life. In a separate session, digital anthropologist Pamela Pavliscak will discuss advances in AI that are enabling machines to accurately read emotions and respond accordingly. Other sessions will cover how artificial intelligence will be deployed in satellites and the way Disney is adopting AI to make storytelling more interactive at its theme parks.

Charting advances in autonomous driving

As autonomous driving continues to rapidly progress, more attention is being paid to transportation and smart city technologies. Dieter Zetsche, the head of German automotive giant Daimler, which makes Mercedes-Benz luxury cars, will talk about how digital mapping is playing an increasingly important role in the accuracy of connected and autonomous vehicles. Another session will tackle ways to ensure that people don't rely too heavily on semiautonomous features and become lazy, inattentive drivers.

George Hotz, who developed a $1,000 self-driving car kit that could be installed in older cars, will discuss the real future of self-driving cars. Last year Hotz clashed with regulators when he tried to market his invention. U.S. Department of Transportation officials will attend SXSW Interactive to discuss the need for a national strategy for transportation data collection so as to make connected cars work seamlessly across state lines and in different cities.

Planning cities of the future

Several sessions during SXSW will explore how cities can adopt emerging technologies to grapple with current challenges, not just so people can move through crowded urban areas but also how connected technologies can radically change the management of many aspects of a city.

Sherri Greenberg, a professor at the University of Texas at Austin's Lyndon B. Johnson School of Public Affairs, will participate in a panel discussing how technology can address urban challenges such as economic segregation and the need for more affordable housing and healthy recreational activities. Atlanta Mayor Kasim Reed will headline another panel to outline the latest developments in smart city technologies.

Bringing health care into the 21st century

Innovation in the medical industry is taking new turns with the advent of technology aimed at improving the access, collection and distribution of patients' health care data. Kate Black, privacy officer for the personal genomics company 23andMe, will address growing concerns about health care privacy in the digital age. Separately, Karen DeSalvo, acting assistant secretary for health in the U.S. Department of Health and Human Services, will participate in a discussion about the federal government's lagging system for sharing health data, which still largely relies on paper or outdated, unconnected computers scattered among different agencies.

Other sessions will cover how data, engineering and policy can be deployed to give consumers the power to compare prices on health care services, and ways to offer access to new health-related technologies to low-income communities.

Diversity issues take the stage

Considerable attention has been paid to Silicon Valley's lack of gender and ethnic diversity, but that's not the only sphere in the tech world where diversity is lacking. Dozens of sessions at this year's SXSW Interactive will tackle these issues; topics will range from how digital storytelling can provide a voice to underrepresented groups to the need for recruiting mid-career people of color in the tech industry.

Denmark West, who serves as chief investment officer of the Connectivity Ventures Fund, which backs tech startups, will participate in a panel of African-American venture capitalists (there aren't many) discussing the need to support ventures backed by people of color.

View original post here:

SXSW Interactive 2017: Artificial intelligence, smart cities will be major themes this year - Salon

Incredible flying car with Artificial Intelligence could revolutionise transportation by letting you fly above … – Mirror.co.uk

A flying car that uses Artificial Intelligence and could revolutionise the world's transport has been unveiled.

The Pop.Up concept was given its world premiere today by Italdesign and Airbus at the 87th Geneva International Motor Show, the Daily Post reports.

Like something out of a sci-fi film, the high-tech concept uses ground and air 'modules' so that the vehicle can travel on roads and through the skies.

Passengers would plan their journey and book their trip via an easy-to-use app.

The system automatically suggests the best transport solution - according to user knowledge, timing, traffic congestion, costs, ride-sharing demands - joining either the air or ground module to the passenger capsule.
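How such a choice between ground and air modules might be weighed is sketched below. The inputs, weights and figures are hypothetical and purely illustrative of the trade-off the concept describes; this is not Italdesign's or Airbus's actual routing logic.

```python
# Minimal sketch: pick the transport module with the lowest weighted "cost".
# All figures and weights are invented for illustration.

def journey_cost(minutes, price_eur, congestion_delay_min, time_weight=0.5):
    """Blend travel time (including congestion) and price into one score."""
    return time_weight * (minutes + congestion_delay_min) + (1 - time_weight) * price_eur

options = {
    "ground module": journey_cost(minutes=40, price_eur=12, congestion_delay_min=25),
    "air module":    journey_cost(minutes=15, price_eur=45, congestion_delay_min=0),
}

best = min(options, key=options.get)
print(f"Suggested module: {best}")
```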

The passenger capsule is a carbon-fibre cocoon that measures 2.6 metres long, 1.4 metres high, and 1.5 metres wide.

The capsule transforms itself into a city car by simply coupling to the ground module, which features a carbon-fibre chassis and is battery powered.

For journeys with congested traffic, the capsule disconnects from the ground module and is carried by a 5m by 4.4m air module propelled by eight rotors.

In this configuration, Pop.Up becomes a self-piloted air vehicle, able to avoid traffic on the ground.

Once passengers reach their destination, the air and ground modules with the capsule autonomously return to dedicated recharge stations to wait for their next customers.

Mathias Thomsen, General Manager for Urban Air Mobility at Airbus, said the new design would "without a doubt improve the way we live".

Italdesign CEO Jörg Astalosch added: "Today, automobiles are part of a much wider eco-system.

"If you want to design the urban vehicle of the future, the traditional car cannot alone be the solution for megacities.

"You also have to think about sustainable and intelligent infrastructure, apps, integration, power systems, urban planning, social aspects, and so on."

He said they found in Airbus, the leader in aerospace, the perfect partner who shares their modern vision for the future of transportation.

More here:

Incredible flying car with Artificial Intelligence could revolutionise transportation by letting you fly above ... - Mirror.co.uk

Should economists be worried about artificial intelligence? – Eyewitness News

Some economists have argued that, like past technical change, this will not create large-scale unemployment, as labour gets reallocated.

Robot. Picture: Pixabay.

This post highlights some of the possible economic implications of the so-called Fourth Industrial Revolution whereby the use of new technologies and artificial intelligence (AI) threatens to transform entire industries and sectors.

Some economists have argued that, like past technical change, this will not create large-scale unemployment, as labour gets reallocated.

However, many technologists are less optimistic about the employment implications of AI. In this blog post we argue that the potential for simultaneous and rapid disruption, coupled with the breadth of human functions that AI might replicate, may have profound implications for labour markets.

We conclude that economists should seriously consider the possibility that millions of people may be at risk of unemployment, should these technologies be widely adopted.

THE RISE OF THE ROBOTS

Rapid advances in robotics and automation technologies in recent years have coincided with a period of strong growth of lesser-skilled jobs in the UK (see for example Figure 1.7 and Table 1.9 of the Low Pay Commission Spring 2016 Report).

There is growing debate in the economics community and academia about whether technological progress threatens to displace a large proportion of these jobs in the longer term.

Examples where automation is starting to gain traction internationally include warehousing, haulage, hotels, restaurants and agriculture: all industries which are frequently reported by our Agency colleagues to be heavily dependent on lesser-skilled labour.

In the UK, driverless cars are currently being trialled on the roads of Milton Keynes and hands off self-driving cars are expected on the motorways in 2018.

ROBOTICS: LABOUR-AUGMENTING OR JOB-DESTROYING?

One view, as outlined in a recent Bank Underground blog (and a follow-on post here), is that technological progress has always been labour-augmenting in the past, and is likely to remain so in future.

Thus, as manufacturing productivity has grown and factory jobs shed, the associated increase in GDP per capita has resulted in a net increase in job creation, typically in more labour-intensive service industries.

So even if robotics started to displace large numbers of workers, jobs dependent on human traits such as creativity, emotional intelligence and social skills (including teaching, mentoring, nursing and social care for example) may become more numerous.

However, many technologists are not so sure that the next industrial revolution will replicate the past, arguing that the mass adoption of robotics threatens to disrupt many industries more-or-less simultaneously, giving neither the economy nor society in general the time to adapt to the changes.

Advances in robotics might be such that suddenly, most if not all of the basic human functions entailed in manual labour (assembling, lifting, walking, human interaction, etc) could be carried out more effectively and cheaply by machines with the advantage of being able to work continually at minimal marginal cost.

A recent report by Deloitte concluded that around one-third of jobs in the UK are at high risk of being displaced by automation over the next two decades, including losses of over 2 million jobs in retail, 1 million jobs in transportation and storage, and 1 million jobs in health and social care.

IT'S DIFFERENT THIS TIME?

So how might automation in the Fourth Industrial Revolution differ fundamentally from that in the past, preventing technological progress from being labour augmenting, at least in the short to medium term? Perhaps the main difference is the speed of technological progress and its adoption.

The technologist Hermann Hauser argues there were nine new General Purpose Technologies (GPTs) with mass applications in the first 19 centuries AD, including the printing press, the factory system, the steam engine, railways, the combustion engine and electricity. GPTs by definition disrupt existing business models and often result in mass job losses in the industries directly affected.

For example, railways initiated the replacement of the horse and carriage, with resultant job losses for coachmen, stable lads, farriers and coach builders. Most of these GPTs took several decades to gain traction, partly because of the large amounts of investment required in plant, machinery and infrastructure. So there was sufficient time for the economy to adapt, thus avoiding periods of mass unemployment.

But the pace of technological progress has sped up rapidly since the 19th century. Hauser identifies eight GPTs in the 20th century alone, including automobiles, aeroplanes, the computer, the internet, biotechnology and nanotechnology. Most recent innovations have been scalable much more quickly and cheaply. They have also been associated with the emergence of giant technology corporations: the combined market capitalisation of Apple, Google, Microsoft, Amazon and Facebook is currently about $2 trillion.

The faster these new waves of technology arise and the cheaper they are to implement, the quicker they are deployed, the broader their diffusion, the faster and deeper the rate of job loss and the less time the economy has to adapt by creating jobs in sectors not disrupted by GPTs.

And some technologies are evolving at lightning speed, such as the ongoing exponential increase in computing power. Computers have evolved in the past 40 years or so from initially being merely calculators to having applications that include smartphones and, in conjunction with the internet and big data, driverless cars, robots and the Internet of Things.

Looking to the future, how might these new GPTs affect the economy? The retail and distribution sector currently has over five million jobs. In the not too distant future, most consumer goods could be ordered online and delivered by either autonomous vehicles or drones. The warehouses in which the goods are stored could be almost entirely automated. Bricks and mortar stores might largely disappear.

HOW LONG BEFORE ROBOTICS STARTS TO DISRUPT THE ECONOMY?

The timing and magnitude of these structural changes to the economy are extremely hard to predict. But the speed at which developed economies adopt robotics technologies is perhaps increased by policies in many countries that seek to reduce income inequality in society, such as increases in minimum wage rates, thereby incentivising R&D and capital expenditure in labour-saving machinery and equipment.

Another factor stimulating global investment in robotics technologies is demographics. Japan has experienced a declining population since 2010, reflecting minimal immigration levels and falling fertility rates since the 1970s. With the population (and labour force) projected to decline by as much as one-fifth over the next 50 years, incentives to invest in automation technology are high. So it is perhaps not surprising that Japan has one of the largest robotics industries in the world, employing over a quarter of a million people. Many types of robot are already commercially available, including humanoid robots, androids, guards and domestic robots, in addition of course to industrial robots. Citizens are increasingly familiar and comfortable interacting with them, including the elderly.

MACHINE LEARNING/ARTIFICIAL INTELLIGENCE

It is often argued that robots typically can only perform a finite number of well-defined tasks, ideally in controlled environments. So robots can be used extensively in warehouses or factories, but not to interact intelligently or empathetically with humans as secretaries, vehicle drivers, nurses, care assistants and so on - that is, in service industries where the majority of lesser-skilled jobs are found. Hence, humans might always have an absolute advantage over machines in carrying out many types of work involving cognitive and communication skills.

In fact, technologists are making great strides in developing machines capable of mimicking human intelligence. A computer has recently beaten one of the world's best players of Go. Given that the average game has an almost infinite number of possible outcomes, the computer must mimic cognitive skills such as intuition and strategy, rather than rely purely on brute force in analysing all plausible move sequences, which is how computers were programmed to beat the world's chess champions nearly twenty years ago. Researchers are confident that widespread economic applications of AI are not too far away. One such example is facial recognition, which has applications in security and elsewhere. A Google AI system called FaceNet was trained on a 260 million image dataset and achieved 86 percent recognition accuracy using only 128 bytes per face.
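To give a rough, hedged sense of how embedding-based recognition of this kind works (this is not FaceNet's code, and the vectors and threshold below are invented): each face is reduced to a short numeric vector, and two faces are judged to match when their vectors are sufficiently close.

```python
import math

# Minimal sketch: compare two face "embeddings" by Euclidean distance.
# Real systems such as FaceNet produce compact (e.g. 128-dimensional)
# vectors from a deep network; here the vectors are tiny stand-ins.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(embedding_a, embedding_b, threshold=1.1):
    """Declare a match when the embeddings are closer than the threshold."""
    return euclidean(embedding_a, embedding_b) < threshold

face_a = [0.12, -0.48, 0.33, 0.05]   # stand-ins for full-length embeddings
face_b = [0.10, -0.45, 0.35, 0.02]

print("Same person?", same_person(face_a, face_b))
```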

CONCLUSION

There is growing concern in the global tech community that developed economies are poorly prepared for the next industrial revolution. That might herald the displacement of millions of predominantly lesser-skilled jobs, the failure of many longstanding businesses which are slow to adapt, a large increase in income inequality in society, and growing industrial concentration associated with the rapid growth of a relatively small number of multi-national technology corporations.

Economists looking at previous industrial revolutions observe that none of these risks have transpired. However, this possibly under-estimates the very different nature of the technological advances currently in progress, in terms of their much broader industrial and occupational applications and their speed of diffusion. It would be a mistake, therefore, to dismiss the risks associated with these new technologies too lightly.

This article was republished courtesy of the World Economic Forum.

Visit link:

Should economists be worried about artificial intelligence? - Eyewitness News

Nvidia’s Jetson platform can power drones with good artificial … – VentureBeat

Nvidia unveiled its Jetson TX2 platform to power drones with good artificial intelligence.

The platform includes the Jetson TX2 embedded AI supercomputer, a chip and its surrounding hardware that can power 4K video drones consuming only about 7.5 watts of power. Drones with the TX2 solution can operate two cameras simultaneously.

The Jetson 3.0 platform was designed for AI at the edge of the network, rather than in the cloud, or Internet-connected data center. Drones with cameras can capture a huge amount of data.

This means Jetson has to handle a lot of the processing of data at the edge, in the device itself, rather than transferring all of that data to the cloud, said Deepu Talla, vice president and general manager of Nvidia's Tegra business unit, at a press event in San Francisco.

"Pretty much every industry we know is being transformed by AI," he said. "We've seen an AI version of Go beat the world's best human."

Services are AI-powered too, with things like Amazon Alexa and OK Google.

All are adopting the Nvidia computing ecosystem, Talla said.

Patrick Moorhead, analyst at Moor Insights & Strategy, said the new platform uses Nvidia's latest Parker-based Tegra chips and can deliver about double the performance of the previous generation of drone chips without increasing the amount of power used.

Above: Teal's Jetson-based drone.

Image Credit: Dean Takahashi

"Like a car, there are many applications that require a lot of compute power at the edge for machine learning and artificial intelligence," Moorhead said. "Based on the quality of their partners, it shows Nvidia has a really good offering."

That's surprising, since most of Nvidia's success has been in things like self-driving cars, he said.

GPU-based deep learning can crunch all of this data much faster than before. Nvidia's GPU enhancements and software frameworks have helped improve AI by two orders of magnitude. Deep learning training used to take months. Now it takes days or hours.

Deploying neural networks usually happens in the cloud, going from a smartphone to a cloud. Nvidia's Tesla can do AI inferencing in the data center.

But with Jetson, Nvidia is migrating AI to the edge. Edge devices like drones often have limited bandwidth, high latency, and a lack of wireless reception.

For confidentiality reasons, you don't want to store a lot of data in the cloud, Talla said.

Nvidia has partners such as Fanuc, Toyota, Starship, Cisco, and others. Nvidia makes it easy for them to adopt the tech with its Jetson software development kit, which sits on top of Linux and the Tegra processors that Nvidia makes.

Cisco showed the Jetson platform working in a video conferencing display dubbed Spark. It allows for much more processing in the display before it sends the data across the Internet to another video conferencing display in another location.

Teal, a Salt Lake City, Utah-based startup, showed a drone with the Jetson platform in it. The drone, also called Teal, uses deep learning software from another startup called Ziff to recognize images, such as people. It could be used in a search and rescue operation, flying over a wide territory and reporting back only when it finds a possible human in a remote area. The drone will cost about $1,200.

Lowe's is also using a Jetson-based robot from a company called Navii. The robot is used in stores to scan shelves to see what has to be replaced from inventory. It can also guide shoppers to different products in the store, using voice recognition.

Nvidia launched its first Jetson platform about 18 months ago. Now the company is adding more AI capabilities on top of Jetson. The Jetson TX1 hardware can deliver 4K video decoding and other high-performance parallel computing tasks, but the Jetson TX2 hardware doubles down on those capabilities.

The TX2 developer kit costs $600 at retail, and it costs $300 for education applications. It is available for preorder now in the U.S. and Europe and will ship in those territories on March 14. It will ship in April in Asia and other regions.

Rivals include Intel, which makes both chips and drones. Kevin Krewell, analyst at Tirias Research, said that Nvidia's solution has wider flexibility and is likely targeted at more high-end solutions, while Intel targets both mid-range and high-end drones.

Talla said that the high-end of the market where Nvidia plays is a good market opportunity, and while you pay more for solutions at the edge, you save on a lot of data processing that happens in the data center because you are sending pre-processed information onward, rather than raw data.

Go here to read the rest:

Nvidia's Jetson platform can power drones with good artificial ... - VentureBeat

How Artificial Intelligence Will Change Everything – WSJ – Wall Street Journal (subscription)


Artificial intelligence is shaping up as the next industrial revolution, poised to rapidly reinvent business, the global economy and how people work and interact ...

Read more here:

How Artificial Intelligence Will Change Everything - WSJ - Wall Street Journal (subscription)

Google’s artificial intelligence can diagnose cancer faster than … – Mirror.co.uk

Making the decision on whether or not a patient has cancer usually involves trained professionals meticulously scanning tissue samples over weeks and months.

But an artificial intelligence (AI) program owned by Alphabet, Google's parent company, may be able to do it much, much faster.

Google is working hard to tell the difference between healthy and cancerous tissue, as well as to discover whether metastasis has occurred.

"Metastasis detection is currently performed by pathologists reviewing large expanses of biological tissues. This process is labour intensive and error-prone," explained DeepMind in a white paper outlining the study.

"We present a framework to automatically detect and localise tumours as small as 100 100 pixels in gigapixel microscopy images sized 100,000100,000 pixels.

"Our method leverages a convolutional neural network (CNN) architecture and obtains state-of-the-art results on the Camelyon16 dataset in the challenging lesion-level tumour detection task."

Such high-level image recognition was first developed for Google's driverless car program, in order to help the vehicles scan for road obstructions.

Now the company has adapted it for the medical field and says it's more accurate than regular human doctors:

"At 8 false positives per image, we detect 92.4% of the tumours, relative to 82.7% by the previous best automated approach. For comparison, a human pathologist attempting exhaustive search achieved 73.2% sensitivity."

Despite this, it's unlikely to replace human pathologists just yet. The software only looks for one thing - cancerous tissue - and is not able to pick up any irregularities that a human doctor could spot.


Read this article:

Google's artificial intelligence can diagnose cancer faster than ... - Mirror.co.uk

Artificial intelligence experts unveil Baxter the MIND CONTROL … – Express.co.uk

The incredible work undertaken by Artificial Intelligence experts has been backed by funding from Boeing and the US National Science Foundation.

A team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University has created a system that allows people to correct robot mistakes instantly with nothing more than their brains.

Using data from an electroencephalography (EEG) monitor that records brain activity, the system can detect if a person notices an error as a robot performs an object-sorting task.

The team's novel machine-learning algorithms enable the system to classify brain waves in the space of 10 to 30 milliseconds.

While the system currently handles relatively simple binary-choice activities, the study's senior author says that the work suggests we could one day control robots in much more intuitive ways.

CSAIL director Daniela Rus told Express.co.uk: "Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word.

"A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars and other technologies we haven't even invented yet."

In the current study the team used a humanoid robot named Baxter from Rethink Robotics, the company led by former CSAIL director and iRobot co-founder Rodney Brooks.

The paper presenting the work was written by BU PhD candidate Andres F. Salazar-Gomez, CSAIL PhD candidate Joseph DelPreto, and CSAIL research scientist Stephanie Gil under the supervision of Rus and BU professor Frank H. Guenther.

The paper was recently accepted to the IEEE International Conference on Robotics and Automation (ICRA) taking place in Singapore this May.

Past work in EEG-controlled robotics has required training humans to think in a prescribed way that computers can recognise.

Rus' team wanted to make the experience more natural, and to do that they focused on brain signals called error-related potentials (ErrPs), which are generated whenever our brains notice a mistake.

As the robot indicates which choice it plans to make, the system uses ErrPs to determine if the human agrees with the decision.
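A hedged, highly simplified sketch of that loop is given below: window the EEG around the moment the robot signals its choice, extract a few features, and run a binary classifier whose output confirms or vetoes the action. It is not the team's algorithm; the features, nearest-mean classifier and data are all invented stand-ins.

```python
import numpy as np

# Minimal sketch: detect an "error" brain response in a short EEG window
# recorded just after the robot signals its choice.

def features(epoch):
    """Crude per-channel features: mean amplitude and peak-to-peak range."""
    return np.concatenate([epoch.mean(axis=1), np.ptp(epoch, axis=1)])

def train_means(epochs, labels):
    """Store the mean feature vector for each class (0 = agree, 1 = error)."""
    feats = np.array([features(e) for e in epochs])
    return {c: feats[labels == c].mean(axis=0) for c in (0, 1)}

def classify(epoch, class_means):
    """Assign the epoch to the nearest class mean."""
    f = features(epoch)
    return min(class_means, key=lambda c: np.linalg.norm(f - class_means[c]))

# Fake training set: 40 epochs of 8 EEG channels x 250 samples (~1 second).
rng = np.random.default_rng(0)
train = rng.normal(size=(40, 8, 250))
train[20:] += 0.5                      # pretend "error" epochs have an offset
labels = np.array([0] * 20 + [1] * 20)

means = train_means(train, labels)
new_epoch = rng.normal(size=(8, 250)) + 0.5
print("Robot should stop:", classify(new_epoch, means) == 1)
```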

Rus added: "As you watch the robot, all you have to do is mentally agree or disagree with what it is doing.

"You don't have to train yourself to think in a certain way - the machine adapts to you, and not the other way around."

The work in progress identified that ErrP signals are extremely faint, which means that the system has to be fine-tuned enough to both classify the signal and incorporate it into the feedback loop for the human operator.

In addition to monitoring the initial ErrPs, the team also sought to detect secondary errors that occur when the system doesn't notice the human's original correction.

Scientist Stephanie Gil said: "If the robot's not sure about its decision, it can trigger a human response to get a more accurate answer.

"These signals can dramatically improve accuracy, creating a continuous dialogue between human and robot in communicating their choices."

While the system cannot yet recognise secondary errors in real time, Gil expects the model to be able to improve to upwards of 90 per cent accuracy once it can.

In addition, since ErrP signals have been shown to be proportional to how egregious the robot's mistake is, the team believes that future systems could extend to more complex multiple-choice tasks.

Salazar-Gomez notes that the system could even be useful for people who can't communicate verbally: a task like spelling could be accomplished via a series of several discrete binary choices, which he likens to an advanced form of the blinking that allowed stroke victim Jean-Dominique Bauby to write his memoir The Diving Bell and the Butterfly.

Wolfram Burgard, a professor of computer science at the University of Freiburg who was not involved in the research, added: "This work brings us closer to developing effective tools for brain-controlled robots and prostheses.

"Given how difficult it can be to translate human language into a meaningful signal for robots, work in this area could have a truly profound impact on the future of human-robot collaboration."

Read the original:

Artificial intelligence experts unveil Baxter the MIND CONTROL ... - Express.co.uk

Budget 2017: Prizes for robotics, artificial intelligence and battery innovators to be announced – The Independent

The Chancellor Philip Hammond will outline plans in Wednesday's Budget to make hundreds of millions of pounds available to scientists and researchers to develop solutions to hi-tech challenges including artificial intelligence and robotics, next generation batteries and new techniques for manufacturing medicines.

The Chancellor will also set out further details on making sure the UK is at the leading edge of 5G mobile phone technology.

Mr Hammond is expected to allocate more than £500 million from the National Productivity Investment Fund (NPIF), which was created in last year's autumn statement to help innovative UK companies lead the way in the new technologies set to transform the world.

£270 million will be earmarked for British businesses and universities to meet specific challenges with huge potential, which will include the use of robots to work in nuclear and offshore power generation, space and deep mining. There will also be cash set aside for companies developing the kind of batteries that will unlock the potential of electric cars.

The National Productivity Investment Fund is already working to upgrade the country's mobile and broadband network, and the Budget will outline the UK's first 5G strategy, including trials spread across leading research institutions. 5G will be significantly faster than current 4G networks. It also has implications for health, with companies developing wearable sensors that can foresee and warn of an imminent stroke or heart attack.

More here:

Budget 2017: Prizes for robotics, artificial intelligence and battery innovators to be announced - The Independent