Daily Archives: May 18, 2020

Artificial Intelligence and IP – WIPO

Posted: May 18, 2020 at 3:46 pm

AI and IP policy

The growth of AI across a range of technical fields raises a number of policy questions with respect to IP. The main focus of those questions is whether the existing IP system needs to be modified to provide balanced protection for machine-created works and inventions, for AI itself, and for the data AI relies on to operate. WIPO has started an open process to lead the conversation regarding IP policy implications.

From stories to reports, news and more, we publish content on the topics most discussed in the field of AI and IP.

In a world in which AI is playing an ever-expanding role, including in the processes of innovation and creativity, Professor Ryan Abbott considers some of the challenges that AI is posing for the IP system.

Saudi inventor Hadeel Ayoub, founder of the London-based startup BrightSign, talks about how she came to develop BrightSign, an AI-based smart glove that allows sign language users to communicate directly with others without the assistance of an interpreter.

How big data, artificial intelligence, and other technologies are changing healthcare.

British-born computer scientist, Andrew Ng, leading thinker on AI, discusses the transformative power of AI, and the measures required to ensure that AI benefits everyone.

AI is set to transform our lives. But what exactly is AI, and what are the techniques and applications driving innovation in this area?

David Hanson, maker of Sophia the Robot and CEO and Founder of Hanson Robotics, shares his vision of a future built around super intelligence.


Business Applications for Artificial Intelligence: An …

Posted: at 3:46 pm

Discussion of artificial intelligence (AI) elicits a wide range of feelings. On one end of the spectrum is fear of job loss spurred by a bot revolution. On the opposite end is excitement about the overblown prospects of what people can achieve with machine augmentation.

But Dr. Mark Esposito wants to root the conversation in reality. Esposito is the co-founder of Nexus Frontier Tech and instructor of Harvard's Artificial Intelligence in Business: Creating Value with Machine Learning, a two-day intensive program.

Rather than thinking about what could be, he says businesses looking to adopt AI should look at what already exists.

AI has become the latest tech buzzword everywhere from Silicon Valley to China. But the first piece of AI, the artificial neuron, was developed in 1943 by scientist Warren McCulloch and logician Walter Pitts. Since then, we've come a long way in our understanding and development of models capable of comprehension, prediction, and analysis.

Artificial intelligence is already widely used in business applications, including automation, data analytics, and natural language processing. Across industries, these three fields of AI are streamlining operations and improving efficiencies.

Automation alleviates repetitive or even dangerous tasks. Data analytics provides businesses with insights never before possible. Natural language processing allows for intelligent search engines, helpful chatbots, and better accessibility for people who are visually impaired.

Beyond these three fields, AI has found many other common uses in business.

Indeed, many experts note that the business applications of AI have advanced to such an extent that we live and work alongside it every day without even realizing it.

In 2018, Harvard Business Review predicted that AI stands to make the greatest impact in marketing services, supply chain management, and manufacturing.

Two years on, we are watching these predictions play out in real time. The rapid growth of AI-powered social media marketing, for instance, makes it easier than ever for brands to personalize the customer experience, connect with their customers, and track the success of their marketing efforts.

Supply chain management is also poised to make major AI-based advances in the next several years. Increasingly, process intelligence technologies will provide companies with accurate and comprehensive insight to monitor and improve operations in real-time.

Other areas where we can expect to see significant AI-based advancements include the healthcare industry and data transparency and security.

On the patient side of the healthcare business, we are likely to see AI help with everything from early detection to immediate diagnosis. On the physician side, AI is likely to play a larger role in streamlining scheduling processes and helping to secure patient records.

Data transparency and security is another area where AI is expected to make a significant difference in the coming years. As customers become aware of just how much data companies are collecting, the demand for greater transparency into what data is collected, how it is used, and how it is secured will only grow.

Additionally, as Esposito notes, there continues to be significant opportunity to grow the use of AI in finance and banking, two sectors with vast quantities of data and tremendous potential for AI-based modernization, but which still rely heavily on antiquated processes.

For some industries, the widespread rollout of AI hinges on ethical considerations to ensure public safety.

While cybersecurity has long been a concern in the tech world, some businesses must now also consider physical threats to the public. In transportation, this is a particularly pressing concern.

For instance, how autonomous vehicles should respond in a scenario in which an accident is imminent is a big topic of debate. Tools like MIT's Moral Machine have been designed to gauge public opinion on how self-driving cars should operate when human harm cannot be avoided.

But the ethics question goes well beyond how to mitigate damage. It leads developers to question whether it is moral to place one human's life above another, and to ask whether factors like age, occupation, and criminal history should determine who is spared in an accident.

Problems like these are why Esposito is calling for a global response to ethics in AI.

"Given the need for specificity in designing decision-making algorithms, it stands to reason that an international body will be needed to set the standards according to which moral and ethical dilemmas are resolved," Esposito says in his World Economic Forum post.

It's important to stress the global aspect of these standards. Countries around the world are engaging in an AI arms race, quickly developing powerful systems. Perhaps too quickly.

If the race to develop artificial intelligence results in negligence in creating ethical algorithms, the damage could be great. International standards can give developers guidelines and parameters that ensure machine systems mitigate risk and damage as well as a human would, if not better.

According to Esposito, there's a lot of misunderstanding in the business world about AI's current capabilities and future potential. At Nexus, he and his partners work with startups and small businesses to adopt AI solutions that can streamline operations or solve problems.

Esposito discovered early on that many business owners assume AI can do everything a person can do, and more. A better approach involves identifying specific use cases.

"The more you learn about the technology, the more you understand that AI is very powerful," Esposito says. "But it needs to be very narrowly defined. If you don't have a narrow scope, it doesn't work."

For companies looking to leverage AI, Esposito says the first step is to look at which parts of your current operations can be digitized. Rather than dreaming up a magic-bullet solution, businesses should consider existing tech that can free up resources or provide new insights.

"The low-hanging fruit is recognizing where in the value chain they can improve operations," Esposito says. "AI doesn't start with AI. It starts at the company level."

For instance, companies that have already digitized payroll will find that they're collecting a lot of data that could help forecast future costs. This allows businesses to hire and operate with more predictability, as well as streamline tasks for accounting.

One company that's successfully integrated AI tech into multiple aspects of its business is Unilever, a consumer goods corporation. In addition to streamlining hiring and onboarding, AI is helping Unilever get the most out of its vast amounts of data.

Data informs much of what Unilever does, from demand forecasts to marketing analytics. The company observed that its data sources were coming from varying interfaces and APIs, according to Diginomica. This both hindered access and made the data unreliable.

In response, Unilever developed its own platforms to store the data and make it easily accessible for its employees. Augmented with Microsoft's Power BI tool, Unilever's platform collects data from both internal and external sources. It stores the data in a universal data lake where it is preserved to be used indefinitely for anything from business logistics to product development.

Amazon is another early adopter. Even before its virtual assistant Alexa was in every other home in America, Amazon was an innovator in using machine learning to optimize inventory management and delivery.

With a fully robust, AI-empowered system in place, Amazon was able to make a successful foray into the food industry via its acquisition of Whole Foods, which now uses Amazon delivery services.

Esposito says this kind of scalability is key for companies looking to develop new AI products. They can then apply the tech to new markets or acquired businesses, which is essential for the tech to gain traction.

Both Unilever and Amazon are exemplary because they're solving current problems with technology that's already available. And they're predicting industry disruption so they can stay ahead of the pack.

Of course, these two examples are large corporations with deep pockets. But Esposito believes that most businesses thinking about AI realistically and strategically can achieve their goals.

Looking ahead from 2020, it is increasingly clear that AI will only work in conjunction with people, not instead of people.

"Every major place where we have multiple dynamics happening can really be improved by these technologies," Esposito says. "And I want to reinforce the fact that we want these technologies to improve society, not displace workers."

To ease fears over job loss, Esposito says business owners can frame the conversation around creating new, more functional jobs. As technologies improve efficiencies and create new insights, new jobs that build on those improvements are sure to arise.

"Jobs are created by understanding what we do and what we can do better," Esposito says.

Additionally, developers should focus on creating tech that is probabilistic, as opposed to deterministic. In a probabilistic scenario, AI could predict how likely a person is to pay back a loan based on their history, then give the lender a recommendation. Deterministic AI would simply make that decision, ignoring any uncertainty.
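A rough sketch of the distinction, in R (the data, feature and threshold below are illustrative assumptions, not anything from the article): the difference is whether the system surfaces its uncertainty or silently acts on it.

# Hypothetical repayment history: a single score summarizing payment history,
# and whether the past loan was repaid (1) or not (0). Values are made up.
loans <- data.frame(
  history_score = c(0.2, 0.4, 0.5, 0.7, 0.8, 0.9),
  repaid        = c(0,   1,   0,   1,   1,   1)
)

# Logistic regression estimates the probability of repayment.
model <- glm(repaid ~ history_score, data = loans, family = binomial)

applicant <- data.frame(history_score = 0.6)
p <- predict(model, newdata = applicant, type = "response")

# Probabilistic use: surface the estimate and leave the decision to the lender.
cat(sprintf("Estimated repayment probability: %.0f%%\n", 100 * p))

# Deterministic use: the system makes the call itself, hiding the uncertainty.
decision <- if (p > 0.5) "approve" else "reject"

The probabilistic version keeps a person in the loop, which is the kind of cooperation Esposito describes next.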

"There needs to be cooperation between machines and people," Esposito says. "But we will never invite machines to make a decision on behalf of people."


Artificial Intelligence Quotes (391 quotes)

Posted: at 3:46 pm

"Why give a robot an order to obey orders - why aren't the original orders enough? Why command a robot not to do harm - wouldn't it be easier never to command it to do harm in the first place? Does the universe contain a mysterious force pulling entities toward malevolence, so that a positronic brain must be programmed to withstand it? Do intelligent beings inevitably develop an attitude problem? (...) Now that computers really have become smarter and more powerful, the anxiety has waned. Today's ubiquitous, networked computers have an unprecedented ability to do mischief should they ever go to the bad. But the only mayhem comes from unpredictable chaos or from human malice in the form of viruses. We no longer worry about electronic serial killers or subversive silicon cabals because we are beginning to appreciate that malevolence - like vision, motor coordination, and common sense - does not come free with computation but has to be programmed in. (...) Aggression, like every other part of human behavior we take for granted, is a challenging engineering problem!" - Steven Pinker, How the Mind Works


MS in Artificial Intelligence | Artificial Intelligence

Posted: at 3:46 pm

The Master of Science in Artificial Intelligence (M.S.A.I.) degree program is offered by the interdisciplinary Institute for Artificial Intelligence. Areas of specialization include automated reasoning, cognitive modeling, neural networks, genetic algorithms, expert databases, expert systems, knowledge representation, logic programming, and natural-language processing. Microelectronics and robotics were added in 2000.

Admission is possible in every semester, but Fall admission is preferable. Applicants seeking financial assistance should apply before February 15, but assistantships are sometimes awarded at other times. Applicants must include a completed application form, three letters of recommendation, official transcripts, Graduate Record Examinations (GRE) scores, and a sample of their scholarly writing on any subject (in English). Only the General Test of the GRE is required for the M.S.A.I. program. International students must also submit TOEFL results and a statement of financial support. Applications must be completed at least six weeks before the proposed registration date.

No specific undergraduate major is required for admission, but admission is competitive. We are looking for students with a strong preparation in one or more relevant background areas (psychology, philosophy, linguistics, computer science, logic, engineering, or the like), a demonstrated ability to handle all types of academic work (from humanities to mathematics), and an excellent command of written and spoken English.

For more information regarding applications, please visit the MS Program Admissions and Information for International Students pages.

Requirements for the M.S.A.I. degree include: interdisciplinary foundational courses in computer science, logic, philosophy, psychology, and linguistics; courses and seminars in artificial intelligence programming techniques, computational intelligence, logic and logic programming, natural-language processing, and knowledge-based systems; and a thesis. There is a final examination covering the program of study and a defense of the written thesis.

For further information on course and thesis requirements, please visit the Course & Thesis Requirements page.

The Artificial Intelligence Laboratories serve as focal points for the M.S.A.I. program. AI students have regular access to PCs running current Windows technology, and a wireless network is available for students with laptops and other devices. The Institute also features facilities for robotics experimentation and a microelectronics lab. The University of Georgia libraries began building strong AI and computer science collections long before the inception of these degree programs. Relevant books and journals are located in the Main and Science libraries (the Science library is conveniently located in the same building complex as the Institute for Artificial Intelligence and the Computer Science Department). The University's library holdings total more than 3 million volumes.

Graduate assistantships, which include a monthly stipend and remission of tuition, are available. Assistantships require approximately 13-15 hours of work per week and permit the holder to carry a full academic program of graduate work. In addition, graduate assistants pay a matriculation fee and all student fees per semester.

For an up-to-date description of tuition and fees for both in-state and out-of-state students, please visit the Bursar's Office website.

On-campus housing, including a full range of University-owned married student housing, is available to students. Student fees include use of a campus-wide bus system and some city bus routes. More information regarding housing is available here: University of Georgia Housing.

The University of Georgia has an enrollment of over 34,000, including approximately 8,000 graduate students. Students are enrolled from all 50 states and more than 100 countries. Currently, there is a very diverse group of students in the AI program. Women and international students are well represented.

Additional information about the Institute and the MSAI program, including policies for current students, can be found in the AI Student Handbook.


What is Artificial Intelligence? | Azure Blog and Updates …

Posted: at 3:46 pm

It has been said that Artificial Intelligence will define the next generation of software solutions. If you are even remotely involved with technology, you will almost certainly have heard the term with increasing regularity over the last few years. It is likely that you will also have heard different definitions for Artificial Intelligence offered, such as:

"The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." - Encyclopedia Britannica

"Intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans." - Wikipedia

How useful are these definitions? What exactly are tasks commonly associated with intelligent beings? For many people, such definitions can seem too broad or nebulous. After all, there are many tasks that we can associate with human beings! What exactly do we mean by intelligence in the context of machines, and how is this different from the tasks that many traditional computer systems are able to perform, some of which may already seem to have some level of intelligence in their sophistication? What exactly makes the Artificial Intelligence systems of today different from sophisticated software systems of the past?

It could be argued that any attempt to define Artificial Intelligence is somewhat futile, since we would first have to properly define intelligence, a word which conjures a wide variety of connotations. Nonetheless, this article attempts to offer a more accessible definition for what passes as Artificial Intelligence in the current vernacular, as well as some commentary on the nature of today's AI systems, and why they might be more aptly referred to as intelligent than previous incarnations.

Firstly, it is interesting and important to note that the technical difference between what was referred to as Artificial Intelligence more than 20 years ago and traditional computer systems is close to zero. Prior attempts to create intelligent systems, known as expert systems at the time, involved the complex implementation of exhaustive rules that were intended to approximate intelligent behavior. For all intents and purposes, these systems did not differ from traditional computers in any drastic way other than having many thousands more lines of code. The problem with trying to replicate human intelligence in this way was that it required far too many rules and ignored something fundamental to the way intelligent beings make decisions, which is very different from the way traditional computers process information.

Let me illustrate with a simple example. Suppose I walk into your office and say the words "Good weekend?" Your immediate response is likely to be something like "yes" or "fine thanks". This may seem like very trivial behavior, but in this simple action you will have immediately demonstrated a behavior that a traditional computer system is completely incapable of. In responding to my question, you have effectively dealt with ambiguity by making a prediction about the correct way to respond. It is not certain that by saying "Good weekend?" I actually intended to ask you whether you had a good weekend. There are several possible intents behind that utterance: it could be a genuine question about your weekend, or a comment about something else entirely (the weekend's football game, say), among other possibilities.

The most likely intended meaning may seem obvious, but suppose that when you respond with "yes", I had responded with "No, I mean it was a good football game at the weekend, wasn't it?" It would have been a surprise, but without even thinking, you will absorb that information into a mental model, correlate the fact that there was an important game last weekend with the fact that I said "Good weekend?", and adjust the probability of the expected response for next time accordingly so that you can respond correctly the next time you are asked the same question. Granted, those aren't the thoughts that will pass through your head! You happen to have a neural network (aka your brain) that will absorb this information automatically and learn to respond differently next time.

The key point is that even when you do respond next time, you will still be making a prediction about the correct way in which to respond. As before, you won't be certain, but if your prediction fails again, you will gather new data, which leads to my suggested definition of Artificial Intelligence, as it stands today:

Artificial Intelligence is the ability of a computer system to deal with ambiguity, by making predictions using previously gathered data, and learning from errors in those predictions in order to generate newer, more accurate predictions about how to behave in the future.

This is a somewhat appropriate definition of Artificial Intelligence because it is exactly what AI systems today are doing, and more importantly, it reflects an important characteristic of human beings which separates us from traditional computer systems: human beings are prediction machines. We deal with ambiguity all day long, from very trivial scenarios such as the above, to more convoluted scenarios that involve playing the odds on a larger scale. This is in one sense the essence of reasoning. We very rarely know whether the way we respond to different scenarios is absolutely correct, but we make reasonable predictions based on past experience.

Just for fun, let's illustrate the earlier example with some code in R! If you are not familiar with R but would like to follow along, see the instructions on installation. First, let's start with some data that represents information in your mind about when a particular person has said "Good weekend?" to you.

In this example, we are saying that GoodWeekendResponse is our score label (i.e. it denotes the appropriate response that we want to predict). For modelling purposes, there have to be at least two possible values: in this case, "yes" and "no". For brevity, the response in most cases is "yes".
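A minimal sketch of what such a data frame could look like, with assumed column names and values (the article's own listing may differ):

# Illustrative history of the "Good weekend?" exchange. Each row records the
# context in which the question was asked and the response that turned out to
# be appropriate. Column names and values are assumptions for this sketch.
history <- data.frame(
  Weather             = c("sunny", "sunny", "sunny", "rainy", "rainy", "rainy"),
  GoodWeekendResponse = c("yes",   "yes",   "no",    "yes",   "yes",   "no"),
  stringsAsFactors    = FALSE
)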

We can fit the data to a logistic regression model:
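A minimal way to do that in base R, under the same assumed data frame (this mirrors the step described rather than reproducing the article's exact listing):

# Binomial logistic regression: models P(GoodWeekendResponse = "yes" | Weather).
# factor() turns the character response into the two-level outcome glm() expects.
model <- glm(factor(GoodWeekendResponse) ~ Weather,
             data = history, family = binomial)
summary(model)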

Now what happens if we try to make a prediction with that model, where the expected response is different from what we have previously recorded? In this case, I am expecting the response to be "Go England!". Below is some more code to add the prediction; for illustration, we just hardcode the new input data:
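A sketch of that step under the same assumptions as above; note that once a third response such as "Go England!" appears, a two-class logistic regression no longer suffices, so the refit below switches to a multinomial model from the nnet package (an assumption of this sketch, not necessarily the article's approach):

library(nnet)  # multinom() handles more than two response classes

# A new situation in which the question is asked; the response I actually
# wanted turns out to be "Go England!".
new_obs <- data.frame(Weather = "sunny", stringsAsFactors = FALSE)
predict(model, newdata = new_obs, type = "response")
# With P("yes") around 2/3, the model answers "yes" - the wrong guess here.

# Feed the observed response back into the data and refit, so the model can
# learn that "Go England!" is now a possible answer in this context.
history <- rbind(history,
                 data.frame(Weather             = "sunny",
                            GoodWeekendResponse = "Go England!",
                            stringsAsFactors    = FALSE))
model <- multinom(factor(GoodWeekendResponse) ~ Weather, data = history)
predict(model, newdata = new_obs, type = "probs")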

The initial prediction of "yes" was wrong, but note that in addition to predicting against the new data, we also incorporated the actual response back into our existing model. Also note that the new response value "Go England!" has been learnt, with a probability of 50 percent based on current data. If we run the same piece of code again, the probability that "Go England!" is the right response based on prior data increases, so this time our model chooses to respond with "Go England!", because it has finally learnt that this is most likely the correct response.

Do we have Artificial Intelligence here? Well, clearly there are different levels of intelligence, just as there are with human beings. There is, of course, a good deal of nuance that may be missing here, but nonetheless this very simple program will be able to react, with limited accuracy, to data coming in related to one very specific topic, as well as learn from its mistakes and make adjustments based on predictions, without the need to develop exhaustive rules to account for the different responses that are expected for different combinations of data. This is the same principle that underpins many AI systems today, which, like human beings, are mostly sophisticated prediction machines. The more sophisticated the machine, the more it is able to make accurate predictions based on a complex array of data used to train various models, and the most sophisticated AI systems of all are able to continually learn from faulty assertions in order to improve the accuracy of their predictions, thus exhibiting something approximating human intelligence.

You may be wondering, based on this definition, what the difference is between machine learning and Artificial Intelligence. After all, isn't this exactly what machine learning algorithms do: make predictions based on data using statistical models? This very much depends on the definition of machine learning, but ultimately most machine learning algorithms are trained on static data sets to produce predictive models, so machine learning algorithms only facilitate part of the dynamic in the definition of AI offered above. Additionally, machine learning algorithms, much like the contrived example above, typically focus on specific scenarios, rather than working together to create the ability to deal with ambiguity as part of an intelligent system. In many ways, machine learning is to AI what neurons are to the brain: a building block of intelligence that can perform a discrete task, but that may need to be part of a composite system of predictive models in order to really exhibit the ability to deal with ambiguity across an array of behaviors that might approximate intelligent behavior.

There are a number of practical advantages in building AI systems, but as discussed and illustrated above, many of these advantages are pivoted around time to market. AI systems enable the embedding of complex decision making without the need to build exhaustive rules, which traditionally can be very time consuming to procure, engineer and maintain. Developing systems that can learn and build their own rules can significantly accelerate organizational growth.

Microsoft's Azure cloud platform offers an array of discrete and granular services in the AI and Machine Learning domain that allow AI developers and data engineers to avoid reinventing the wheel and to consume reusable APIs. These APIs allow AI developers to build systems which display the type of intelligent behavior discussed above.

If you want to dive in and learn how to start building intelligence into your solutions with the Microsoft AI platform, including pre-trained AI services like Cognitive Services and the Bot Framework, as well as deep learning tools like Azure Machine Learning, Visual Studio Code Tools for AI, and Cognitive Toolkit, visit AI School.


What Are the Advantages of Artificial Intelligence …

Posted: at 3:46 pm

The general benefit of artificial intelligence, or AI, is that it replicates decisions and actions of humans without human shortcomings, such as fatigue, emotion and limited time. Machines driven by AI technology are able to perform consistent, repetitious actions without getting tired. It is also easier for companies to get consistent performance across multiple AI machines than it is across multiple human workers.

Companies incorporate AI into production and service-based processes. In a manufacturing business, AI machines can churn out a high, consistent level of production without needing a break or taking time off like people. This efficiency improves the cost-basis and earning potential for many companies. Mobile devices use intuitive, voice-activated AI applications to offer users assistance in completing tasks. For example, users of certain mobile phones can ask for directions or information and receive a vocal response.

The premise of AI is that it models human intelligence. Though imperfections exist, there is often a benefit to AI machines making decisions that humans struggle with. AI machines are often programmed to follow statistical models in making decisions. Humans may struggle with personal implications and emotions when making similar decisions. The famous scientist Stephen Hawking, who suffered from a motor neuron disease, used AI-based predictive software in his communication machine.


Powering the Artificial Intelligence Revolution – HPCwire

Posted: at 3:45 pm

It has been observed by many that we are at the dawn of the next industrial revolution: the Artificial Intelligence (AI) revolution. The benefits delivered by this intelligence revolution will be many: improved diagnostics and precision treatment in medicine, better weather forecasting, and self-driving vehicles, to name a few. However, one of the costs of this revolution is going to be increased electrical consumption by the data centers that will power it. Data center power usage is projected to double over the next 10 years and is on track to consume 11% of worldwide electricity by 2030. Beyond AI adoption, other drivers of this trend are the movement to the cloud and the increased power usage of CPUs, GPUs and other server components, which are becoming more powerful and smarter.

AI's two basic elements, training and inference, each consume power differently. Training involves computationally intensive matrix operations over very large data sets, often measured in terabytes to petabytes. Examples of these data sets range from online sales data to captured video feeds to ultra-high-resolution images of tumors. AI inference is computationally much lighter, but can run indefinitely as a service, which draws a lot of power when hit with a large number of requests. Think of a facial recognition application for security in an office building: it runs continuously, but would stress the compute and storage resources at 8:00 am and again at 5:00 pm as people come and go from work.

However, getting a good handle on power usage in AI is difficult. Energy consumption is not among the standard metrics tracked by job schedulers, and while such tracking can be set up, it is complicated and vendor-dependent. This means that most users are flying blind when it comes to energy usage.

To map out AI energy requirements, Dr. Miro Hodak led a team of Lenovo engineers and researchers that looked at the energy cost of an often-used AI workload. The study, "Towards Power Efficiency in Deep Learning on Data Center Hardware" (registration required), was recently presented at the 2019 IEEE International Conference on Big Data and was published in the conference proceedings. This work looks at the energy cost of training the ResNet50 neural net on the ImageNet dataset of more than 1.3 million images, using a Lenovo ThinkSystem SR670 server equipped with 4 Nvidia V100 GPUs. AC data from the server's power supply indicates that 6.3 kWh of energy, enough to power an average home for six hours, is needed to fully train this AI model. In practice, trainings like these are repeated multiple times to tune the resulting models, resulting in energy costs that are actually several times higher.

The study breaks down the total energy into its components as shown in Fig. 1. As expected, the bulk of the energy is consumed by the GPUs. However, given that the GPUs handle all of the computationally intensive parts, their 65% share of the energy is lower than expected. This shows that simplistic estimates of AI energy costs using only GPU power are inaccurate and miss significant contributions from the rest of the system. Besides the GPUs, the CPU and memory account for almost a quarter of the energy use, and 9% of the energy is spent on AC-to-DC power conversion (in line with the 80 PLUS Platinum certification of the SR670's power supplies).

The study also investigated ways to decrease energy cost through system tuning, without changing the AI workload. We found that two types of system settings make the most difference: UEFI settings and OS-level GPU settings. ThinkSystem servers provide four UEFI operating modes: Favor Performance, Favor Energy, Maximum Performance and Minimum Power. As shown in Table 1, the last option is the best and provides up to 5% energy savings. On the GPU side, 16% of the energy can be saved by capping the V100 frequency to 1005 MHz, as shown in Figure 2. Taken together, our study showed that system tunings can decrease energy usage by 22% while increasing runtime by 14%. Alternatively, if this runtime cost is unacceptable, a second set of tunings, which saves 18% of the energy while increasing time by only 4%, was also identified. This demonstrates that there is a lot of room on the system side for improvements in energy efficiency.
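Putting those two tuning options side by side (a small illustrative calculation in R using only the percentages quoted above, with the baseline runtime normalized to 1):

# Figures quoted above: 6.3 kWh for the baseline run, and two tuning options.
baseline_kwh <- 6.3
tunings <- data.frame(
  option        = c("max energy savings", "runtime-friendly"),
  energy_saving = c(0.22, 0.18),  # fraction of energy saved vs. baseline
  runtime_incr  = c(0.14, 0.04)   # fractional increase in training time
)
tunings$energy_kwh   <- baseline_kwh * (1 - tunings$energy_saving)
tunings$runtime_rel  <- 1 + tunings$runtime_incr              # baseline = 1
tunings$energy_delay <- tunings$energy_kwh * tunings$runtime_rel
tunings
# A lower energy-delay product indicates a better joint energy/runtime trade-off.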

Energy usage in HPC has been a visible challenge for over a decade, and Lenovo has long been a leader in energy-efficient computing, whether through our innovative Neptune liquid-cooled system designs or through Energy-Aware Runtime (EAR) software, a technology developed in collaboration with the Barcelona Supercomputing Center (BSC). EAR analyzes user applications to find the optimum CPU frequencies at which to run them. For now, EAR is CPU-only, but investigations into extending it to GPUs are ongoing. The results of our study show that this is a very promising way to bring energy savings to both HPC and AI.

Enterprises are not used to grappling with the large power profiles that AI requires in the way HPC users have become accustomed to. Scaling out these AI solutions will only make that problem more acute. The industry is beginning to respond: MLPerf, currently the leading collaborative project for AI performance evaluation, is preparing new specifications for power efficiency. For now, the effort is limited to inference workloads and will most likely be voluntary, but it represents a step in the right direction.

So, in order to enjoy those precise weather forecasts and self-driving cars, we'll need to solve the power challenges they create. Today, as the power profile of CPUs and GPUs surges ever upward, enterprise customers face a choice among three factors: system density (the number of servers in a rack), performance and energy efficiency. Indeed, many enterprises are accustomed to filling up rack after rack with low-cost, adequately performing systems that have limited to no impact on the electric bill. Unfortunately, until the power dilemma is solved, those users must be content with choosing only two of those three factors.


Artificial intelligence is struggling to cope with how the world has changed – ZDNet

Posted: at 3:45 pm

From our attitude towards work to our grasp of what two metres looks like, the coronavirus pandemic has made us rethink how we see the world. But while we've found it hard to adjust to the new reality, it's been even harder for the narrowly designed artificial intelligence models that have been created to help organisations make decisions. Based on data that described the world before the crisis, these models won't be making correct predictions anymore, pointing to a fundamental problem in the way AI is being designed.

David Cox, IBM director of the MIT-IBM Watson AI Lab, explains that faulty AI is particularly problematic in the case of so-called black box predictive models: those algorithms which work in ways that are not visible, or understandable, to the user. "It's very dangerous," Cox says, "if you don't understand what's going on internally within a model in which you shovel data on one end to get a result on the other end. The model is supposed to embody the structure of the world, but there is no guarantee that it will keep working if the world changes."

The COVID-19 crisis, according to Cox, has only once more highlighted what AI experts have argued for decades: that algorithms should be more explainable.


For example, if you were building a computer program that was a complete black box, aimed at predicting what the stock market would be like based on past data, there is no guarantee it's going to continue to produce good predictions in the current coronavirus crisis, he argues.

What you actually need to do is build a broader model of the economy that acknowledges supply and demand, understands supply chains, and incorporates that knowledge, which is closer to something that an economist would do. Then you can reason about the situation more transparently, he says.

"Part of the reason why those models are hard to trust with narrow AIs is because they don't have that structure. If they did it would be much easier for a model to provide an explanation for why they are making decisions. These models are experiencing challenges now. COVID-19 has just made it very clear why that structure is important," he warns.

It's important not only because the technology would perform better and gain in reliability, but also because businesses would be far less reluctant to adopt AI if they trusted the tool more. Cox pulls out his own statistics on the matter: while 95% of companies believe that AI is key to their competitive advantage, only 5% say they've extensively implemented the technology.

While the numbers differ from survey to survey, the conclusion has been the same for some time now: there remains a significant gap between the promise of AI and its reality for businesses. And part of the reason that industry is struggling to deploy the technology boils down to a lack of understanding of AI. If you build a great algorithm but can't explain how it works, you can't expect workers to incorporate the new tool in their business flow. "If people don't understand or trust those tools, it's going to be a lost cause," says Cox.

Explaining AI is one of the main focuses of Cox's work. The MIT-IBM Watson AI Lab, which he co-directs, comprises 100 AI scientists across the US university and IBM Research, and is now in its third year of operation. The Lab's motto, which comes up first thing on its website, is self-explanatory: "AI science for real-world impact".

Back in 2017, IBM announced a $240 million investment over ten years to support research by the firm's own researchers, as well as MIT's, in the newly-founded Watson AI Lab. From the start, the collaboration's goal has had a strong industry focus, with an idea to unlock the potential of AI for "business and society". The lab's focus is not on "narrow AI", which is the technology in its limited format that most organizations know today; instead the researchers should be striving for "broad AI". Broad AI can learn efficiently and flexibly, across multiple tasks and data streams, and ultimately has huge potential for businesses. "Broad AI is next," is the Lab's promise.

The only way to achieve broad AI, explains Cox, is to bridge between research and industry. The reason that AI, like many innovations, remains stubbornly stuck in the lab, is because the academics behind the technology struggle to identify and respond to the real-world needs of businesses. Incentives are misaligned; the result is that organizations see the potential of the tool, but struggle to use it. AI exists and it is effective, but is still not designed for business.


Before he joined IBM, Cox spent ten years as a professor at Harvard University. "Coming from academia and now working for IBM, my perspective on what's important has completely changed," says the researcher. "It has given me a much clearer picture of what's missing."

The partnership between IBM and MIT is a big shift from the traditional way that academia functions. "I'd rather be there in the trenches, developing those technologies directly with the academics, so that we can immediately take it back home and integrate it into our products," says Cox. "It dramatically accelerates the process of getting innovation into businesses."

IBM has now expanded the collaboration to some of its customers through a member program, which means that researchers in the Lab benefit from the input of players from different industries. From Samsung Electronics to Boston Scientific to banking company Wells Fargo, companies in various fields and locations can explain their needs and the challenges they encounter to the academics working in the Watson AI Lab. In turn, the members can take the intellectual property generated in the Lab and run with it even before it becomes an IBM product.

Cox is adamant, however, that the MIT-IBM Watson AI Lab was also built with blue-sky research compatibility in mind. The researchers in the lab are working on fundamental, cross-industry problems that need to be solved in order to make AI more applicable. "Our job isn't to solve customer problems," says Cox. "That's not the right use for the tool that is MIT. There are brilliant people in MIT that can have a hugely disruptive impact with their ideas, and we want to use that to resolve questions like: why is it that AI is so hard to use or impact in business?"

Explainability of AI is only one area of focus. There is also AutoAI, for example, which consists of using AI to build AI models and would let business leaders engage with the technology without having to hire expensive, highly skilled engineers and software developers. Then there is the issue of data labeling: according to Cox, up to 90% of a data science project consists of meticulously collecting, labeling and curating the data. "Only 10% of the effort is the fancy machine-learning stuff," he says. "That's insane. It's a huge inhibitor to people using AI, let alone to benefiting from it."


Doing more with less data, in fact, was one of the key features of the Lab's latest research project, dubbed Clevrer, in which an algorithm can recognize objects and reason about their behaviors in physical events from videos. This model is a neuro-symbolic one, meaning that the AI can learn unsupervised, by looking at content and pairing it with questions and answers; ultimately, it requires far less training data and manual annotation.

All of these issues have been encountered one way or another not only by IBM, but also by the companies that signed up to the Lab's member program. "Those problems just appear again and again," says Cox, whether you are operating in electronics, med-tech or banking. Hearing similar feedback from all areas of business only emboldened the Lab's researchers to double down on the problems that mattered.

The Lab has about 50 projects running at any given time, carefully selected every year by both MIT and IBM on the basis that they should be both intellectually interesting and effective in tackling the problem of broad AI. Cox maintains that within this portfolio, some ideas are very ambitious and can even border on blue-sky research; they are balanced, on the other hand, by other projects that are more likely to provide near-term value.

Although more prosaic than the idea of preserving purely blue-sky research, putting industry and academia in the same boat might indeed be the most pragmatic way to accelerate the adoption of innovation and make sure AI delivers on its promise.


An AI future set to take over post-Covid world – The Indian Express

Posted: at 3:45 pm

Updated: May 18, 2020 10:03:39 pm

Written by Seuj Saikia

Rabindranath Tagore once said, "Faith is the bird that feels the light when the dawn is still dark." The darkness that looms over the world at this moment is the curse of the COVID-19 pandemic, while the bird of human freedom finds itself caged under lockdown, unable to fly. Enthused by the beacon of hope, human beings will soon start picking up the pieces of a shared future for humanity, but perhaps only to find a new, unfamiliar world order with far-reaching consequences that transcend society, politics and economy.

Crucially, a technology that had till now been crawling, or at best walking slowly, will now start sprinting. In fact, a paradigm shift in the economic relationships of mankind is going to be witnessed in the form of accelerated adoption of artificial intelligence (AI) technologies in the modes of production of goods and services. A fourth Industrial Revolution, as the AI era is referred to, had already been experienced before the pandemic through the backward linkages of cloud computing and big data. However, the imperative of continued social distancing has made an AI-driven economic world order today's reality.

Setting aside the oft-discussed prophecies of the Robo-Human tussle, even if we simply focus on the present pandemic context, we see millions of students accessing their education through ed-tech apps, mothers buying groceries on apps and making cashless payments through fintech platforms, and employees attending video conferences on yet other apps. None of this is a new phenomenon, but the scale at which it is happening is unparalleled in human history. The alternate universe of AI, machine learning, cloud computing, big data, 5G and automation is getting closer to us every day. And so is a clash between humans (labour) and robots (plant and machinery).

This clash might very well be fuelled by automation. Any Luddite will recall the misadventures of the 19th-century textile mills. However, the automation that we are talking about now is founded on the citadel of artificially intelligent robots. Eventually, this might merge the two factors of production into one, thereby making labour irrelevant. As factories around the world start to reboot post COVID-19, there will be hard realities to contend with: a shortage of migrant labourers across the entire supply chain, variations of social distancing induced by fears of a second virus wave, and the overall health concerns of humans at work. All this combined could end up sparking the fire of automation, resulting in subsequent job losses and possible reallocation and reskilling of human resources.

In this context, a potential counter to such employment upheavals is the idea of cash transfers to the population in the form of Universal Basic Income (UBI). As drastic changes in the production processes lead to a more cost-effective and efficient modern industrial landscape, the surplus revenue that is subsequently earned by the state would act as a major source of funds required by the government to run UBI. Variants of basic income transfer schemes have existed for a long time and have been deployed to unprecedented levels during this pandemic. Keynesian macroeconomic measures are increasingly being seen as the antidote to the bedridden economies around the world, suffering from near-recession due to the sudden ban on economic activities. Governments would have to be innovative enough to pump liquidity into the system to boost demand without harming the fiscal discipline. But what separates UBI from all these is its universality, while others remain targeted.

This new economic world order would widen the cracks of existing geopolitical fault lines, particularly between the US and China, the two behemoths of the AI realm. Datanomics has taken such a high place in the valuation spectrum that the most valued companies of the world are tech giants like Apple, Google, Facebook, Alibaba and Tencent. Interestingly, they are also the ones at the forefront of AI innovations. Data has become the new oil. What transports data are not pipelines but fibre optic cables and associated communication technologies. The ongoing fight over the introduction of 5G technology, central to automation and remote command-and-control architecture, might see a new phase of hostility, especially after the controversial role played by the secretive Chinese state in the COVID-19 crisis.

The issues affecting common citizens (privacy, national security, rising inequality) will take on newer dimensions. It is pertinent to mention that AI is not all bad: as an imperative change that human civilisation is going to experience, it has its advantages. Take the COVID-19 crisis as an example. Amidst all the chaos, big data has enabled countries to do contact tracing effectively, and 3D printers produced much-needed PPE at the local level in the absence of the usual supply chains. That is why the World Economic Forum (WEF) argues that agility, scalability and automation will be the buzzwords for this new era of business, and those who have these capabilities will be the winners.

But there are losers in this, too. In this case, the developing world would be the biggest loser. The problem of inequality, which has already reached epic proportions, could be worsened further in an AI-driven economic order. The need of the hour is to prepare ourselves and develop strategies that would mitigate such risks and avert any impending humanitarian disaster. To do so, in the words of computer scientist and entrepreneur Kai-Fu Lee, the author of AI Superpowers, we have to give centrality to our heart and focus on the care economy, which is largely unaccounted for in the national narrative.

(The writer is assistant commissioner of income tax, IRS. Views are personal)



Artificial intelligence-based imaging reconstruction may lead to incorrect diagnoses, experts caution – Radiology Business

Posted: at 3:45 pm

Artificial intelligence-based techniques used to reconstruct medical images may actually be leading to incorrect diagnoses.

That's according to the results of a new investigation led by experts at the University of Cambridge. Scientists there devised a series of tests to assess such image reconstruction and discovered numerous artefacts and other errors, according to their study, published May 11 in the Proceedings of the National Academy of Sciences.

This issue seemed to persist across different types of AI, they noted, and may not be easily remedied.

"There's been a lot of enthusiasm about AI in medical imaging, and it may well have the potential to revolutionize modern medicine; however, there are potential pitfalls that must not be ignored," co-author Anders Hansen, PhD, from Cambridge's Department of Applied Mathematics and Theoretical Physics, said in a statement. "We've found that AI techniques are highly unstable in medical imaging, so that small changes in the input may result in big changes in the output."

To reach their conclusions, Hansen and coinvestigators from Norway, Portugal, Canada and the United Kingdom used several assessments to pinpoint flaws in AI algorithms. They targeted CT, MR and nuclear magnetic resonance imaging, and tested them based on instabilities tied to movement, small structural changes, and those related to the number of samples.
