
Category Archives: Ai

Artificial Intelligence Identifies Autozone And Lowe's Among Today's Top Buys – Forbes

Posted: June 11, 2021 at 11:53 am


For the second day in a row, investors shook off hotter-than-expected inflation. Despite the consumer price index skyrocketing in May by 5% from a year ago, surpassing the estimated 4.7% and marking the index's fastest rise since August 2008, investors largely overlooked it. The 10-year Treasury yield fell to 1.44% after trading as high as 1.77% earlier in the year, somewhat easing fears and suggesting that inflation may only be transitory. Investors also continued to cheer Thursday's jobless claims data, which again hit a pandemic-era low. The Dow Jones gained 100 points, the S&P 500 added 0.2% to its record high, and the Nasdaq ticked up 0.1%. The Dow is down 0.8% for the week, but the S&P 500 is up 0.2%, and the Nasdaq is up 1.5%. For investors looking to find the best opportunities, the deep learning algorithms at Q.ai have crunched the data to give you a set of Top Buys. Our Artificial Intelligence (AI) systems assessed each firm on parameters of Technicals, Growth, Low Volatility Momentum, and Quality Value to find the best Top Buys.

Sign up for the free Forbes AI Investor newsletter here to join an exclusive AI investing community and get premium investing ideas before markets open.

Autozone is our first Top Buy today. Autozone is the largest aftermarket automotive parts and accessories retailer in the United States, and also has stores in Mexico, Puerto Rico and Brazil. Our AI systems rated Autozone A in Technicals, C in Growth, A in Low Volatility Momentum, and A in Quality Value. The stock closed down 0.89% to $1374.53 on volume of 209,990 vs its 10-day price average of $1395.19 and its 22-day price average of $1442.04, and is up 16.45% for the year. Revenue grew by 12.9% in the last fiscal year and grew by 27.1% over the last three fiscal years, Operating Income grew by 18.86% in the last fiscal year and grew by 39.32% over the last three fiscal years, and EPS grew by 26.59% in the last fiscal year and grew by 86.71% over the last three fiscal years. Revenue was $12631.97M in the last fiscal year compared to $11221.08M three years ago, Operating Income was $2501.58M in the last fiscal year compared to $2134.32M three years ago, and EPS was $71.93 in the last fiscal year compared to $48.77 three years ago. The stock is also trading with a Forward 12M P/E of 15.69.
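
As a quick sanity check on the valuation figure, the forward P/E can be inverted to back out the earnings the market is implicitly pricing in; the implied EPS below is our own arithmetic from the quoted price and multiple, not a number reported by Q.ai or Forbes:

\[
\text{implied forward 12M EPS} \approx \frac{\text{price}}{\text{forward P/E}} = \frac{\$1{,}374.53}{15.69} \approx \$87.6
\]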

Simple moving average of Autozone Inc (AZO)
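
The 10-day and 22-day price averages cited for each pick are simple moving averages: the arithmetic mean of the last N daily closes. Below is a minimal sketch of that calculation; the price series and window size are illustrative placeholders, not data from Q.ai's models.

```python
def simple_moving_average(closes, window):
    """Arithmetic mean of the most recent `window` closing prices."""
    if len(closes) < window:
        raise ValueError("not enough data points for this window")
    recent = closes[-window:]
    return sum(recent) / window

# Hypothetical daily closes, oldest to newest (not real AZO data).
closes = [1402.0, 1398.5, 1391.2, 1405.7, 1399.9,
          1388.4, 1385.0, 1379.6, 1381.3, 1374.53]
print(round(simple_moving_average(closes, 10), 2))  # 10-day SMA of the sample series
```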

Bunge Ltd is our next Top Buy today. Bunge Ltd operates as a holding company that engages in the supply and transportation of agricultural commodities. Our AI systems rated the company B in Technicals, A in Growth, C in Low Volatility Momentum, and B in Quality Value. The stock closed down 1.42% to $87.53 on volume of 591,239 vs its 10-day price average of $88.55 and its 22-day price average of $88.26, and is up 33.21% for the year. Revenue grew by 9.15% in the last fiscal year, Operating Income grew by 70.61% in the last fiscal year and grew by 180.12% over the last three fiscal years, and EPS grew by 87.16% in the last fiscal year and grew by 780.26% over the last three fiscal years. Revenue was $41404.0M in the last fiscal year compared to $45743.0M three years ago, Operating Income was $1412.0M in the last fiscal year compared to $860.0M three years ago, EPS was $7.72 in the last fiscal year compared to $1.64 three years ago, and ROE was 17.86% in the last year compared to 3.91% three years ago. The stock is also trading with a Forward 12M P/E of 13.15.

Simple moving average of Bunge Ltd (BG)

Spx Flow Inc is our third Top Buy today. Spx Flow is a manufacturing company that provides equipment for HVAC systems and constructs holistic production lines and unit operations to build competitively optimal chemical processing systems. Our AI systems rated the company C in Technicals, B in Growth, A in Low Volatility Momentum, and B in Quality Value. The stock closed down 2.14% to $64.79 on volume of 121,397 vs its 10-day price average of $67.86 and its 22-day price average of $68.12, and is up 11.25% for the year. Revenue grew by 5.5% in the last fiscal year, Operating Income grew by 22.64% in the last fiscal year, and EPS grew by 421.36% in the last fiscal year. Revenue was $1350.6M in the last fiscal year compared to $1593.9M three years ago, Operating Income was $97.6M in the last fiscal year compared to $135.8M three years ago, EPS was $0.14 in the last fiscal year compared to $1.02 three years ago, and ROE was 4.48% in the last year compared to 1.1% three years ago. Forward 12M Revenue is expected to grow by 1.45% over the next 12 months, and the stock is trading with a Forward 12M P/E of 24.6.

Simple moving average of Spx Flow Inc (FLOW)

Lowe's Cos Inc is our fourth Top Buy today. Lowe's is a big-box home improvement retailer, ranking just behind Home Depot as the second-largest home improvement retailer in the country. Our AI systems rated Lowe's B in Technicals, C in Growth, A in Low Volatility Momentum, and B in Quality Value. The stock closed down 0.95% to $188.78 on volume of 3,905,835 vs its 10-day price average of $191.25 and its 22-day price average of $193.33, and is up 17.71% for the year. Revenue grew by 5.3% in the last fiscal year and grew by 32.3% over the last three fiscal years, Operating Income grew by 11.51% in the last fiscal year and grew by 144.39% over the last three fiscal years, and EPS grew by 18.29% in the last fiscal year and grew by 222.8% over the last three fiscal years. Revenue was $89597.0M in the last fiscal year compared to $71309.0M three years ago, Operating Income was $10892.0M in the last fiscal year compared to $4970.0M three years ago, EPS was $7.75 in the last fiscal year compared to $2.84 three years ago, and ROE was 342.33% in the last year compared to 48.63% three years ago. The stock is also trading with a Forward 12M P/E of 17.46.

Simple moving average of Lowe's Cos Inc (LOW)

Spectrum Brands Holdings is our final Top Buy today. The company engages in the manufacture, marketing, and distribution of consumer products across segments such as Global Batteries and Appliances, Global Pet Supplies, Home and Garden, Hardware and Home Improvement, and Global Auto Care. Our AI systems rated the company B in Technicals, C in Growth, B in Low Volatility Momentum, and B in Quality Value. The stock closed down 0.53% to $86.13 on volume of 174,328 vs its 10-day price average of $87.46 and its 22-day price average of $89.26, and is up 10.28% for the year. Revenue grew by 12.24% in the last fiscal year and grew by 16.83% over the last three fiscal years, Operating Income grew by 38.96% in the last fiscal year and grew by 36.26% over the last three fiscal years, and EPS grew by 217.57% in the last fiscal year. Revenue was $3964.2M in the last fiscal year compared to $3808.7M three years ago, Operating Income was $388.3M in the last fiscal year compared to $396.0M three years ago, EPS was $2.18 in the last fiscal year compared to $20.74 three years ago, and ROE was 5.37% in the last year compared to 24.15% three years ago. The stock is also trading with a Forward 12M P/E of 16.78.

Simple moving average of Spectrum Brands Holdings Inc (SPB)

Liked what you read? Sign up for our free Forbes AI Investor Newsletter here to get AI-driven investing ideas weekly. For a limited time, subscribers can join an exclusive Slack group to get these ideas before markets open.

See the article here:

Artificial Intelligence Identifies Autozone And Lowe's Among Today's Top Buys - Forbes

Posted in Ai | Comments Off on Artificial Intelligence Identifies Autozone And Lowe's Among Today's Top Buys – Forbes

How AI can truly advance healthcare and research, and where it’s gone wrong – Healthcare IT News

Posted: May 20, 2021 at 4:42 am

Artificial intelligence and machine learning can be used to advance healthcare and accelerate life sciences research. And there are many companies on the market today with AI offerings to do just that.

Derek Baird is president of North America at Sensyne Health, which offers AI-based remote patient monitoring for healthcare provider organizations and helps life sciences companies develop new medicines.

Baird believes some large companies have missed the mark on AI and ultimately dismantled public trust in these types of technologies, but that some companies have cracked the code by starting with the basics. He also believes AI success hinges on solving non-glamorous issues like data normalization, interoperability, clinical workflow integration and change management.

Healthcare IT News interviewed Baird to discuss the role of AI in healthcare today and how the technology can solve common problems and advance research.

Q: How can artificial intelligence and machine learning be used to advance healthcare and accelerate life sciences research? Where is it happening now, and what are the heavy lifts for the future?

A: AI is having a profound impact today on the ways we research drugs, conduct clinical trials and understand diseases. And while I think the current role of AI is just the tip of a very big iceberg, I think that as an industry we need to be much more careful about the ways we describe the promise of AI in healthcare.

There has been so much hyperbole around AI that I think we tend to forget that these technologies are just part of the healthcare equation, not some silver bullet that will suddenly solve everything. Biology is hard, complex and still very mysterious in many ways, but AI is already showing promise by allowing us to sift through vast amounts of data much faster and more intelligently than we could before.

One of the ways we are seeing the promise of AI come to life today is in its ability to help us go beyond symptomology and into a deeper understanding of how diseases work in individuals and populations. Let's look at COVID-19.

Our big challenge with COVID-19 is not diagnosis, but that when we have a positive patient, we don't know how sick they will get. We have an understanding of the clinical characteristics of the disease, but the risk factors underlying the transition from mild to severe remain poorly understood. Using AI, it is possible to analyze millions of patient records at a speed and level of detail that humans just cannot come close to.

We have been able to correlate many of the factors that contribute to severe cases, and are now able to predict those who will most likely be admitted to the ICU, who will need ventilation and what their chances of survival are.

In addition to a powerful individual and population-level predictive capability, these analyses have also given us a great start in understanding the mechanisms of the disease that can in turn accelerate the development and testing of therapeutics.

That is just one example. There are many more. The last 12 months have been a time of rapidly accelerating progress for AI in healthcare, enabling more coherent diagnoses for poorly understood diseases, personalized treatment plans based on the genetics of treatment response, and of course, drug discovery.

AI is being used in labs right now to help discover novel drugs and new uses for existing drugs, and to do so faster and cheaper than ever before, freeing up precious resources.

There are many technical heavy lifts, but innovation in the field of AI right now is truly incredible, and is progressing exponentially. I think the bigger lift right now is trust. The AI industry has done a disservice to itself and science by relentlessly overhyping it, obfuscating the way it works, overstating its role in the overall healthcare equation and raising fears around what is being done with everyone's data.

We need to start talking about AI in terms of what it is really doing today and its role in science and care, and we need to be much more transparent about data: how we get it, use it and generally ensure we are using it intelligently, responsibly and with respect for patient privacy.

Q: You say that some large organizations missed the mark on AI and ultimately dismantled public trust in these types of technologies. Please elaborate. You believe AI success hinges on solving non-glamorous issues like data normalization, interoperability, clinical workflow integration and change management. Why?

A: Companies spent billions of dollars amassing unimaginably vast data sets and promised super-intelligent systems that could predict, diagnose and treat disease better than humans. Public expectations were out of whack, but so were the expectations of the healthcare organizations that invested in these solutions.

I think Big Tech operating in healthcare has not been helpful overall, not just in setting unreasonable expectations, but often in putting profit before privacy, and by bringing the bias of seeing people as products, "users" to be monetized, rather than patients to be helped.

Public and industry confidence needs to be restored, and we need to correct the asymmetry between societal benefit and the ambitions of multinational technology platforms. In order to achieve that, the life sciences industry, clinicians, hospitals and patients need to know that data has been ethically sourced, and is secure, anonymous and being used for the direct benefit of the individuals who shared it.

Patients and provider organizations are rightfully concerned about how their data is secured, handled and kept private, and we have made it a top priority to build a business model with transparency at its center, being clear with stakeholders from the start about what data we are using and how it will be used.

As an industry, we have to be clear, not just about the breakthrough medical developments we achieve, but about the specific types of real-world evidence we are using, such as genetic markers, heart rates and MRI images, and how we protect it throughout the process.

Once we get these basics of trust and transparency right, we can begin to talk again about more aspirational plans for AI. This means investing in robust data storage solutions, collaborating with regulators and policymakers, and educating the wider industry on the power of anonymized data.

These kinds of initiatives and partnerships will ensure we are all meeting the highest standards and allow us to rebuild longer-term trust.

Q: In your experience, what kinds of results is healthcare seeing from using clinical AI to support medical research, therapeutic development, personalized care and population health-level analyses?

A: With AI, pharma companies can collect, store and analyze large data sets at a far quicker rate than by manual processes. This enables them to carry out research faster and more efficiently, based on data about genetic variation from many patients, and develop targeted therapies effectively. In addition, it gives a clearer view on how specific groups of patients with certain shared characteristics react to treatments, helping to precisely map the right quantities and doses of treatments to prescribe.

For example, do all patients with heart failure respond the same to the standard course of treatment? Clinical AI has told us the answer is no, by dividing patients into subgroups with similar traits and looking at the variations in treatment response.

You need AI to break a population down based on many traits, and groupings of traits,to get this kind of answer, because the level of complexity quickly becomes too cumbersome for human processing. In this case, sophisticated patient stratification was used to improve clinical trial design and ultimately ensure heart failure patients are receiving the right course of treatment.

Comprehensive analysis of de-identified patient data using AI and machine learning has the ability to drastically transform the healthcare industry. Ultimately, we want to prevent disease, and by having more information about why, how and in which people diseases develop, we can introduce preventative measures and treatments much earlier, sometimes even before a patient starts to show symptoms.

AI is also increasingly being used for operational purposes in hospitals. For example, during the pandemic, AI was used to predict demand for mechanical ventilators.

AI will continue to be a driving force behind future breakthroughs. There are still many challenges that lie ahead for personalized medicine, and still a way to go for it to be perfected, but as AI becomes more widely adopted in medicine, a future of workable, effective and personalized healthcare may certainly be achievable.

Twitter: @SiwickiHealthIT. Email the writer: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.

Follow this link:

How AI can truly advance healthcare and research, and where it's gone wrong - Healthcare IT News

Posted in Ai | Comments Off on How AI can truly advance healthcare and research, and where it’s gone wrong – Healthcare IT News

114 Milestones In The History Of Artificial Intelligence (AI) – Forbes

Posted: at 4:42 am

Sixty-five years ago, 10 computer scientists convened at Dartmouth College in Hanover, N.H., for a workshop on artificial intelligence, defined a year earlier in the proposal for the workshop as "making a machine behave in ways that would be called intelligent if a human were so behaving."

It was the event that initiated AI as a research discipline, which grew to encompass multiple approaches, from the symbolic AI of the 1950s and 1960s to the statistical analysis and machine learning of the 1970s and 1980s to today's deep learning, the statistical analysis of big data. But the preoccupation with developing practical methods for making machines behave as if they were humans emerged as early as seven centuries ago.

HAL (Heuristically programmed ALgorithmic computer) 9000, a sentient artificial general intelligence computer and star of the 1968 film 2001: A Space Odyssey

1308: Catalan poet and theologian Ramon Llull publishes Ars generalis ultima (The Ultimate General Art), further perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts.

1666: Mathematician and philosopher Gottfried Leibniz publishes Dissertatio de arte combinatoria (On the Combinatorial Art), following Ramon Llull in proposing an alphabet of human thought and arguing that all ideas are nothing but combinations of a relatively small number of simple concepts.

1726: Jonathan Swift publishes Gulliver's Travels, which includes a description of the Engine, a machine on the island of Laputa (and a parody of Llull's ideas): "a Project for improving speculative Knowledge by practical and mechanical Operations." By using this "Contrivance," "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study."

1755: Samuel Johnson defines intelligence in A Dictionary of the English Language as "Commerce of information; notice; mutual communication; account of things distant or discreet."

1763: Thomas Bayes develops a framework for reasoning about the probability of events. Bayesian inference will become a leading approach in machine learning.
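
As a gloss for readers new to the framework: in modern notation (not Bayes' own), Bayes' rule updates the probability of a hypothesis H after observing evidence E, and machine-learning systems apply this update repeatedly as data arrives:

\[
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
\]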

1854: George Boole argues that logical reasoning could be performed systematically in the same manner as solving a system of equations.

1865: Richard Millar Devens describes in the Cyclopædia of Commercial and Business Anecdotes how the banker Sir Henry Furnese profited by receiving and acting upon information prior to his competitors: "Throughout Holland, Flanders, France, and Germany, he maintained a complete and perfect train of business intelligence."

1898: At an electrical exhibition in the recently completed Madison Square Garden, Nikola Tesla demonstrates the world's first radio-controlled vessel. The boat was equipped with, as Tesla described it, "a borrowed mind."

1910: Belgian lawyers Paul Otlet and Henri La Fontaine establish the Mundaneum, where they wanted to gather together all the world's knowledge and classify it according to their Universal Decimal Classification.

1914: The Spanish engineer Leonardo Torres y Quevedo demonstrates the first chess-playing machine, capable of king and rook against king endgames without any human intervention.

1921: Czech writer Karel Čapek introduces the word "robot" in his play R.U.R. (Rossum's Universal Robots). The word "robot" comes from the word "robota" (work).

1925: Houdina Radio Control releases a radio-controlled driverless car, travelling the streets of New York City.

1927: The science-fiction film Metropolis is released. It features a robot double of a peasant girl, Maria, which unleashes chaos in Berlin of 2026. It was the first robot depicted on film, inspiring the Art Deco look of C-3PO in Star Wars.

1929: Makoto Nishimura designs Gakutensoku, Japanese for "learning from the laws of nature," the first robot built in Japan. It could change its facial expression and move its head and hands via an air pressure mechanism.

1937: British science fiction writer H.G. Wells predicts that "the whole human memory can be, and probably in short time will be, made accessible to every individual" and that "any student, in any part of the world, will be able to sit with his [microfilm] projector in his own study at his or her convenience to examine any book, any document, in an exact replica."

1943: Warren S. McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity" in the Bulletin of Mathematical Biophysics. This influential paper, in which they discussed networks of idealized and simplified artificial neurons and how they might perform simple logical functions, will become the inspiration for computer-based neural networks (and later deep learning) and their popular description as "mimicking the brain."
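
A McCulloch-Pitts unit sums weighted binary inputs and fires when the sum reaches a threshold. The sketch below is a modern paraphrase of that idea, showing how fixed weights and a threshold reproduce the logical AND and OR functions; the particular weights are illustrative, not taken from the 1943 paper.

```python
def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: both inputs must be active for the unit to fire.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print("AND", x, mcp_neuron(x, weights=[1, 1], threshold=2))

# Logical OR: any active input fires the unit.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print("OR", x, mcp_neuron(x, weights=[1, 1], threshold=1))
```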

1947: Statistician John W. Tukey coins the term "bit" to designate a binary digit, a unit of information stored in a computer.

1949: Edmund Berkeley publishes Giant Brains: Or Machines That Think, in which he writes: "Recently there have been a good deal of news about strange giant machines that can handle information with vast speed and skill... These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves... A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think."

1949: Donald Hebb publishes Organization of Behavior: A Neuropsychological Theory, in which he proposes a theory about learning based on conjectures regarding neural networks and the ability of synapses to strengthen or weaken over time.
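
Hebb's idea is often paraphrased as "cells that fire together wire together." One common later formalization (our notation, not Hebb's own) makes the change in a connection weight proportional to the product of the presynaptic activity x_i and the postsynaptic activity y_j, scaled by a small learning rate:

\[
\Delta w_{ij} = \eta \, x_i \, y_j
\]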

1950: Claude Shannon's "Programming a Computer for Playing Chess" is the first published article on developing a chess-playing computer program.

1950: Alan Turing publishes "Computing Machinery and Intelligence," in which he proposes "the imitation game," which will later become known as the Turing Test.

1951: Marvin Minsky and Dean Edmunds build SNARC (Stochastic Neural Analog Reinforcement Calculator), the first artificial neural network, using 3,000 vacuum tubes to simulate a network of 40 neurons.

1952: Arthur Samuel develops the first computer checkers-playing program and the first computer program to learn on its own.

August 31, 1955: The term "artificial intelligence" is coined in a proposal for a "2 month, 10 man study of artificial intelligence" submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories). The workshop, which took place a year later, in July and August 1956, is generally considered as the official birthdate of the new field.

December 1955: Herbert Simon and Allen Newell develop the Logic Theorist, the first artificial intelligence program, which eventually would prove 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica.

1957: Frank Rosenblatt develops the Perceptron, an early artificial neural network enabling pattern recognition based on a two-layer computer learning network. The New York Times reported the Perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." The New Yorker called it a "remarkable machine" that was "capable of what amounts to thought."
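
Rosenblatt's perceptron adjusts its weights whenever it misclassifies an example, nudging them toward the correct label. The snippet below is a minimal modern rendering of that error-driven update rule for a single linear threshold unit; the toy data, epoch count and learning rate are illustrative assumptions.

```python
def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Learn weights and bias for a single linear threshold unit."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            error = target - pred          # 0 when correct, +/-1 when wrong
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Toy linearly separable problem: logical OR.
w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 1, 1, 1])
print(w, b)
```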

1957: In the movie Desk Set, when a methods engineer (Spencer Tracy) installs the fictional computer EMERAC, the head librarian (Katharine Hepburn) tells her anxious colleagues in the research department: "They can't build a machine to do our job; there are too many cross-references in this place."

1958: Hans Peter Luhn publishes "A Business Intelligence System" in the IBM Journal of Research and Development. It describes an "automatic method to provide current awareness services to scientists and engineers."

1958: John McCarthy develops the programming language Lisp, which becomes the most popular programming language used in artificial intelligence research.

1959: Arthur Samuel coins the term "machine learning," reporting on programming a computer "so that it will learn to play a better game of checkers than can be played by the person who wrote the program."

1959: Oliver Selfridge publishes "Pandemonium: A paradigm for learning" in the Proceedings of the Symposium on Mechanization of Thought Processes, in which he describes a model for a process by which computers could recognize patterns that have not been specified in advance.

1959: John McCarthy publishes "Programs with Common Sense" in the Proceedings of the Symposium on Mechanization of Thought Processes, in which he describes the Advice Taker, a program for solving problems by manipulating sentences in formal languages with the ultimate objective of making "programs that learn from their experience as effectively as humans do."

1961: The first industrial robot, Unimate, starts working on an assembly line in a General Motors plant in New Jersey.

1961: James Slagle develops SAINT (Symbolic Automatic INTegrator), a heuristic program that solved symbolic integration problems in freshman calculus.

1962: Statistician John W. Tukey writes in "The Future of Data Analysis": "Data analysis, and the parts of statistics which adhere to it, must... take on the characteristics of science rather than those of mathematics... data analysis is intrinsically an empirical science."

1964: Daniel Bobrow completes his MIT PhD dissertation titled "Natural Language Input for a Computer Problem Solving System" and develops STUDENT, a natural language understanding computer program.

August 16, 1964: Isaac Asimov writes in the New York Times: "The I.B.M. exhibit at the [1964 World's Fair] is dedicated to computers, which are shown in all their amazing complexity, notably in the task of translating Russian into English. If machines are that smart today, what may not be in the works 50 years hence? It will be such computers, much miniaturized, that will serve as the brains of robots... Communications will become sight-sound and you will see as well as hear the person you telephone. The screen can be used not only to see the people you call but also for studying documents and photographs and reading passages from books."

1965: Herbert Simon predicts that "machines will be capable, within twenty years, of doing any work a man can do."

1965: Hubert Dreyfus publishes "Alchemy and AI," arguing that the mind is not like a computer and that there were limits beyond which AI would not progress.

1965: I.J. Good writes in "Speculations Concerning the First Ultraintelligent Machine" that "the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

1965: Joseph Weizenbaum develops ELIZA, an interactive program that carries on a dialogue in English on any topic. Weizenbaum, who wanted to demonstrate the superficiality of communication between man and machine, was surprised by the number of people who attributed human-like feelings to the computer program.

1965: Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi start working on DENDRAL at Stanford University. The first expert system, it automated the decision-making process and problem-solving behavior of organic chemists, with the general aim of studying hypothesis formation and constructing models of empirical induction in science.

1966: Shakey the robot is the first general-purpose mobile robot to be able to reason about its own actions. In a 1970 Life magazine article about this "first electronic person," Marvin Minsky is quoted saying with certitude: "In from three to eight years we will have a machine with the general intelligence of an average human being."

1968: The film 2001: A Space Odyssey is released, featuring HAL 9000, a sentient computer.

1968: Terry Winograd develops SHRDLU, an early natural language understanding computer program.

1969: Arthur Bryson and Yu-Chi Ho describe backpropagation as a multi-stage dynamic system optimization method. A learning algorithm for multi-layer artificial neural networks, it has contributed significantly to the success of deep learning in the 2000s and 2010s, once computing power had advanced sufficiently to accommodate the training of large networks.

1969: Marvin Minsky and Seymour Papert publish Perceptrons: An Introduction to Computational Geometry, highlighting the limitations of simple neural networks. In an expanded edition published in 1988, they responded to claims that their 1969 conclusions significantly reduced funding for neural network research: "Our version is that progress had already come to a virtual halt because of the lack of adequate basic theories... by the mid-1960s there had been a great many experiments with perceptrons, but no one had been able to explain why they were able to recognize certain kinds of patterns and not others."

1970: The first anthropomorphic robot, the WABOT-1, is built at Waseda University in Japan. It consisted of a limb-control system, a vision system and a conversation system.

1971: Michael S. Scott Morton publishes Management Decision Systems: Computer-Based Support for Decision Making, summarizing his studies of the various ways by which computers and analytical models could assist managers in making key decisions.

1971: Arthur Miller writes in The Assault on Privacy that "Too many information handlers seem to measure a man by the number of bits of storage capacity his dossier will occupy."

1972: MYCIN, an early expert system for identifying bacteria causing severe infections and recommending antibiotics, is developed at Stanford University.

1973: James Lighthill reports to the British Science Research Council on the state of artificial intelligence research, concluding that "in no part of the field have discoveries made so far produced the major impact that was then promised," leading to drastically reduced government support for AI research.

1976: Computer scientist Raj Reddy publishes "Speech Recognition by Machine: A Review" in the Proceedings of the IEEE, summarizing the early work on natural language processing (NLP).

1978: The XCON (eXpert CONfigurer) program, a rule-based expert system assisting in the ordering of DEC's VAX computers by automatically selecting the components based on the customer's requirements, is developed at Carnegie Mellon University.

1979: The Stanford Cart successfully crosses a chair-filled room without human intervention in about five hours, becoming one of the earliest examples of an autonomous vehicle.

1979: Kunihiko Fukushima develops the neocognitron, a hierarchical, multilayered artificial neural network.

1980: I.A. Tjomsland applies Parkinson's First Law to the storage industry: "Data expands to fill the space available."

1980: Wabot-2 is built at Waseda University in Japan, a musician humanoid robot able to communicate with a person, read a musical score and play tunes of average difficulty on an electronic organ.

1981: The Japanese Ministry of International Trade and Industry budgets $850 million for the Fifth Generation Computer project. The project aimed to develop computers that could carry on conversations, translate languages, interpret pictures, and reason like human beings.

1981: The Chinese Association for Artificial Intelligence (CAAI) is established.

1984: Electric Dreams is released, a film about a love triangle between a man, a woman and a personal computer.

1984: At the annual meeting of AAAI, Roger Schank and Marvin Minsky warn of the coming "AI winter," predicting an imminent bursting of the AI bubble (which did happen three years later), similar to the reduction in AI investment and research funding in the mid-1970s.

1985: The first business intelligence system is developed for Procter & Gamble by Metaphor Computer Systems to link sales information and retail scanner data.

1986: The first driverless car, a Mercedes-Benz van equipped with cameras and sensors, built at Bundeswehr University in Munich under the direction of Ernst Dickmanns, drives at up to 55 mph on empty streets.

October 1986: David Rumelhart, Geoffrey Hinton, and Ronald Williams publish "Learning representations by back-propagating errors," in which they describe a new learning procedure, back-propagation, for networks of neuron-like units.
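
Back-propagation applies the chain rule to push the output error backward through the network, layer by layer, yielding the gradient of the loss with respect to every weight. The sketch below shows one gradient step for a tiny one-hidden-layer network with sigmoid units and a squared-error loss; the architecture, starting weights and learning rate are illustrative choices, not values from the 1986 paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny network: 2 inputs -> 2 hidden sigmoid units -> 1 sigmoid output.
x = [1.0, 0.0]
target = 1.0
w_hidden = [[0.15, 0.20], [0.25, 0.30]]   # w_hidden[j][i]: input i -> hidden unit j
w_out = [0.40, 0.45]                      # hidden unit j -> output
lr = 0.5

# Forward pass.
h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
y = sigmoid(sum(w * hj for w, hj in zip(w_out, h)))

# Backward pass: chain rule for the squared error 0.5 * (y - target)**2.
delta_out = (y - target) * y * (1 - y)
delta_hidden = [delta_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]

# Gradient-descent weight updates.
w_out = [w_out[j] - lr * delta_out * h[j] for j in range(2)]
w_hidden = [[w_hidden[j][i] - lr * delta_hidden[j] * x[i] for i in range(2)]
            for j in range(2)]
print(y, w_out, w_hidden)
```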

1987: The video Knowledge Navigator, accompanying Apple CEO John Sculley's keynote speech at Educom, envisions a future in which knowledge applications would be accessed by smart agents working over networks connected to massive amounts of digitized information.

1988: Judea Pearl publishes Probabilistic Reasoning in Intelligent Systems. His 2011 Turing Award citation reads: "Judea Pearl created the representational and computational foundation for the processing of information under uncertainty. He is credited with the invention of Bayesian networks, a mathematical formalism for defining complex probability models, as well as the principal algorithms used for inference in these models. This work not only revolutionized the field of artificial intelligence but also became an important tool for many other branches of engineering and the natural sciences."

1988: Rollo Carpenter develops the chatbot Jabberwacky to "simulate natural human chat in an interesting, entertaining and humorous manner." It is an early attempt at creating artificial intelligence through human interaction.

1988: Members of the IBM T.J. Watson Research Center publish "A statistical approach to language translation," heralding the shift from rule-based to probabilistic methods of machine translation, and reflecting a broader shift to machine learning based on statistical analysis of known examples, not comprehension and understanding of the task at hand. (IBM's Project Candide, successfully translating between English and French, was based on 2.2 million pairs of sentences, mostly from the bilingual proceedings of the Canadian parliament.)

1988: Marvin Minsky and Seymour Papert publish an expanded edition of their 1969 book Perceptrons. In "Prologue: A View from 1988" they wrote: "One reason why progress has been so slow in this field is that researchers unfamiliar with its history have continued to make many of the same mistakes that others have made before them."

1989: Yann LeCun and other researchers at AT&T Bell Labs successfully apply a backpropagation algorithm to a multi-layer neural network, recognizing handwritten ZIP codes. Given the hardware limitations at the time, it took about three days to train the network, still a significant improvement over earlier efforts.

March 1989: Tim Berners-Lee writes "Information Management: A Proposal" and circulates it at CERN.

1990: Rodney Brooks publishes "Elephants Don't Play Chess," proposing a new approach to AI: building intelligent systems, specifically robots, from the ground up and on the basis of ongoing physical interaction with the environment: "The world is its own best model... The trick is to sense it appropriately and often enough."

October 1990: Tim Berners-Lee begins writing code for a client program, a browser/editor he calls WorldWideWeb, on his new NeXT computer.

1993: Vernor Vinge publishes "The Coming Technological Singularity," in which he predicts that "within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

September 1994: BusinessWeek publishes a cover story on database marketing: "Companies are collecting mountains of information about you, crunching it to predict how likely you are to buy a product, and using that knowledge to craft a marketing message precisely calibrated to get you to do so... many companies believe they have no choice but to brave the database-marketing frontier."

1995: Richard Wallace develops the chatbot A.L.I.C.E (Artificial Linguistic Internet Computer Entity), inspired by Joseph Weizenbaum's ELIZA program, but with the addition of natural language sample data collection on an unprecedented scale, enabled by the advent of the Web.

1997: Sepp Hochreiter and Jürgen Schmidhuber propose Long Short-Term Memory (LSTM), a type of recurrent neural network used today in handwriting recognition and speech recognition.

October 1997: Michael Cox and David Ellsworth publish "Application-controlled demand paging for out-of-core visualization" in the Proceedings of the IEEE 8th Conference on Visualization. They start the article with: "Visualization provides an interesting challenge for computer systems: data sets are generally quite large, taxing the capacities of main memory, local disk, and even remote disk. We call this the problem of big data. When data sets do not fit in main memory (in core), or when they do not fit even on local disk, the most common solution is to acquire more resources." It is the first article in the ACM digital library to use the term "big data."

1997: Deep Blue becomes the first computer chess-playing program to beat a reigning world chess champion.

1998: The first Google index has 26 million Web pages.

1998: Dave Hampton and Caleb Chung create Furby, the first domestic or pet robot.

1998: Yann LeCun, Yoshua Bengio and others publish papers on the application of neural networks to handwriting recognition and on optimizing backpropagation.

October 1998: K.G. Coffman and Andrew Odlyzko publish "The Size and Growth Rate of the Internet." They conclude that "the growth rate of traffic on the public Internet, while lower than is often cited, is still about 100% per year, much higher than for traffic on other networks. Hence, if present growth trends continue, data traffic in the U.S. will overtake voice traffic around the year 2002 and will be dominated by the Internet."

2000: Google's index of the Web reaches the one-billion mark.

2000: MIT's Cynthia Breazeal develops Kismet, a robot that could recognize and simulate emotions.

2000: Honda's ASIMO robot, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in a restaurant setting.

October 2000: Peter Lyman and Hal R. Varian at UC Berkeley publish How Much Information? It is the first comprehensive study to quantify, in computer storage terms, the total amount of new and original information (not counting copies) created in the world annually and stored in four physical media: paper, film, optical (CDs and DVDs), and magnetic. The study finds that in 1999, the world produced about 1.5 exabytes of unique information, or about 250 megabytes for every man, woman, and child on earth. It also finds that a vast amount of unique information is created and stored by individuals (what it calls the "democratization of data") and that not only is digital information production the largest in total, it is also the most rapidly growing. Calling this finding "dominance of digital," Lyman and Varian state that "even today, most textual information is born digital, and within a few years this will be true for images as well." A similar study conducted in 2003 by the same researchers found that the world produced about 5 exabytes of new information in 2002 and that 92% of the new information was stored on magnetic media, mostly in hard disks.
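
The per-capita figure follows from dividing the annual total by a world population of roughly six billion at the time; this back-of-the-envelope check is ours, not a calculation reproduced from the Berkeley study:

\[
\frac{1.5 \times 10^{18}\ \text{bytes}}{6 \times 10^{9}\ \text{people}} = 2.5 \times 10^{8}\ \text{bytes per person} \approx 250\ \text{megabytes per person}
\]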

2001: A.I. Artificial Intelligence is released, a Steven Spielberg film about David, a childlike android uniquely programmed with the ability to love.

2003: Paro, a therapeutic robot baby harp seal designed by Takanori Shibata of the Intelligent System Research Institute of Japan's AIST, is selected as a "Best of COMDEX" finalist.

2004: The first DARPA Grand Challenge, a prize competition for autonomous vehicles, is held in the Mojave Desert. None of the autonomous vehicles finished the 150-mile route.

2006: Oren Etzioni, Michele Banko, and Michael Cafarella coin the term "machine reading," defining it as "an inherently unsupervised autonomous understanding of text."

2006: Geoffrey Hinton publishes "Learning Multiple Layers of Representation," summarizing the ideas that have led to multilayer neural networks that contain top-down connections and training them to generate sensory data rather than to classify it, i.e., the new approaches to deep learning.

2006: The Dartmouth Artificial Intelligence Conference: The Next Fifty Years (AI@50) commemorates the 50th anniversary of the 1956 workshop. The conference director concludes: "Although AI has enjoyed much success over the last 50 years, numerous dramatic disagreements remain within the field. Different research areas frequently do not collaborate, researchers utilize different methodologies, and there still is no general theory of intelligence or learning that unites the discipline."

2007: Fei-Fei Li and colleagues at Princeton University start to assemble ImageNet, a large database of annotated images designed to aid in visual object recognition software research.

2007: John F. Gantz, David Reinsel and other researchers at IDC release a white paper titled The Expanding Digital Universe: A Forecast of Worldwide Information Growth through 2010. It is the first study to estimate and forecast the amount of digital data created and replicated each year. IDC estimates that in 2006, the world created 161 exabytes of data and forecasts that between 2006 and 2010, the information added annually to the digital universe will increase more than six fold to 988 exabytes, or doubling every 18 months. According to the 2010 and 2012 releases of the same study, the amount of digital data created annually surpassed this forecast, reaching 1,227 exabytes in 2010, and growing to 2,837 exabytes in 2012. In 2020, IDC estimated that 59,000 exabytes of data would be created, captured, copied, and consumed in the world that year.
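
As a rough consistency check on those two characterizations (our arithmetic, not IDC's): growing 161 exabytes by a factor of six gives roughly the 988-exabyte figure, and doubling every 18 months over the 48 months from 2006 to 2010 lands in the same range:

\[
\frac{988\ \text{EB}}{161\ \text{EB}} \approx 6.1, \qquad 161 \times 2^{48/18} \approx 1{,}020\ \text{EB}
\]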

2009: Hal Varian, Google's Chief Economist, tells the McKinsey Quarterly: "I keep saying the sexy job in the next ten years will be statisticians. People think I'm joking, but who would've guessed that computer engineers would've been the sexy job of the 1990s? The ability to take data, to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it, that's going to be a hugely important skill in the next decades."

See more here:

114 Milestones In The History Of Artificial Intelligence (AI) - Forbes

Posted in Ai | Comments Off on 114 Milestones In The History Of Artificial Intelligence (AI) – Forbes

IBM Think 2021: AI, Automation, Hybrid Cloud and Practical Innovation | eWEEK – eWeek

Posted: at 4:42 am

For the past year and a half, life has been both massively challenging and exhilarating for business technology vendors and their customers and partners. As the COVID-19 pandemic forced companies to fundamentally rethink the way workers, managers and executives performed essential tasks, vendors responded with innovative new solutions and services.

Organizations adopted and deployed those offerings at unheard-of speeds, accomplishing in weeks or months what once would have taken years. The result led to unusual or unique accomplishments. As IBM CEO Arvind Krishna pointed out during his IBM Think 2021 keynote address last week: "I venture to say that 2020 was the first time in history that digital transformation spending accelerated despite GDP declining."

As vaccinations bring the pandemic under control and things return slowly to normal, how will businesses preserve or extend the transformational solutions they adopted? At IBM Think, Krishna and his leadership team offered valuable insights and new solutions to consider.

The announcements at Think 2021 mostly centered on areas that have long been focal points for IBM (and some of its competitors): hybrid cloud, artificial intelligence and quantum computing. What was different this time around was the practical and business value offered by new solutions and features.

Take AI, for one. Many if not most AI projects and efforts center on or have been designed to support large-scale moonshot efforts that underscore their owners' far-sighted vision and willingness to take on big challenges. That can be both dramatic and problematic, given how often these projects' complexities lead to setbacks, delays and failure. There is also a tendency toward "forest for the trees" confusion, manifested by mistaking the results of complementary efforts, such as machine learning, for AI itself.

During its decades-long involvement in AI R&D, IBM has been involved in its own share of moonshot projects. However, the AI solutions announced at Think were more in the line of practical innovations designed to maximize dependable business benefits. For example, AI enhancements drive the new AutoSQL function in IBM's Cloud Pak for Data, which enables customers to run queries against data in hybrid multi-cloud environments (on-premises, private clouds or any public cloud) up to 8X faster and at half the cost of prior solutions. The new intelligent data fabric in Cloud Pak for Data will automate complex management functions by using AI to discover, understand, access and protect information in distributed environments.

Another new AI-powered IBM solution is Watson Orchestrate, which is designed to increase the personal productivity of employees in sales, human resources, operations and other business functions by automating and simplifying business processes. The AI engine in Watson Orchestrate automatically selects and sequences pre-packaged skills required to perform tasks and connects them with associated applications, tools, data and historical details. There are no IT skills required for users. Instead, they can use natural language collaboration tools, such as Slack and email, to initiate work. Watson Orchestrate also connects to popular applications, including Salesforce, SAP and Workday.

Similarly, the new Maximo Mobile solution uses Watson AI to enhance the performance and productivity of field technicians who work on bridges, roads, production lines, power plants, refineries and other physical industrial and infrastructure assets. Technicians can use Maximo Mobile virtually anywhere, even in remote locations, to access operational data, human assistance and digital twins (virtual representations that act as real-time digital counterparts of physical objects or processes) to complete vital tasks.

The practical melding of AI and automation to better manage or perform complex processes was one of the most profound themes at IBM Think. In his keynote, CEO Krishna noted that automation is nothing new: "It's been around for centuries. Industrial automation gave manufacturing companies economies of scale and cost advantage in making things such as cars and household appliances. The most profound economies of scale are no longer only about manufacturing; they're about producing breakthrough ideas by people leveraging technology automation to tap into their knowledge."

Krishna addressed a common concern: that technologically enabled automation will damage or eliminate traditional jobs. "The future is not about how AI is going to replace jobs but how it will change jobs by bringing in what I call AI complementarity. What I mean by that is that AI is very good at accomplishing things that we don't particularly like doing, and vice versa."

Krishna also noted that AI-enabled automation can have a remarkable impact on workers and businesses alike. "Research shows that high-powered automation can help you reclaim up to 50% of your time to focus on what matters most. IDC predicts that by 2025, AI-powered enterprises will see a major increase in customer satisfaction. Let me put a number on it: up to 1.5x higher net promoter scores compared to the competition. Human ingenuity leveraging technology is what is going to drive a competitive advantage today."

This is a profound message for IBM's customers and partners, many of whom have been significantly and negatively impacted by Covid-19. As the pandemic eases and businesses work to regain forward momentum, significantly improving both process efficiency and customer satisfaction would be hugely beneficial.

Of course, AI-infused automation wasn't the only subject highlighted at IBM Think. The company also announced other new solutions focused on making life easier for enterprise IT professionals, including Project CodeNet, a large-scale, open-source dataset comprising 14 million code samples, 500 million lines of code and 55 programming languages. Project CodeNet is designed to enable the understanding and translation of code by AIs and includes tools for source-to-source translation and transitioning legacy codebases to modern code languages. Another new AI-enabled solution, Mono2Micro, is a capability in WebSphere Hybrid Edition that is designed to help enterprises optimize and modernize applications for hybrid clouds.

Not surprisingly, IBM announced significant advancements in its quantum computing efforts. Qiskit Runtime is a new software solution containerized and hosted in the hybrid cloud. In concert with improvements in both the software and processor performance of IBM Q quantum systems, Qiskit Runtime can boost the speed of quantum circuits, the building blocks of quantum algorithms, by 120X, vastly reducing the time required for running complex calculations, such as chemical modeling and financial risk analysis.

Think 2021 featured testimonials by numerous enterprise customers, including Johnson & Johnson, Mission Healthcare, NatWest Bank and CVS Health that underscored the benefits they are achieving with IBM solutions, including hybrid cloud, Watson AI and IT modernization. IBM also unveiled new competencies and skills training in areas including hybrid cloud infrastructure, automation and security. These were developed as part of the $1 billion investment the company has committed to supporting its partner ecosystem.

So, what were the final takeaways from IBM Think 2021? First and foremost, the company and its leadership are focused on helping enterprise customers and partners survive the challenges of the Covid-19 pandemic and preparing them to thrive as business and daily life resumes.

In some cases, companies will hope to return to and regain their past trajectories, and IBM's portfolio of solutions should serve them well. But in many other instances, businesses will be pushing toward a new normal by adopting new and emerging innovations, including AI, advanced automation and hybrid cloud computing. Those organizations should have come away from Think 2021 knowing that IBM has their back, whether it is by providing the offerings they need immediately or investing in new solutions and services that will support future growth.

A final point about IBM's efforts in AI: The messaging at Think 2021 does not mean that the company is abandoning large-scale projects or long-term goals. But rather than focusing mostly or entirely on moonshot projects, the new IBM solutions infused with "AI complementarity" show that the company has its feet firmly on the ground. That business-focused message should and will sit well with IBM's enterprise customers and partners.

See original here:

IBM Think 2021: AI, Automation, Hybrid Cloud and Practical Innovation | eWEEK - eWeek

Posted in Ai | Comments Off on IBM Think 2021: AI, Automation, Hybrid Cloud and Practical Innovation | eWEEK – eWeek

Embracing the rapid pace of AI – MIT Technology Review

Posted: at 4:42 am

In a recent survey, 2021 Thriving in an AI World, KPMG found that across every industry, from manufacturing to technology to retail, the adoption of artificial intelligence (AI) is increasing year over year. Part of the reason is digital transformation is moving faster, which helps companies start to move exponentially faster. But, as Cliff Justice, US leader for enterprise innovation at KPMG, posits, "Covid-19 has accelerated the pace of digital in many ways, across many types of technologies." Justice continues, "This is where we are starting to experience such a rapid pace of exponential change that it's very difficult for most people to understand the progress." But understand it they must, because artificial intelligence is evolving at a very rapid pace.

Justice challenges us to think about AI in a different way, more like a relationship with technology, as opposed to a tool that we program, because, he says, "AI is something that evolves and learns and develops the more it gets exposed to humans." If your business is a laggard in AI adoption, Justice has some cautious encouragement: "[The] AI-centric world is going to accelerate everything digital has to offer."

Business Lab is hosted by Laurel Ruma, editorial director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.

This podcast episode was produced in association with KPMG.

2021 Thriving in an AI World, KPMG

Laurel Ruma: From MIT Technology Review, I'm Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is the rate of artificial intelligence adoption. It's increasing, and fast. A new study from KPMG shows that it's accelerating in specific industries like industrial manufacturing, financial services, and tech. But what happens when you hit the gas pedal but haven't secured everything else? Are you uneasy about the rate of AI adoption in your enterprise?

Two words for you: covid-19 whiplash.

My guest is Cliff Justice, who is the US leader for enterprise innovation for KPMG. He and his group focus on identifying, developing, and deploying the next generation of technologies, services, and solutions for KPMG and its clients. Cliff is a former entrepreneur and is a recognized authority in global sourcing, emerging technology such as AI, intelligent automation, and enterprise transformation. This episode of Business Lab is produced in association with KPMG. Cliff, thank you for joining me on Business Lab.

Cliff Justice: It's great to be here. Thanks for having me.

Laurel: So, we're about to take a look at KPMG's survey results for its 2021 Thriving in an AI World report, which looks across seven industries. Why did KPMG repeat that survey for this year? What did you aim to achieve with this research?

Cliff: Well, artificial intelligence is evolving at a very rapid pace. When we first started covering and investing in artificial intelligence probably seven years ago, it was in a very nascent form. There were not very many use cases. Many of the use cases were based on natural language processing. About 10 years ago was when the first public use case of artificial intelligence made the headlines, with IBM Watson winning Jeopardy. Since then, you've seen a very, very rapid progression. And this whole field is evolving at an exponential pace. So where we are today is very different than where we were a year or two ago.

Laurel: It does seem like just yesterday that IBM was announcing Watson, and the exponential growth of artificial intelligence is everywhere, in our cars, on our phones. We're definitely seeing it in more places than just this one kind of research case of it. One of the headlines from the research is that there's a perception that AI might be moving too fast for the comfort of some decision-makers in their respective industries. What does too fast look like? Is this due to covid-19 whiplash?

Cliff: It's not due to covid whiplash necessarily. The covid environment has accelerated the pace of digital in many ways, across many types of technologies. This is where we are starting to experience such a rapid pace of exponential change that it's very difficult for most people to understand the progress. For any of us, even myself who works in this field, it's very difficult to understand the progress and the pace of change. And getting an enterprise ready, getting the people, the process, the enterprise systems, the risk, the cyber protections prepared for a world that is powered more and more by artificial intelligence, is difficult in normal circumstances. But when you do combine the digital acceleration and adoption that's taking place as a result of covid, along with the exponential development and evolution of artificial intelligence, it's hard to understand the opportunities and threats that are posed to an organization.

Even if one could fully wrap their head around the progress of artificial intelligence and the potential of artificial intelligence, changing an organization and changing the mindset and the culture in a way to adopt and benefit from the opportunities that artificial intelligence poses and also protect against the threats take some time. So, it creates a level of anxiety and caution which is, in my view, well justified.

Laurel: So, speaking of that caution or planning needed to deploy AI, in a previous discussion at MIT Technology Review's EmTech Conference in 2019, you said that companies needed to rethink their ecosystem when deploying AI, meaning partners, vendors, and the rest of their company, to get everybody up to speed. At the time, you mentioned that would be the real challenge. Is that still true? Or do you think now that everything is progressing so quickly, that's the discomfort that some executives may be feeling?

Cliff: Well, thats true. It is still true. The ecosystem that got you to a level in more of an analog-centric world is going to be very different in a more AI-centric world. That AI-centric world is going to accelerate everything digital has to offer. What I mean by digital are the new ways of workingthe digital business models, the new ways of developing and evolving commerce, the ways we interact and exchange ideas with customers and with colleagues and coworkers. All of these are becoming much more digital-centric, and then artificial intelligence becomes one of the mechanisms that evolves and progresses the way we work and the way we interact. And it becomes a little more like a relationship with technology, as opposed to a tool that we program because AI is something that evolves and learns and develops the more it gets exposed to humans.

Now that we have much more humanlike perceptive capabilities, thanks to the evolution of deep learning (by that, today, I mean primarily computer vision), technology is able to take on much more of the world than it could before. So understanding what technology and AI can bring, and how those capabilities can enhance and augment human capabilities, is critical. Reestablishing and redeveloping the ecosystem around your business and around your enterprise is important. I think the bigger and more long-term issue, though, is culture, and it's the culture of the enterprise that one is responsible for. But it's also harnessing the external culture, the adoption, and the way you work with your customers, vendors, suppliers, regulators, and external stakeholders. The mindset evolution is not equal across all of those stakeholder groups. And depending on the industry you're operating in, it could be very unequal in terms of the level of adoption, the level of understanding, and the ability and comfort to work with technology. And as that technology becomes more human-like, and we're seeing that in virtual assistants and those types of technologies, it's going to be a bigger chasm to cross.

Laurel: I really like that phrasing of thinking of AI as a relationship with technology versus a tool, because that really does state your intentions when you're entering this new world, this new relationship, and that you're accepting that constant change. Speaking of the survey and various industries, some of the industries saw a significant increase in AI deployment, like financial, retail, and tech. But was it the need for digital transformation, or covid, or perhaps other factors that really drove that increase?

Cliff: Well, covid has had an acceleration impact across the board. Things that were in motion (whether the adoption of digital technologies, growth, or a change in consumer behavior), all of those trends that were in place before covid, have been accelerated by it. And that includes business models that were on the decline. We saw the trends that were happening in the malls; that's just accelerated. We've seen the adoption of technology that's accelerated. There are industries that covid has less of an effect on, not a zero effect, but less of an effect. Banking and financial services are less affected by covid than retail, hospitality, travel, and logistics. Covid has really accelerated the change that's occurring in those industries.

AI, separate from covid, has a material impact across all of these. And as our survey shows, in industrial manufacturing, the use of robotics, computer vision, and artificial intelligence to speed productivity and improve efficiency has really begun to become mainstream and to operate at scale. Same thing with financial services: consumer interaction has been improved with artificial intelligence in those areas. Technology, not surprisingly, has fully adopted AI, or pretty close to it. And then we've seen a dramatic increase in retail as a result of AI. So online shopping and the ability to predict consumer demand have been strong use cases for AI in those industries.

Laurel: So, the laggards though, laggard industries were healthcare and life sciences at only, I say only, a 37% increase in adoption from last years survey. Thats still a great number. But do you think thats because fighting covid was the priority or perhaps because they are regulated industries, or there was another reason?

Cliff: Regulation is a common theme across those laggards. You have government, you have life sciences, healthcare. Financial services, though, is regulated too, and theyre a large adopter, so it cant be the only thing. I think the hypothesis around covid is probably more plausible because the focus in life sciences has been getting the vaccine out. Even though from our point of view and from what we see, government is a massive adopter. Just in terms of the potential within government, its still behind. But the sheer numbers and the sheer amount of activity thats taking place in government when you compare it to private enterprise is still pretty impressive. Its just that youre dealing with such a large-scale change and a lot more red tape and bureaucracy to make that change within a government enterprise.

Laurel: For sure. You mentioned earlier the industrial manufacturing sector, and in that sector 72% of business leaders said they were influenced by the pandemic to speed AI adoption. What does that actually mean for consumers in that industry, as well as for the sector as a whole?

Cliff: When I look at these numbers, there's not going to be an industry that is not affected by AI. For the industries that are going to adopt it sooner and more rapidly, or that sped up as a result of the pandemic, that has almost all been driven by remote work, the inability to get resources to a location, and the impetus to drive automation, with AI being one of the foundational elements of automation. Because if you look at other parts of the survey where we ask, Where are the biggest benefits?, the answer is found in efficiency and productivity. That's fairly consistent across all industries when you look at where AI is being applied. So automation, productivity, predictive analytics: all of these areas are being driven by these themes around productivity. The use cases differ by industry, but the needs are very similar. The overarching themes and the overarching needs are very similar. Some industries were just impacted by the pandemic differently.

Laurel: Excitingly, maybe one difference in industrial manufacturing, though, as you mentioned, is robotics. So a bit of a hardware play, rather than only software.

Cliff: Right. Yeah, in industrial manufacturing, you're seeing a retooling of factories. You're seeing what some people call the Tesla effect, where there is a focus on the transformation and the automation of factories, where building the factory is almost as important as the product itself. There's a lot of debate and a lot of discussion in that sector around how much to automate, and is there too much automation? I think in some of these public events where you've seen a rapid ramp-up in production where automation was used, you've also seen some backing off of that. Too much technology can actually have counterproductive consequences and impact, because there has to be human involvement in decision-making and the technology just isn't there yet. So, a lot of change is happening in that space. We're seeing a lot of evolution and a lot of new types of technologies. Deep learning is allowing more computer vision and more intelligent automation to take place in the manufacturing process within the factories.

Laurel: Speaking of keeping humans involved in these choices and ideas and technologies, strong cybersecurity is a challenge, really, for everybody, right? But the bad guys are increasingly using AI against companies and enterprises, and your only response and defense is more AI. Do you see cybersecurity specifically being an area that executives across the board accelerate spending for?

Cliff: Well, you're exactly right, cybersecurity is one of the biggest threats as technology advances, whether it's AI powered by classical computing or, five or 10 years down the road, quantum computing made available to governments or to corporations. The security risks are going to continue to accelerate. AI is certainly an offensive tool, but it's a defensive one as well. So we use predictive analytics and AI to predict and defend against threats that are themselves posed by AI, which is increasing the sophistication of penetration, phishing, and other ways to compromise systems. These technologies are sort of in an arms race between, as you said, the good guys and the bad guys. There's no end in sight to that as we start to move into an era of real change, which is going to be underpinned by quantum computing in the future. This will only accelerate, because you will need a new type of post-quantum cryptography to defend against the threats that quantum computers could pose to a security organization.
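As one simplified illustration of the defensive use Cliff describes, consider anomaly detection over login events. This is a generic sketch, not any specific KPMG tooling, and the features and figures are invented for illustration.

```python
# Generic sketch: flagging anomalous login events with an isolation forest.
# Features and numbers are made up purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Each row: [hour of login, failed attempts before success]
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),    # most logins cluster around mid-morning
    rng.poisson(0.2, 500),     # almost no failed attempts
])

detector = IsolationForest(random_state=0).fit(normal_logins)

suspicious = np.array([[3.0, 12.0]])   # 3 a.m., twelve failed attempts first
print(detector.predict(suspicious))     # -1 means the event is flagged as anomalous
```

Real security tooling layers far more signal and context on top of this, but the basic pattern of learning "normal" and flagging departures from it is the same.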

Laurel: It's absolutely amazing how fast, right? As we were saying, exponential growth, especially with quantum computing perhaps around the corner in five or 10 years, that sounds about right. The research, though, does come back and say that a lot of respondents think their companies should have some kind of AI ethics policy and code of conduct, but not many do. Those that do are smaller companies. Do you think it's just a matter of time before everyone does, or even that it becomes a board requirement to have these AI ethics policies?

Cliff: Well, we do know that this is being discussed at the regulatory level. There are significant questions around where the government should step in with regulatory measures and where self-policing AI ethics... How does your marketing organization target behavior in its customer base? And how do you leverage AI to use the psychological profiles to enable sales? There are some ethical decisions that would have to be made around that, for example. The use of facial recognition in consumer environments is well debated and discussed. But the use of AI and the ethical use of AI targeting the psychology of consumers, I think that debate has just started largely this summer with some documentaries that came out that showed how social media is using AI to target consumers with marketing products and how that can be misused and misapplied by the bad guys.

So, yeah, this is just the tip of the iceberg. What were seeing today is just the initial opening statements when it comes to how far should we go with AI and what are the penalties that are applied to those who go further than we should, and are those penalties regulated by the government? Are they social penalties and just exposure or are these things that we need laws and rules that have some teeth for violating these agreed-upon ethics, whatever they may be?

Laurel: It's a bit of a push-me, pull-you situation, right? Because the technology is advancing really quickly, but societal norms and regulations may be lagging a bit. And at the same time, companies are in some cases not adopting AI as quickly, or are having problems staffing these AI initiatives. So, how are companies trying to keep up with talent acquisition, and should enterprises start looking, or perhaps have they already been looking, at upskilling or training current employees to use AI as a new skill?

Cliff: Yeah, these are very hard problems. If you look at the study and dive in, you'll see the difference between large companies and small companies. I mean, the ability to attract talent that has gone through years and years of training in advanced analytics, computer engineering, deep learning, and machine learning, and that understands the complexities and nuances of training the weights and biases of complex, multilevel deep learning algorithms: that talent is not easy to come by. It's very difficult to take a classical computer engineer and retrain them in that type of statistics-based artificial intelligence, where you're having to really work with training these complex neural networks in order to achieve the goals of the company.

We're seeing the tech companies offer these services on the cloud, and one way to access artificial intelligence and some of these tools is through subscription to APIs, application programming interfaces, and applying those APIs to your platforms and technologies. But to really have a competitive advantage, you need to be able to manipulate, develop, and control the data that goes into training these algorithms. In today's world, artificial intelligence is very, very data hungry, and it requires massive amounts of data to get accurate and high-quality output. That data accrues to the largest companies, and that's reflected in their valuations. So, we see who those companies are. A lot of that value is because of the data they have access to, and the products they're able to produce are based on much of that data. Those products are, many times, powered by artificial intelligence.
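As a rough illustration of the kind of API subscription Cliff describes, here is a minimal sketch in Python. The endpoint, key, and response fields are hypothetical placeholders; every real cloud NLP service defines its own request and response format.

```python
# Minimal, hypothetical sketch of calling a hosted NLP API over HTTP.
# The URL, API key, and response fields below are placeholders, not a real service.
import requests

API_URL = "https://api.example-cloud-ai.com/v1/sentiment"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                    # issued with the subscription

def analyze_sentiment(text: str) -> dict:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    # Illustrative shape only, e.g. {"label": "positive", "score": 0.97}
    return response.json()

if __name__ == "__main__":
    print(analyze_sentiment("The new support portal is fantastic."))
```

The point Cliff makes still holds: the API gives you the capability, but the data you feed it, and the data you control for training your own models, is where the durable advantage sits.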

Laurel: So back to the survey, one last data point here, 60% of respondents say that AI is at least moderately to fully functional in their organization. Compared to 10 years ago, that does seem like real progress for AI. But not everyone is there yet. What are some steps that enterprises can take to become more fully functional with AI?

Cliff: This is where I go back to what I said last year, which is to re-evaluate your ecosystem. Who are your partners? Who is bringing these capabilities into your business? Understand what your options are relative to the technology providers that are giving you access to AI. Not every company is going to be able to just go hire an AI expert and have AI. These are technologies that, like I said, are difficult to develop and difficult to maintain, and they're evolving at a lightning-fast exponential pace. So, the conversations we would have had six months or a year ago would be different now, just because of the pace of change taking place in this environment. Resistance to change is low in AI, and so it's moving faster than Moore's Law; it is accelerating as fast as the data allows. The algorithms themselves have been around for years; it's the ability to capture and use the data that is driving the AI. So, partnering with these capabilities, with the technology companies that have access to data that's relevant to your industry, is a critical element of being successful.

Laurel: When you do talk to executives about how to be successful with AI, how do you advise them if they are behind their competitors and peers in deploying AI?

Cliff: Well, we do surveys like this. We do benchmarks, and we harness benchmarks that are out there in other areas and other domains. We look at the pace of change and the relative benefit to that specific industry and, even more narrowly, to the function or the activity within that industry and that business. AI has not infiltrated every single area yet. It's on the way to doing that, but in areas like customer service, G&A and the back-office components of an organization, manufacturing, analytics, insights, and forecasting, AI has a strong foothold, so we continue to evolve that. But then there are elements of product design, engineering, and other aspects of design that AI is just moving into, where there's barely a level playing field right now.

So, its uneven. Its very advanced in some areas, its not as advanced in others. I would also say that the perception that will come out in the survey of generalists in these areas may not consider some of the more advanced artificial intelligence capabilities that might be six months, a year, or two years down the road. But those capabilities are evolving very quickly and will be moving into these industries quickly. I would also look at the startup ecosystem as well. The startups are evolving quickly. The technologies that a startup is using and introducing into new industries to disrupt those industries are not necessarily being considered by the more established companies who have existing operating models and existing business models. So, a startup may be using AI and data to totally transform how an industry consumes a product or a service.

Laurel: Thats good advice as always. Cliff, thank you so much for joining us today in what has been a great conversation on the Business Lab.

Cliff: My pleasure. Its great talking to you.

Laurel: That was Cliff Justice, the US leader for enterprise innovation for KPMG, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.

That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the Director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts.

If you enjoy this episode, we hope youll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Reviews editorial staff.

Read more:

Embracing the rapid pace of AI - MIT Technology Review

AI in the Courtroom to predict RNAs for offenders – The National Law Review

Posted: at 4:42 am

Judge Xavier Rodriguez wrote a review of the book "When Machines Can Be Judge, Jury, and Executioner: Justice in the Age of Artificial Intelligence," written by former US District Judge Katherine Bolan Forrest, which addresses the growing use of artificial intelligence tools that augment or potentially displace human judgment. Specifically, Forrest focuses on AI assessment tools that are used to generate risk and needs assessments, or RNAs, for offenders. The May 18, 2021 book review, entitled Judging A Book: Rodriguez Reviews 'When Machines Can Be Judge', included these comments:

These RNA tools are often used to guide judicial decisions on whether to grant a criminal defendant bail or remand, and the duration and conditions of a defendant's incarceration.

Forrest's prior service as a judge, her interest in technology, and her easy-to-read writing style makes for an interesting and understandable introduction to AI as it is currently used in the criminal justice process.

Forrest concludes her book with a discussion of how AI has been deployed in lethal autonomous weapons used by our military forces.

In an approving tone, she notes that these weapons can result in increased identification accuracy, allow for dispassionate decision making, and enable quick decisions about whether to engage a target.

Interesting perspective on AI in the Courtroom!

2021 Foley & Lardner LLPNational Law Review, Volume XI, Number 139

Read more:

AI in the Courtroom to predict RNAs for offenders - The National Law Review

The Church of AI is dead… so what’s next for robots and religion? – The Next Web

Posted: at 4:42 am

The Way of the Future, a church founded by a former Google and Uber engineer, is now a thing of the past.

Its been a few months since the worlds first AI-focused church shuttered its digital doors, and it doesnt look like its founder has any interest in a revival.

But its a pretty safe bet well be seeing more robo-centric religious groups in the future. Perhaps, however, they wont be about worshipping the machines themselves.

The world's first AI church, The Way of the Future, was the brainchild of Anthony Levandowski, a former autonomous vehicle developer who was convicted on 33 counts of theft and attempted theft of trade secrets.

In the wake of his conviction, Levandowski was sentenced to 18 months in prison but his sentence was delayed due to COVID and, before he could be ordered to serve it, former president Donald Trump pardoned him.

[Read more: Trump pardoned the guy who founded the church of AI]

The church, prior to Levandowski's conviction, was founded on the basic principle of preparing for a future where benevolent AI rulers held dominion over humans.

That may sound ridiculous but, based on articles such as this one, it seems like he was saying algorithms would help us to live better lives and wed be better off accepting and preparing for that than fighting against what was best for us.

If you ask me: thats the future of AI and religion, just minus the AI overlords part.

Levandowski's church wasn't as wacky as it might sound. Major religious organizations employ AI at various levels, ranging from automaton-style prayer bots to full-on integration of AI-powered enterprise tools.

The Roman Catholic church embraces AI, though with some expected religious caveats. And some Muslim scholars believe the Islamic faith could help free AI technology from its current profit-driven paradigm that places goodness as secondary to profits.

Of course, none of these churches apparently believe that robots will one day deserve our spiritual allegiance as they guide us beyond the mortal coil. But the writing is on the wall for a different kind of AI-powered religious experience.

AI can be a powerful tool due to its ability to surface insights from massive amounts of data. This makes it a prime candidate for religious use, if for no other reason than its a new technology that people still dont quite understand.

In fact, whenever a new paradigm for technology comes along, religious groups tend to spring up in its wake.

When L. Ron Hubbard invented the e-meter in 1952, for example, it was based on the pseudoscientific technology behind the polygraph. A year later he founded the Church of Scientology.

The Tech is a bedrock of Scientology belief. Though the use of the term specifically seems to address techniques used to propagate the religions ideas, Hubbards writing and speeches tend to embrace technology as an important part of the religion.

Hubbard's initial works spanned hundreds of texts, books, and speeches. But the onset of accessible television technology and mass media in the 1960s led to the founding of Golden Era Productions, a state-of-the-art production facility where, to this day, all of Scientology's videos are still produced.

Later, in 1974, a pair of UFO enthusiasts founded Heaven's Gate, a religious group that was also heavily influenced by technology throughout its existence.

Originally, the founders told followers a literal spaceship would come for them. But as technology advanced, and personal computers and the internet began to flourish, the group supported itself by designing websites. Some experts even believe some of the group's beliefs were based on mystical interpretations of computer code.

Both of these groups saw their genesis at technological inflection points. Scientology began in the wake of the Second World War. When the war started, many soldiers were still fighting on horseback and radar hadn't been invented. By the time WWII was over, technology had advanced to an unrecognizable state.

And Heavens Gate came to prominence just as personal computers and the internet were bringing the most curious, technologically-inclined people together around the globe.

Technology shifts that redefine the general public perception of whats possible tend to spur revolution in all domains and religion is no exception.

AI is a backbone technology. As such, its use by religious groups in the future will likely be as ubiquitous as their use of electricity or the internet.

After all, priests and pastors look things up on Google and chat on Facebook just like the rest of us. Its easy to imagine churches implementing AI stacks in their IT setups to help them with everything from record-keeping to building out chatbots that can surface ecclesiastical documents for parishioners on demand.
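To make the chatbot idea concrete, here is a tiny, purely hypothetical sketch of the retrieval step: a TF-IDF index over a few made-up parish documents that returns the closest match to a parishioner's question. Nothing here refers to any actual church's systems.

```python
# Illustrative sketch of "surface documents on demand": a tiny TF-IDF retriever.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Schedule of weekly services and holiday observances.",
    "Guidelines for requesting a baptism or wedding ceremony.",
    "Annual report on charitable donations and community outreach.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

query = "how do I arrange a wedding?"
query_vec = vectorizer.transform([query])
best = cosine_similarity(query_vec, doc_vectors).argmax()
print(documents[best])  # returns the ceremony guidelines document
```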

But there are other, less technology-based ways AI tech could be employed, and in these cases, the past is prescient.

If we use Scientology as an example, we can see a direct correlation between their e-meters and the modern AI paradigm where machine learning models require a human in the loop to be considered fully-functional.

Per the Church of Scientology, the e-meter device by itself does nothing. Basically, the e-meter is a piece of technology that doesn't work unless someone trained in its spiritual applications wields it.

There are thousands of AI systems that work the exact same way. Developers claim their work can do everything from predicting crime using historical police reports to determining whether someone is a terrorist from nothing but an image of their face.

Of course, these systems dont actually work. Theyre just like e-meters in that they can be demonstrated to perform a specific function (AI parses data, e-meters measure a small amount of electrical activity in our skin), but that function has nothing to do with what users are told theyre being employed for.

In other words: e-meters don't actually measure anything related to what auditors use them for; they're much like the EMF meters that ghost hunters use to prove that ghosts exist.

And, in that exact same vein: AI can't tell if you're a terrorist by looking at your face. But it can be trained to label output data any way you want it to.

If you think all white men with mustaches are porn stars, you can train an AI to always identify them that way. If you want to label a group of people terrorists, you can train AI to label people who look a certain way as terrorists.

And, since it all happens in a black box, it's impossible for developers to explain exactly how they work; you simply have to have faith.
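The mechanics behind that claim are easy to demonstrate. In this minimal sketch (synthetic data, no real system), the training labels are nothing but an arbitrary attribute, and the model dutifully learns to reproduce that bias with high confidence:

```python
# A minimal sketch, not any vendor's system: a classifier trained on arbitrarily
# biased labels simply reproduces those labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "people": column 0 is an irrelevant attribute (e.g. "has mustache"),
# column 1 is random noise standing in for everything else about the person.
X = np.column_stack([rng.integers(0, 2, 1000), rng.normal(size=1000)])

# The labeler decides the irrelevant attribute *is* the label.
y = X[:, 0].astype(int)

model = LogisticRegression().fit(X, y)

# The model now "predicts" the category with near-perfect accuracy, not because it
# found truth, but because the labels encoded the bias in the first place.
print(model.score(X, y))          # ~1.0
print(model.predict([[1, 0.0]]))  # labeled "positive" solely because of the attribute
```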

It is a demonstrable fact that AI systems and databases are inherently biased. And, to date, billion- and trillion-dollar enterprises such as Google, Amazon, Facebook, Microsoft, and OpenAI have yet to come close to solving this problem.

We know these systems dont work, yet some of the most prestigious universities and largest companies in the world use them.

These broken, unfinished systems continue to proliferate because people have faith in them, no matter what the experts say.

We truly do live in a faith-based world when it comes to AI. When Elon Musk takes his hands off the wheel of his Tesla for minutes at a time during a televised interview, hes showing you that a billionaire genius has faith, and hes asking you to believe too.

We know it's faith-based because, when it comes to brass tacks, Tesla requires drivers to keep their hands on the wheel and their eyes on the road at all times. Numerous accidents have occurred as a result of consumers misusing Tesla's Autopilot and Full Self-Driving technologies, and in every case where users took their hands off the wheel, Tesla claimed the driver was responsible.

Apparently, Musk's faith in his product ends where Tesla's liability begins.

When facial recognition software companies tell us their products work, we believe them. We take it on faith because theres literally no way to prove the products do what they claim to do. When a facial recognition system gets something wrong, or for that matter, even when they get something right: we cannot know how it came to the result it did because these products do their work inside of a black box.

And when so-called emotion-recognition systems attempt to predict human emotions, motivations, or sentiments, they require a huge leap of faith to believe. This is because we can easily demonstrate they don't function properly when exposed to conditions that don't fall within their particular biases.

Eventually, we hope real researchers and good actors will find a way to convince people that these systems are bunk. But it stands to reason theyre never going away.

They allow businesses to discriminate with impunity, courts to issue demonstrably racist sentences without accountability, and police to practice profiling and skip the warrant process without reprisal. Deep learning systems that make judgements on people allow humans to pass the buck, and as long as there are bigots and misogynists in the world these tools, no matter how poorly they function, will be useful.

On the other hand, its also clear this technology is extremely well-suited for religious use. Where Levandowski understood the power of algorithms as tools for, potentially, helping humans to live a better life, others will surely see a mechanism by which religious subjects can be uniformly informed, observed, and directed.

Whether this results in a positive experience or a negative one would be entirely dependent on how, exactly, religious groups chose to deploy these technologies.

As a simple, low-hanging fruit example, if an e-meter that, by itself, does nothing can become the core technology behind a religious group boasting tens of thousands of people, it stands to reason that deep learning-based emotion recognition systems and other superfluous AI models will certainly wind up in the hands of similar organizations.

When it comes to artificial intelligence technology and religion, Id wager the way of the future is the way of the past.

Follow this link:

The Church of AI is dead... so what's next for robots and religion? - The Next Web

Kore.ai Launches SmartAssist in Japanese to Deliver AI-powered Call Center Automation – TechDecisions

Posted: at 4:42 am

ORLANDO, Fla. & TOKYO(BUSINESS WIRE)Kore.ai, a leading conversational AI software company, has today announced the launch of its AI powered contact center-as-a-service (CCaaS) solution, SmartAssist, in Japanese. SmartAssist will enable enterprises in Japan and Asia-Pacific transform their customer service operations through the use of AI.

Built on Kore's no-code Conversational AI platform, SmartAssist provides end-to-end call automation for inbound customer service calls through a combination of conversational IVR, virtual assistants, and call deflection. Through a simple SIP transfer, SmartAssist deflects calls to the appropriate virtual or live assistants. SmartAssist also gives customers automated speech recognition (ASR) and text-to-speech (TTS), making it easier for IVR-enabled call centers to enhance their support technology stack.

SmartAssist is backed by Kore's multi-engine natural language processing (NLP) technology to automate sophisticated conversations with personalization and relevant context. It also supports omnichannel deployment and remembers the context when the customer shifts from one channel to another in the course of a dialog, ensuring a consistent experience. And when a live agent needs to be brought onto the call, SmartAssist passes on all of the call history and caller details, making it easier for agents to take the call forward.
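Purely as an illustration of what "passing on the call history" can mean in practice (this is not Kore's API; every name below is hypothetical), a cross-channel conversation record might look like the following sketch:

```python
# Hypothetical sketch of cross-channel context handoff; not Kore's actual API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Turn:
    channel: str      # "ivr", "chat", "voice", ...
    speaker: str      # "caller" or "assistant"
    text: str

@dataclass
class Conversation:
    caller_id: str
    turns: List[Turn] = field(default_factory=list)

    def add(self, channel: str, speaker: str, text: str) -> None:
        self.turns.append(Turn(channel, speaker, text))

    def handoff_summary(self) -> str:
        # What a live agent would receive on escalation: the full history,
        # regardless of which channel each turn came from.
        return "\n".join(f"[{t.channel}] {t.speaker}: {t.text}" for t in self.turns)

convo = Conversation(caller_id="caller-001")
convo.add("chat", "caller", "I want to change my billing address.")
convo.add("chat", "assistant", "Sure, can you confirm your account number?")
convo.add("voice", "caller", "I'd rather finish this over the phone.")
print(convo.handoff_summary())  # the agent sees the earlier chat turns too
```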

The Covid-19 pandemic has accelerated the trend toward cloud contact centers and the need for automation and digital customer support. Conversational AI will play a key role in this transformation by driving contact center innovation and improving agent productivity. The Japanese version of SmartAssist will help enterprise customers in this region improve time-to-market and enhance the customer experience in their native language, said Sreeni Unnamatla, Executive Vice President, Asia Pacific and Japan.

Kore is helping Global 2000 enterprises automate routine business interactions and create omnichannel experiences for their customers. Kore is unique in the conversational AI market in that it allows customers to build virtual assistants through the company's no-code platform and also deploy pre-built virtual assistants for banking, healthcare, and functional areas such as HR, IT support, and sales. Kore differentiates itself through its conversational UX, superior NLP, explainable AI, and a no-code unified platform that empowers people to use technology to transform how their business operates.

About Kore.ai

Kore increases the speed of business by automating customer and employee interactions through digital virtual assistants built on its market-leading conversational AI platform. Companies who prioritize customer and employee experience use Kores no-code conversational AI platform to raise NPS and lower operational costs. The top 4 banks, top 3 healthcare businesses, and over 100 Fortune 500 companies have automated a billion interactions since Kore was founded in 2015, and its pre-built industry and functional virtual assistants have made it easier and faster for these top-performing businesses to scale the impact of front office automation. Kore has been recognized as a leader by top analysts and ensures the success of its customers through a growing team headquartered in Orlando with offices in India, the UK, Japan, and Europe.

Visit kore.ai to learn more.

Contacts

Media Contacts: Karthik G

karthik.gandrajupalli@kore.com

Visit link:

Kore.ai Launches SmartAssist in Japanese to Deliver AI-powered Call Center Automation - TechDecisions

Google made AI language the centerpiece of I/O while ignoring its troubled past at the company – The Verge

Posted: at 4:42 am

Yesterday at Googles I/O developer conference, the company outlined ambitious plans for its future built on a foundation of advanced language AI. These systems, said Google CEO Sundar Pichai, will let users find information and organize their lives by having natural conversations with computers. All you need to do is speak, and the machine will answer.

But for many in the AI community, there was a notable absence in this conversation: Googles response to its own research examining the dangers of such systems.

In December 2020 and February 2021, Google first fired Timnit Gebru and then Margaret Mitchell, co-leads of its Ethical AI team. The story of their departure is complex but was triggered by a paper the pair co-authored (with researchers outside Google) examining risks associated with the language models Google now presents as key to its future. As the paper and other critiques note, these AI systems are prone to a number of faults, including the generation of abusive and racist language; the encoding of racial and gender bias through speech; and a general inability to sort fact from fiction. For many in the AI world, Googles firing of Gebru and Mitchell amounted to censorship of their work.

For some viewers, as Pichai outlined how Googles AI models would always be designed with fairness, accuracy, safety, and privacy at heart, the disparity between the companys words and actions raised questions about its ability to safeguard this technology.

Google just featured LaMDA a new large language model at I/O, tweeted Meredith Whittaker, an AI fairness researcher and co-founder of the AI Now Institute. This is an indicator of its strategic importance to the Co. Teams spend months preping these announcements. Tl;dr this plan was in place when Google fired Timnit + tried to stifle her+ research critiquing this approach.

Gebru herself tweeted, This is what is called ethics washing referring to the tech industrys tendency to trumpet ethical concerns while ignoring findings that hinder companies ability to make a profit.

Speaking to The Verge, Emily Bender, a professor at the University of Washington who co-authored the paper with Gebru and Mitchell, said Googles presentation didnt in any way assuage her concerns about the companys ability to make such technology safe.

From the blog post [discussing LaMDA] and given the history, I do not have confidence that Google is actually being careful about any of the risks we raised in the paper, said Bender. For one thing, they fired two of the authors of that paper, nominally over the paper. If the issues we raise were ones they were facing head on, then they deliberately deprived themselves of highly relevant expertise towards that task.

In its blog post on LaMDA, Google highlights a number of these issues and stresses that its work needs more development. Language might be one of humanitys greatest tools, but like all tools it can be misused, writes senior research director Zoubin Ghahramani and product management VP Eli Collins. Models trained on language can propagate that misuse for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information.

But Bender says the company is obfuscating the problems and needs to be clearer about how its tackling them. For example, she notes that Google refers to vetting the language used to train models like LaMDA but doesnt give any detail about what this process looks like. Id very much like to know about the vetting process (or lack thereof), says Bender.
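Google has not described what its vetting involves, which is precisely Bender's point. For readers unfamiliar with the general practice, corpus filtering in its simplest form can look like the following generic sketch; it is illustrative only and is not Google's process.

```python
# Generic, illustrative corpus-filtering pass; not Google's (undisclosed) vetting process.
BLOCKLIST = {"badword1", "badword2"}        # placeholder terms

def keep(document: str) -> bool:
    words = document.lower().split()
    if any(word in BLOCKLIST for word in words):
        return False                        # drop documents containing blocked terms
    return len(words) >= 20                 # drop very short fragments as a crude quality check

corpus = [
    "a short fragment",
    "a longer scraped document " * 10,      # passes the length heuristic
    "badword1 appears in this otherwise long document " * 5,
]
vetted = [doc for doc in corpus if keep(doc)]
print(len(vetted))                          # 1: only the clean, long document survives
```

Whether the real process is this crude or far more sophisticated is exactly the detail Bender says the company should disclose.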

It was only after the presentation that Google made any reference to its AI ethics unit at all, in a CNET interview with Google AI chief Jeff Dean. Dean noted that Google had suffered a real reputational hit from the firings, something The Verge has previously reported, but said that the company had to move past these events. We are not shy of criticism of our own products, Dean told CNET. As long as it's done with a lens towards facts and appropriate treatment of the broad set of work we're doing in this space, but also to address some of these issues.

For critics of the company, though, the conversation needs to be much more open than this.

Read more:

Google made AI language the centerpiece of I/O while ignoring its troubled past at the company - The Verge

We Must Remain Open to the Future Possibilities of AIEven if it Means Replacing Humans – IPWatchdog.com

Posted: at 4:42 am

To conclude that AI serving as an aid to human thinking is necessarily better than possibly replacing some aspects of human decision making, when we simply dont yet have the technological capability to test one over the other, would fall into the logical fallacy of equating a presumption with a conclusion.

In response to our recent article on artificial intelligence (AI) reducing transactional costs to help determine infringement and invalidity determinations, a commenter made an interesting counterpoint, paraphrased as the following: AI provides useful tools that should be used as an aid to human thinkers, not as a replacement to human thinking. Moreover, when it comes to AI making subjective determinations, such as obviousness or novelty, we should be skeptical of relying on AI, either legally or practically.

We appreciate the counterpoint and we wanted to address it in this follow-up article.

How do we determine the best role for AI in our patent system? If we have a choice between AI serving as an aid to human thinking, or possibly replacing some aspects of human decision making, what is the correct choice? What would better serve to improve our patent system?

To have a productive discussion regarding AI's proper place in our patent system, we first need to understand what improvement to our patent system means. Only when we have common ground as to how to assess improvement can we discuss which role for AI would better implement that improvement.

To frame our understanding of such improvement, let us look at the patent system as it stands today.

The judiciary sits in the middle of the patent-transaction ecosystem. When presented with a case, the judiciary performs a two-step process. It takes on the role as (1) the arbiter of factual and legal contentions between parties, and (2) the enforcer of its ultimate decision. In its role as arbiter, the court determines informational attributes relating to patent validity, scope, and infringement. After this determination is made, it then enforces that decision.

In this system, the presumption is that the basic informational attributes of a patent are either unknown or at best contested, and we need the court system to make this determination.

Herein lies the problem with our patent system. Relying on the court to determine the basic informational attributes of a patent is both costly and inefficient. It costs millions of dollars and takes several years to determine whether a patent is valid, whether it is infringed, and what the damages are.

Because the court is so inefficient at making these informational determinations regarding a patent, enforcement costs in turn are extremely high.

Further, these information and enforcement costs are intermingled; meaning, you cannot enforce a patent unless the same court system first determines the basic informational attributes of a patent, resulting in a costly self-perpetuating cycle of inefficiency.

Why is this important to understand when framing a discussion about the patent system and patent transactions within that system?

As Douglass C. North pointed out in his 1992 paper Transaction Costs, Institutions, and Economic Performance, the framework of our patent system creates the actors that operate within it.

The constraints imposed by the institutional framework (together with the other standard constraints of economics) define the opportunity set and therefore the kind of organizations that will come into existence.

North gave a very powerful example:

If the highest rates of return in a society are from piracy, then organizations will invest in knowledge and skills that will make them better pirates; if the payoffs are highest from increasing productivity, then firms and other organizations will invest in skills and knowledge that achieve that objective.

In our present-day patent system, extremely high informational costs create the economic driver to reduce enforcement and bargaining costs between parties in a patent transaction.

Put another way, the court systems high informational cost structure creates a driver to minimize enforcement costs, which manifests in todays patent litigation as early settlements that are below the cost of determining the informational attributes of a patent.

Put yet another way, the court systems high informational cost structure creates the economic driver for low-value and nuisance patent litigation (see part I and part II of an analysis relating to how we have historically misdirected patent policy to deter such nuisance patent litigation).
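A quick back-of-the-envelope illustration (the figures below are hypothetical, not drawn from the article) shows why this driver exists: when finding out the truth costs more than the demand, paying the demand is the rational move regardless of the patent's merit.

```python
# Hypothetical figures illustrating why high information costs invite nuisance suits.
defense_cost = 2_000_000          # cost for the accused infringer to litigate to a decision
chance_patent_prevails = 0.10     # even assuming a weak patent
damages_if_it_prevails = 1_000_000

expected_cost_of_fighting = defense_cost + chance_patent_prevails * damages_if_it_prevails
print(expected_cost_of_fighting)  # 2,100,000.0

# Any settlement demand below that number is cheaper than learning whether the
# patent is actually valid and infringed, so a demand of, say, $250,000 gets paid.
```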

North recognized the central role of informational costs: [t]he cost of transacting arises because information is costly and held asymmetrically by the parties to exchange.

In a perfect patent system, the informational attributes of a patent are efficient to determine and known to both parties. When informational costs are low and informational attributes are known to both parties, the following occurs:

North describes this as the zero-cost transaction. This is a perfect system in which there are no transactional costs between a patent holder and an alleged infringer reaching an agreement on a patent transaction. The only money spent, if any, is for the value of a patent license.

So, when we are thinking about patent reform, the discussion should be centered on how do we approach a zero-cost transaction for patent transactions? This sets the standard for improvement.

Assuming we are on the same page regarding what it means to improve our patent system, this frames the next question: between (1) AI serving as an aid to human thinking, or (2) possibly replacing some aspects of human decision making, which of the two better serves to improve our patent system?

At this point, I dont believe we can actually answer that question, because we dont live in a world where AI can reliably replace aspects of human thinking with respect to our patent system.

But to conclude one is necessarily better than the other, when we simply dont have the technological capability to test one over the other, would fall into the logical fallacy of equating a presumption with a conclusion.

Instead, North would offer a different approach. He described characteristics of successful institutions. Namely, institutions that allow for decentralized decision-making and trial and error see greater success over time.

Therefore, institutions should encourage trials and eliminate errors. A logical corollary is decentralized decision making that will allow a society to explore many alternative ways to solve problems.

Applying Norths teachings to our patent system, he would recommend we test different methodologies to determine informational attributes of a patent and learn through trial and error which methodology best reduces informational costs. Only when we have the opportunity to apply and test different methodologies to determine informational attributes of a patent will we truly learn which method is best.

North certainly factored in the use of technology and technologys role in an institution:

Institutions, together with the technology employed, affect economic performance by determining transaction and transformation (production) costs.

Relying on the teachings of North, we should actively test AI in different applications and scenarios and determine which would allow us to approach a zero-cost transaction, particularly zero costs to determine the informational attributes of a patent.

But to enable us to test AI effectively, we cannot foreclose ourselves to the possibility that AIs proper place could be to actually replace some aspects of human thinking.

If AI replacing human decision making in certain circumstances would enable a zero-cost patent transaction, then this may be the proper place for AI in the patent system. But if using AI as a mere tool to aid human thinking enables us to approach this zero-cost transaction, then this may instead be the best role for AI.

In essence, lets not put the cart before the horse when making determinations regarding AIs proper role in our patent system. To improve our patent system, we need to come to common understanding on the key problem it faces, namely, its unsound economic underpinnings. And we need to allow ourselves greater flexibility to test different methods and technology to improve the patent system by helping us to eliminate, or at least significantly reduce, the high costs and inefficiencies of determining the informational attributes of a patent.

Gau Bodepudi is the Managing Director at and co-founder of IP EDGE LLC. He has more than 12 years' experience in all aspects of patent management and monetization, including strategic prosecution, litigation, licensing, brokering, and portfolio management within various technological fields such as ecommerce, consumer electronics, networking, financial services, mobile communications, and automotive technologies. Mr. Bodepudi also created a patent monetization blog, InvestInIP.com, where he writes on patent reform and policy.

Eesha Kumar is an intern at IP EDGE LLC. She graduated with a bachelors degree in political science from The University of Georgia and is planning on attending law school.

Read the rest here:

We Must Remain Open to the Future Possibilities of AIEven if it Means Replacing Humans - IPWatchdog.com
