The Prometheus League
Breaking News and Updates
Monthly Archives: September 2021
Explainable AI Is the Future of AI: Here Is Why – CMSWire
Posted: September 27, 2021 at 5:36 pm
Artificial intelligence is going mainstream. If you're using Google Docs, Ink for All or any number of digital tools, AI is being baked in. AI is already making decisions in the workplace, around hiring, customer service and more. However, a recurring issue with AI is that it can be a bit of a "black box": a mystery as to how it arrived at its decisions. Enter explainable AI.
Explainable Artificial Intelligence, or XAI, is similar to a normal AI application except that the processes and results of an XAI algorithm can be explained so that they are understandable by humans. The complex nature of artificial intelligence means that AI makes decisions in real time based on the insights it has discovered in the data it has been fed. When we do not fully understand how AI is making these decisions, we cannot fully optimize an AI application to be all that it is capable of. XAI enables people to understand how AI and machine learning (ML) are being used to make decisions and predictions and to generate insights. Explainable AI allows brands to be transparent in their use of AI applications, which increases user trust and the overall acceptance of AI.
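To make "explainable" concrete, here is a minimal sketch of one widely used post-hoc explanation technique, permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The synthetic data and model choice are illustrative assumptions, not a method endorsed by any vendor named in this article.

```python
# Minimal post-hoc explainability sketch: permutation importance scores how
# much a black-box model relies on each input feature. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)  # the "black box"

# Shuffling an important feature hurts accuracy; shuffling an irrelevant one
# barely matters. The scores form a simple, human-readable explanation.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

A score table this simple is not the whole of XAI, but it illustrates the goal: turning an opaque prediction into something a stakeholder can inspect and contest.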
There is a valid need for XAI if AI is going to be used across industries. According to a report by FICO, 65% of surveyed employees could not explain how AI model decisions or predictions are determined. The benefits of XAI are beginning to be well recognized, and not just by scientists and data engineers. The European Union's draft AI regulations specify XAI as a prerequisite for the eventual normalization of machine learning in society. Standardization organizations including the European Telecommunications Standards Institute (ETSI) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) also recognize the importance of XAI to the acceptance of, and trust in, AI in the future.
Philip Pilgerstorfer, data scientist and XAI specialist at QuantumBlack, has indicated that XAI offers a range of benefits.

The need arises because the majority of AI built with ML operates in what is referred to as a black box: an area that is unable to provide any discernible insight into how it comes to make decisions. Many AI/ML applications are moderately benign decision engines, such as those used in online retail recommender systems, where it is not absolutely necessary to ensure transparency or explainability. For other, riskier decision processes, such as medical diagnoses in healthcare, investment decisions in the financial industry, and safety-critical systems in autonomous automobiles, the stakes are much higher. As such, the AI used in those systems should be explainable, transparent, and understandable in order to be trusted, reliable, and consistent.
When brands are better able to understand potential weaknesses and failures in an application, they are better prepared to maximize performance and improve the AI app. Explainable AI enables brands to more easily detect flaws in the data model, as well as biases in the data itself. It can also be used for improving data models, verifying predictions, and gaining additional insights into what is working, and what is not.
"Explainable AI has the benefit of allowing us to understand what has gone wrong, and where it has gone wrong, in an AI pipeline when the whole AI system makes an erroneous classification or prediction," said Marios Savvides, Bossa Nova Robotics Professor of Artificial Intelligence, Electrical and Computer Engineering and director of the CyLab Biometrics Center at Carnegie Mellon University. "These are the benefits of an XAI pipeline. In contrast, a conventional AI system involving a complete end-to-end black-box deep learning solution is more complex to analyze and more difficult to pinpoint exactly where and why an error has occurred."
Many businesses today use AI/ML applications to automate the decision-making process, as well as to gain analytical insights. Data models can be trained so that they are able to predict sales based on variable data, while an explainable AI model would enable a brand to increase revenue by determining the true drivers of sales.
Kevin Hall, CTO and co-founder of Ripcord, an organization that provides robotics, AI and machine learning solutions, explained that although AI-enabled technologies have proliferated throughout enterprise businesses, complexities remain that prevent widespread adoption, largely because AI is still mysterious and complicated for most people. "In the case of intelligent document processing (IDP), machine learning (ML) is an incredibly powerful technology that enables higher accuracy and increased automation for document-based business processes around the world," said Hall. "Yet the performance and continuous improvement of these models is often limited by a complexity barrier between technology platforms and critical knowledge workers or end-users. By making the results of ML models more easily understood, Explainable AI will allow for the right stakeholders to more directly interact with and improve the performance of business processes."
It's a fact that unconscious or algorithmic biases are built into AI applications. That's because no matter how advanced or smart an AI app is, or whether it uses ML or deep learning, it was developed by human beings, each of whom has their own unconscious biases, and a biased data set was used to train the AI algorithm. "Explainable AI systems can be architected in a way to minimize bias dependencies on different types of data, which is one of the leading issues when complete black box solutions introduce biases and make errors," explained Professor Savvides.
A recent CMSWire article on unconscious biases reflected on Amazon's failed use of AI for vetting job applications. Although the shopping giant did not use prejudiced algorithms on purpose, its data set looked at hiring trends over the previous decade and suggested hiring similar job applicants for positions with the company. Unfortunately, the data revealed that the majority of those who were hired were white males, a fact that itself reveals the biases within the IT industry. Eventually, Amazon gave up on the use of AI for its hiring practices and went back to relying on human decision-making. Many other biases can sneak into AI applications, including racial bias, name bias, beauty bias, age bias, and affinity bias.
Fortunately, XAI can be used to surface and reduce unconscious biases within AI data sets. Several AI organizations, including OpenAI and the Future of Life Institute, are working with other businesses to ensure that AI applications are ethical and equitable for all of humanity.
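One way such biases are surfaced in practice is with simple audit metrics computed over a model's decisions. Below is a minimal sketch of one such metric, the demographic parity gap; the decisions and group labels are hypothetical, and real audits combine several metrics with human review.

```python
# Demographic parity gap: difference in positive-decision rates between two
# groups. A large gap is a flag for review, not proof of discrimination.
import numpy as np

def demographic_parity_gap(y_pred, group):
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Hypothetical hiring-model outputs: 1 = advance the applicant, 0 = reject.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, groups))  # 0.2 -> worth investigating
```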
Being able to explain why a person was not selected for a loan or a job will go a long way toward improving public trust in AI algorithms and machine learning processes. "Whether these models are clearly detailing the reason why a loan was rejected or why an invoice was flagged for fraud review, the ability to explain the model results will greatly improve the quality and efficiency of many document processes, which will lead to cost savings and greater customer satisfaction," said Hall.
Along with the unconscious biases we previously discussed, XAI has other challenges to conquer.
Professor Savvides said that XAI systems need architecting into different sub-task modules where sub-module performance can be analyzed. The challenge is that these different AI/ML components need compute resources and require a data pipeline, so in general they can be more costly than an end-to-end system from a computational perspective.
There is also the issue of additional errors for an XAI algorithm, but there is a tradeoff, because errors in an XAI algorithm are easier to track down. "Additionally, there may be cases where a black-box approach may give fewer performance errors than an XAI system," he said. "However, there is no insight into the failure of the traditional AI approach other than trying to collect these cases and re-train, whereas the XAI system may be able to pinpoint the root cause of the error."
As AI applications become smarter and are used in more industries to solve bigger and bigger problems, the need for a human element in AI becomes more vital. XAI can help provide just that.
"The next frontier of AI is the growth and improvements that will happen in Explainable AI technologies. They will become more agile, flexible, and intelligent when deployed across a variety of new industries. XAI is becoming more human-centric in its coding and design," reflected AJ Abdallat, CEO of Beyond Limits, an enterprise AI software solutions provider. "We've moved beyond deep learning techniques to embed human knowledge and experiences into the AI algorithms, allowing for more complex decision-making to solve never-seen-before problems: those problems without historical data or references. Machine learning techniques equipped with encoded human knowledge allow for AI that lets users edit their knowledge base even after it's been deployed. As it learns by interacting with more problems, data, and domain experts, the systems will become significantly more flexible and intelligent. With XAI, the possibilities are truly endless."
Artificial intelligence is being used across many industries to provide everything from personalization and automation to financial decisioning, recommendations, and healthcare. For AI to be trusted and accepted, people must be able to understand how AI works and why it comes to the decisions it makes. XAI represents the evolution of AI, and offers opportunities for industries to create AI applications that are trusted, transparent, unbiased, and justified.
The limitations of AI safety tools – VentureBeat
Posted: at 5:36 pm
In 2019, OpenAI released Safety Gym, a suite of tools for developing AI models that respect certain safety constraints. At the time, OpenAI claimed that Safety Gym could be used to compare the safety of algorithms and the extent to which those algorithms avoid making harmful mistakes while learning.
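Safety Gym's key design choice is exposing safety as a signal separate from reward. A minimal sketch of that interface follows, assuming the standard Gym API the project uses; the environment name is one of Safety Gym's published benchmark tasks, and the random agent is purely illustrative.

```python
# Sketch of Safety Gym's interface: reward comes back as usual, while safety
# violations arrive as a separate per-step "cost" in the info dict.
import gym
import safety_gym  # noqa: F401  (importing registers the Safexp-* environments)

env = gym.make("Safexp-PointGoal1-v0")
obs = env.reset()
total_reward, total_cost = 0.0, 0.0

for _ in range(1000):
    action = env.action_space.sample()  # stand-in for a learned policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
    total_cost += info.get("cost", 0.0)  # constraint violations this step
    if done:
        obs = env.reset()

print(total_reward, total_cost)
# A constrained RL algorithm maximizes total_reward subject to keeping
# total_cost under a budget, rather than folding cost into the reward.
```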
Since then, Safety Gym has been used in measuring the performance of proposed algorithms from OpenAI as well as researchers from the University of California, Berkeley and the University of Toronto. But some experts question whether AI safety tools are as effective as their creators purport them to be or whether they make AI systems safer in any sense.
"OpenAI's Safety Gym doesn't feel like ethics washing so much as maybe wishful thinking," Mike Cook, an AI researcher at Queen Mary University of London, told VentureBeat via email. "As [OpenAI] note[s], what they're trying to do is lay down rules for what an AI system cannot do, and then let the agent find any solution within the remaining constraints. I can see a few problems with this, the first simply being that you need a lot of rules."
Cook gives the example of telling a self-driving car to avoid collisions. This wouldn't preclude the car from driving two centimeters away from other cars at all times, he points out, or doing any number of other unsafe things in order to optimize for the constraint.
"Of course, we can add more rules and more constraints, but without knowing exactly what solution the AI is going to come up with, there will always be a chance that it will be undesirable for one reason or another," Cook continued. "Telling an AI not to do something is similar to telling a three-year-old not to do it."
Via email, an OpenAI spokesperson emphasized that Safety Gym is only one project among many that its teams are developing to make AI technologies safer and more responsible.
"We open-sourced Safety Gym two years ago so that researchers working on constrained reinforcement learning can check whether new methods are improvements over old methods, and many researchers have used Safety Gym for this purpose," the spokesperson said. "[While] there is no active development of Safety Gym, since there hasn't been a sufficient need for additional development, we believe research done with Safety Gym may be useful in the future in applications where deep reinforcement learning is used and safety concerns are relevant."
The European Commission's High-level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, have attempted to create standards for building trustworthy, safe AI. Absent safety considerations, AI systems have the potential to inflict real-world harm, for example leading lenders to turn down people of color more often than applicants who are white.
Like OpenAI, Alphabet's DeepMind has investigated a method for training machine learning systems in both a safe and constrained way. It's designed for reinforcement learning systems, or AI that's progressively taught to perform tasks via a mechanism of rewards or punishments. Reinforcement learning powers self-driving cars, dexterous robots, drug discovery systems, and more. But because they're predisposed to explore unfamiliar states, reinforcement learning systems are susceptible to what's called the safe exploration problem, where they become fixated on unsafe states (e.g., a robot driving into a ditch).
DeepMind claims its safe training method is applicable to environments (e.g., warehouses) in which systems (e.g., package-sorting robots) don't know where unsafe states might be. By encouraging systems to explore a range of behaviors through hypothetical situations, it trains the systems to predict rewards and unsafe states in new and unfamiliar environments.
"To our knowledge, [ours] is the first reward modeling algorithm that safely learns about unsafe states and scales to training neural network reward models in environments with high-dimensional, continuous states," wrote the coauthors of the study. "So far, we have only demonstrated the effectiveness of [the algorithm] in simulated domains with relatively simple dynamics. One direction for future work is to test [the algorithm] in 3D domains with more realistic physics and other agents acting in the environment."
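A common way to act on such cost signals, used by several of the algorithms benchmarked on Safety Gym, is a Lagrangian formulation: maximize reward minus a learned penalty on constraint violations. The toy problem below is an illustrative sketch of that idea, not DeepMind's or OpenAI's actual method.

```python
# Toy Lagrangian-constrained policy search: maximize reward while keeping
# expected cost under a budget d, by dual ascent on a multiplier lam.
import random

def rollout(theta):
    """Toy world: larger theta earns more reward but incurs more cost."""
    reward = theta + random.gauss(0, 0.1)
    cost = 0.5 * theta ** 2 + random.gauss(0, 0.1)
    return reward, cost

d = 0.5                      # cost budget
theta, lam = 0.0, 0.0
lr_theta, lr_lam = 0.05, 0.1

for _ in range(2000):
    reward, cost = rollout(theta)
    # Ascend the Lagrangian L = reward - lam * (cost - d); for this toy world
    # dreward/dtheta = 1 and dcost/dtheta = theta.
    theta += lr_theta * (1.0 - lam * theta)
    # Raise lam when cost exceeds the budget, relax it otherwise.
    lam = max(0.0, lam + lr_lam * (cost - d))

print(f"theta={theta:.2f}, lambda={lam:.2f}")  # theta settles near 1.0, on budget
```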
Firms like Intel's Mobileye and Nvidia have also proposed models to guarantee safe and logical AI decision-making, specifically in the autonomous car realm.
In October 2017, Mobileye released a framework called Responsibility-Sensitive Safety (RSS), a deterministic formula with logically provable rules of the road intended to prevent self-driving vehicles from causing accidents. Mobileye claims that RSS provides a common sense approach to on-the-road decision-making that codifies good habits, like maintaining a safe following distance and giving other cars the right of way.
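RSS's central idea can be made concrete with its published formula for the minimum safe longitudinal distance between two cars. The sketch below implements that formula; the parameter values are illustrative assumptions, not Mobileye's calibrated figures.

```python
def rss_min_following_distance(v_rear, v_front, rho=1.0,
                               a_accel=2.0, b_min=4.0, b_max=8.0):
    """RSS minimum safe following distance, in meters.

    v_rear, v_front: car speeds (m/s); rho: response time (s);
    a_accel: rear car's max acceleration during its response time;
    b_min: rear car's guaranteed minimum braking deceleration;
    b_max: front car's maximum possible braking deceleration.
    """
    v_resp = v_rear + rho * a_accel  # rear car's speed after the response time
    d = (v_rear * rho
         + 0.5 * a_accel * rho ** 2
         + v_resp ** 2 / (2 * b_min)    # rear car's worst-case stopping distance
         - v_front ** 2 / (2 * b_max))  # front car's best-case stopping distance
    return max(0.0, d)

# Two cars at highway speed (30 m/s): the rear car must keep back ~100 m.
print(round(rss_min_following_distance(30.0, 30.0), 1))
```

The appeal of a closed-form rule like this is exactly what Mobileye claims for RSS: the safety property can be proven, rather than emerging from a trained model.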
Nvidia's take on the concept is Safety Force Field, which monitors unsafe actions by analyzing sensor data and making predictions with the goal of minimizing harm and potential danger. Leveraging mathematical calculations that Nvidia says have been validated in real-world and synthetic highway and urban scenarios, Safety Force Field can take into account both braking and steering constraints, ostensibly enabling it to identify anomalies arising from both.
The goal of these tools, safety, might seem well and fine on its face. But as Cook points out, there are a lot of sociological questions around safety, as well as who gets to define what's safe. Underlining the problem, 65% of employees can't explain how AI model decisions or predictions are made at their companies, according to FICO, much less whether they're safe.
"As a society, we sort of collectively agree on what levels of risk we're willing to tolerate, and sometimes we write those into law. We expect a certain number of vehicular collisions annually. But when it comes to AI, we might expect to raise those standards higher, since these are systems we have full control over, unlike people," Cook said. "[An] important question for me with safety frameworks is: at what point would people be willing to say, 'Okay, we can't make technology X safe, we shouldn't continue'? It's great to show that you're concerned for safety, but I think that concern has to come with an acceptance that some things may just not be possible to do in a way that is safe and acceptable for everyone."
For example, while today's self-driving and ADAS systems are arguably safer than human drivers, they still make mistakes, as evidenced by Tesla's recent woes. Cook believes that if AI companies were held more legally and financially responsible for their products' actions, the industry would take a different approach to evaluating their systems' safety instead of trying to bandage the issues after the fact.
"I don't think the search for AI safety is bad, but I do feel that there might be some uncomfortable truths hiding there for people who believe AI is going to take over every aspect of our world," Cook said. "We understand that people make mistakes, and we have 10,000 years of society and culture that has helped us process what to do when someone does something wrong, [but] we aren't really prepared, as a society, for AI failing us in this way, or at this scale."
Nassim Parvin, an associate professor of digital media at Georgia Tech, agrees that the discourse around self-driving cars especially has been overly optimistic. She argues that enthusiasm is obscuring proponents' ability to see what's at stake, and that a genuine, caring concern for the lives lost in car accidents could serve as a starting point to rethink mobility.
"[AI system design should] transcend false binary trade-offs and recognize the systemic biases and power structures that make certain groups more vulnerable than others," she wrote. "The term 'unintended consequences' is a barrier to, rather than a facilitator of, vital discussions about [system] design. The overemphasis on intent forecloses consideration of the complexity of social systems in such a way as to lead to quick technical fixes."
It's unlikely that a single tool will ever be able to prevent unsafe decision-making in AI systems. In its blog post introducing Safety Gym, researchers at OpenAI acknowledged that the hardest scenarios in the toolkit were likely too challenging for techniques to resolve at the time. Aside from technological innovations, it's the assertion of researchers like Manoj Saxena, who chairs the Responsible AI Institute, a consultancy firm, that product owners, risk assessors, and users must be engaged in conversations about AI's potential flaws so that processes can be created that expose, test, and mitigate the flaws.
"[Stakeholders need to] ensure that potential biases are understood and that the data being sourced to feed to these models is representative of various populations that the AI will impact," Saxena told VentureBeat in a recent interview. "[They also need to] invest more to ensure members who are designing the systems are diverse."
AI Adoption Skyrocketed Over the Last 18 Months – Harvard Business Review
Posted: at 5:36 pm
When it comes to digital transformation, the Covid crisis has provided important lessons for business leaders. Among the most compelling lessons is the potential that data analytics and artificial intelligence bring to the table.
During the pandemic, for example, Frito-Lay ramped up its digital and data-driven initiatives, compressing five years' worth of digital plans into six months. "Launching a direct-to-consumer business was always on our roadmap, but we certainly hadn't planned on launching it in 30 days in the middle of a pandemic," says Michael Lindsey, chief growth officer at Frito-Lay. "The pandemic inspired our teams to move faster than we would have dreamed possible."
The crisis accelerated the adoption of analytics and AI, and this momentum will continue into the 2020s, surveys show. Fifty-two percent of companies accelerated their AI adoption plans because of the Covid crisis, a study by PwC finds. Just about all, 86%, say that AI is becoming a mainstream technology at their company in 2021. Harris Poll, working with Appen, found that 55% of companies reported they accelerated their AI strategy in 2020 due to Covid, and 67% expect to further accelerate their AI strategy in 2021.
Will companies be able to keep up this heightened pace of digital and data-driven innovation as the world emerges from Covid? In the wake of the crisis, close to three-quarters of business leaders (72%) feel positive about the role that AI will play in the future, a survey by The AI Journal finds. Most executives (74%) not only anticipate that AI will make business processes more efficient, but also that it will help create new business models (55%) and enable the creation of new products and services (54%).
AI and analytics became critical to enterprises as they reacted to the shifts in working arrangements and consumer purchasing brought on by the Covid crisis. And as adoption of these technologies continues apace, enterprises will be drawing on lessons learned over the past year and a half that will guide their efforts well into the decade ahead:
Business leaders understand firsthand the power and potential of analytics and AI for their businesses. "Since Covid hit, CEOs are now leaning in, asking how they can take advantage of data," says Arnab Chakraborty, global managing director at Accenture. "They want to understand how to get a better sense of their customers. They want to create more agility in their supply chains and distribution networks. They want to start creating new business models powered by data. They know they need to build a data foundation, taking all of the data sets, putting them into an insights engine using all the algorithms, and powering insights solutions that can help them optimize their businesses, create more agility in business processes, know their customers, and activate new revenue channels."
AI is instrumental in alleviating skills shortages. Industries flattened by the Covid crisis, such as travel, hospitality, and other services, need resources to gear up to meet pent-up demand. Across industries, skills shortages have arisen in many fields, from truck drivers to warehouse workers to restaurant workers. Ironically, there is an increasingly pressing need to develop AI and analytics to compensate for shortages of AI development skills. Cognizant's latest quarterly Jobs of the Future Index predicts a strong recovery for the U.S. jobs market this coming year, especially in jobs involving technology; AI, algorithm, and automation jobs saw a 28% gain over the previous quarter.
"AI is a critical ingredient to creating solutions to what is likely to be ongoing, ever-changing skills needs and training," agrees Rob Jekielek, managing director with Harris Poll. "AI is already beginning to help fill skills shortages of the existing workforce through career transition support tools. AI is also helping employees do their existing and evolving jobs better and faster using digital assistants and in-house AI-driven training programs."
AI will also help alleviate skills shortages by augmenting support activities. "Given how more and more products are either digital products or other kinds of technology products with user interfaces, there is a growing need for support personnel," says Dr. Rebecca Parsons, chief technology officer at Thoughtworks. "Many straightforward questions can be addressed with a suitably trained chatbot, alleviating at least some pressure. Similarly, there are natural language processing systems that can do simple document scanning, often for more canned phrases."
AI and analytics are boosting productivity. Over the years, any productivity increases associated with technology adoption have been questionable. However, AI and analytics may finally be delivering on this long-sought promise. "Driven by advances in digital technologies, such as artificial intelligence, productivity growth is now headed up," according to Erik Brynjolfsson and Georgios Petropoulos, writing in MIT Technology Review. "The development of machine learning algorithms combined with a large decline in prices for data storage and improvements in computing power has allowed firms to address challenges from vision and speech to prediction and diagnosis. The fast-growing cloud computing market has made these innovations accessible to smaller firms."
AI and analytics are delivering new products and services. Analytics and AI have helped to step up the pace of innovation at companies such as Frito-Lay. For example, during the pandemic, the food producer delivered an e-commerce platform, Snacks.com, "our first foray into the direct-to-consumer business," in just 30 days, says Lindsey. The company is now employing analytics to leverage its shopper and outlet data to predict store openings, shifts in demand due to return to work, and changes in tastes "that are allowing us to reset the product offerings all the way down to the store level within a particular zip code," he adds.
AI accentuates corporate values. "The way we develop AI reflects our company culture; we state our approach in two words: responsible growth," says Sumeet Chabria, global chief operating officer for technology and operations at Bank of America. "We are in the trust business. We believe one of the key elements of our growth, the use of technology, data, and artificial intelligence, must be deployed responsibly. As a part of that, our strategy around AI is Responsible AI. That means being customer led: it starts with what the customer needs and the consequence of your solution to the customer. It also means being process led: how does AI fit into your business process? Did the process dictate the right solution?"
AI and analytics are addressing supply chain issues. There are lingering effects as the economy kicks back into high gear after the Covid crisis: items from semiconductors to lumber have been in short supply due to disruptions caused by the crisis. Analytics and AI help companies predict, prepare for, and see issues that may disrupt their ability to deliver products and services. These are still the early days for AI-driven supply chains, a survey released by the American Center for Productivity and Quality finds: only 13% of executives foresee a major impact from AI or cognitive computing over the coming year, while another 17% predict a moderate impact. Businesses are still relying on manual methods to monitor their supply chains; those that adopt AI in the coming months and years will achieve significant competitive differentiation.
"Supply chain planning, addressing disruptions in the supply chain, can benefit in two ways," says Parsons. "The first is for the easy problems to be handled by the AI system. This frees up the human to address the more complex supply chain problems. However, the AI system can also provide support even in the more complex cases by, for example, providing possible solutions to consider or speeding up an analysis of possible solutions by completing a solution from a proposal on a specific part of the problem."
AI is fueling startups, while helping companies manage disruption. Startups are targeting established industries by employing the latest data-driven technologies to enter new markets with new solutions. "AI and analytics present a tremendous opportunity for both startups and established companies," says Chakraborty. "Startups cannot do AI standalone. They can only solve a part of the puzzle. This is where collaboration becomes very important. The bigger organizations have an opportunity to embrace those startups, and make them part of their ecosystem."
At the same time, AI is helping established companies compete with startups "through the ability to test and iterate on potential opportunities far more rapidly and at far broader scale," says Jekielek. "This enables established companies to both identify high-potential opportunity areas more quickly as well as determine if it makes most sense to compete or, especially if figured out early, acquire."
The coming boom in business growth and innovation will be a data-driven one. As the world eventually emerges from the other side of the Covid crisis, there will be opportunities for entrepreneurs, business leaders and innovators to build value and launch new ventures that can be rapidly re-configured and re-aligned as customer needs change. Next-generation technologies, artificial intelligence and analytics among them, will play a key role in boosting business innovation and advancement in this environment, as well as spur new business models.
When Using AI in Enterprises, Balancing Innovation and Privacy Is Critical – EnterpriseAI
Posted: at 5:36 pm
While the U.S. is making strides in the advancement of AI use cases across industries, we have a long way to go before AI technologies are commonplace and truly ingrained in our daily life.
What are the missing pieces? Better data access and improved data sharing.
As our ability to address point applications and solutions with AI technology matures, we will need a greater ability to share data and insights while being able to draw conclusions across problem domains. Cooperation between individuals from government, research, higher education and the private sector to make greater data sharing feasible will drive acceleration of new use cases while balancing the need for data privacy.
This sounds simple enough in theory. Data privacy and cybersecurity are top of mind for everybody, and prioritizing them goes hand in hand with any technology innovation nowadays, including AI. The reality is that data privacy and data sharing are rightfully sensitive subjects. This, coupled with widespread government mistrust, is a legitimate hurdle that decision makers must evaluate to effectively provide access and take our AI capabilities to the next level.
In the last five to 10 years, China has made leaps and bounds forward in the AI marketplace through the establishment of its Next Generation Artificial Intelligence Development Plan. While our ecosystems differ, the progress China has made in a short time shows that access to tremendous volumes of datasets is an advantage in AI advancement. It is also triggering a domino effect.
Government action in the U.S. is rampant. In June, President Biden established the National AI Research Task Force, which follows former President Trump's 2019 executive order to fast-track the development and regulation of AI: signs that American leaders are eager to dominate the race.
While the benefits of AI are clear, we must acknowledge consumer expectations as the technology progresses. Data around new and emerging use cases shows that the more consumers are exposed to the benefits of AI in their daily lives, the more likely they are to value its advancements.
According to new data from the Deloitte AI Institute and the U.S. Chamber of Commerce's Technology Engagement Center, 65 percent of survey respondents indicated that consumers would gain confidence in AI as the pace of discovery of new medicines, materials and other technologies accelerated through the use of AI. Respondents were also positive about the impact government investment could have in accelerating AI growth. The conundrum is that the technology remains hard to understand and relate to for many consumers.
While technology literacy in general has progressed thanks to the internet and digital connectivity, general awareness around data privacy, digital security and how data is used in AI remains weak. So, as greater demands are put on the collection, integration and sharing of consumer data, better transparency, education and standards around how data is collected, shared and used must be prioritized simultaneously. With this careful balance we could accelerate innovation at a rapid pace.
The data speaks for itself. The more of it we have, the stronger the results. Just as supply chain management of raw materials is critical in manufacturing, data supply chain management is critical in AI. One area that many organizations prioritize when implementing AI technology is applying more rigorous methods around data provenance and organization. Raw collected data is often transformed, pre-processed, summarized or aggregated at multiple stages in the data pipeline, complicating efforts to track and understand the history and origin of inputs to AI training. The quality and fit of the resultant models, that is, their ability to make accurate decisions, is primarily a function of the corpus of data they were trained on, so it is imperative to identify what datasets were used and where they originated.
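One lightweight way to make data provenance concrete is to carry a lineage record alongside every dataset as it moves through the pipeline. The sketch below is illustrative; the class and field names are hypothetical, and production systems use dedicated lineage tooling.

```python
# Each transformation appends a provenance record, so any trained model can be
# traced back to its source datasets and the steps that shaped them.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Dataset:
    name: str
    rows: list
    lineage: list = field(default_factory=list)  # ordered provenance records

    def transform(self, step: str, fn: Callable[[list], list]) -> "Dataset":
        out = Dataset(self.name, fn(self.rows), list(self.lineage))
        out.lineage.append({
            "step": step,
            "at": datetime.now(timezone.utc).isoformat(),
            "rows_in": len(self.rows),
            "rows_out": len(out.rows),
        })
        return out

raw = Dataset("loan_applications_2021", [{"income": 50_000}, {"income": None}])
clean = raw.transform("drop_missing_income",
                      lambda rs: [r for r in rs if r["income"] is not None])
for record in clean.lineage:
    print(record)  # the audit trail between raw collection and model training
```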
Datasets must be broad and show enough examples and variations for models to be correctly trained on. When they are not, the consequences can be severe. For instance, in the absence of sufficient datasets, AI-based face recognition models have reinforced racial profiling in some cases and AI algorithms for healthcare risk predictions have left minorities with less access to critical care.
With so much on the line, diverse data with strong data supply chain management is important, but there are limits to how much data a single company can collect. Enter the challenges of data sharing, data privacy and the issue of which information individuals are willing to hand over. We are seeing this play out through medical applications of AI, e.g., radiology images and medical records, and in other aspects of day-to-day life, from self-driving cars to robotics.
For many, granting access to personal data is more appealing if the purpose is to advance potentially life-saving technology, versus use cases that may appear more leisurely. This makes it critical that leading AI advancements prioritize the use cases that consumers deem most valuable, while remaining transparent about how data is being processed and implemented.
Two recent developments, the National AI Research Task Force and the NYC Cyber Attack Defense Center, are positive steps forward. While AI organizations and leaders will continue to drive innovation, forming these groups could be the driver in bringing AI to the forefront of technology advancement in the U.S. The challenge will be whether the action that they propose is impressive enough to consumers and outweighs privacy concerns and government mistrust.
Advancements in AI are driving insights and innovation across industries. As AI leaders it is up to us to continue the momentum and collaborate to accelerate AI innovation safely. For us to succeed, industry leaders must prioritize privacy and security around data collection and custodianship, create transparency around data management practices and invest in education and training to gain public trust.
The inner workings of AI technology are not as discernible as those of most popular applications and will remain that way for some time, but how data is collected and used must not be so hard for consumers to see and understand.
About the Author
Rob Lee of Pure Storage
Rob Lee is the Chief Technology Officer at Pure Storage, where he is focused on global technology strategy, and identifying new innovation and market expansion opportunities for the company. He joined Pure in 2013 after 12 years at Oracle Corp. He serves on the board of directors for Bay Area Underwater Explorers and Cordell Marine Sanctuary Foundation. Lee earned a bachelor's degree and a master's degree in electrical engineering and computer science from the Massachusetts Institute of Technology.
‘Pre-crime’ software and the limits of AI – Resilience
Posted: at 5:36 pm
The Michigan State Police (MSP) has acquired software that will allow the law enforcement agency to help predict violence and unrest, according to a story published by The Intercept.
I could not help but be reminded of the film Minority Report. In that film, three exceptionally talented psychics are used to predict crimes before they happen and apprehend the would-be perpetrators. These not-yet perpetrators are guilty of what is called pre-crime, and they are sentenced to live in a very nice virtual reality where they will not be able to hurt others.
The public's acceptance of the fictional pre-crime system is based on good numbers: it has eliminated all premeditated murders for the past six years in Washington, D.C., where it has been implemented. Which goes to prove (fictionally, of course) that if you lock up enough people, even ones who have never committed a crime, crime will go down.
How does the MSP software work? Let me quote again from The Intercept:
The software, put out by a Wyoming company called ShadowDragon, allows police to suck in data from social media and other internet sources, including Amazon, dating apps, and the dark web, so they can identify persons of interest and map out their networks during investigations. By providing powerful searches of more than 120 different online platforms and a decade's worth of archives, the company claims to speed up profiling work from months to minutes.
Simply reclassify all of your online friends, connections and followers as accomplices and you'll start to get a feel for what this software and other pieces of software mentioned in the article can do.
The ShadowDragon software in concert with other similar platforms and companion software begins to look like what the article calls algorithmic crime fighting. Here is the main problem with this type of thinking about crime fighting and the general hoopla over artificial intelligence (AI): Both assume that human behavior and experience can be captured in lines of computer code. In fact, at their most audacious, the biggest boosters of AI claim that it can and will learn the way humans learn and exceed our capabilities.
Now, computers do already exceed humans in certain ways. They are much faster at calculations and can do very complex ones far more quickly than humans can working with pencil and paper, or even a calculator. Also, computers and their machine and robotic extensions don't get tired. They can do often complex repetitive tasks with extraordinary accuracy and speed.
What they cannot do is exhibit the totality of how humans experience and interpret the world. And, this is precisely because that experience cannot be translated into lines of code. In fact, characterizing human experience is such a vast and various endeavor that it fills libraries across the world with literature, history, philosophy and the sciences (biology, chemistry and physics) using the far more subtle construct of natural language, and still we are nowhere near done describing the human experience.
It is the imprecision of natural language which makes it useful. It constantly connotes rather than merely denotes. With every word and sentence it offers many associations. The best language opens paths of discovery rather than closing them. Natural language is both a product of us humans and of our surroundings. It is a cooperative, open-ended system.
And yet, natural language and its far more limited subset, computer code, are not reality, but only a faint representation of it. As the father of general semantics, Alfred Korzybski, so aptly put it, "The map is not the territory."
Apart from the obvious dangers of the MSP's algorithmic crime fighting, such as racial and ethnic profiling and gender bias, there is the difficulty of explaining why information picked up by the algorithm is relevant to a case. If there is human intervention to determine relevance, then that moves the system away from the algorithm.
But it is the act of hoovering up so much irrelevant information that risks the possibility of creating a pattern that is compelling and seemingly real, but which may just be an artifact of having so much data. This becomes all the more troublesome when law enforcement is trying to predict unrest and crimes, something the MSP says it doesn't do, even though its systems have that capability.
The temptation will grow to use such systems to create better order in society by focusing on the troublemakers identified by these systems. Societies have always done some form of that through their institutions of policing and adjudication. Now, companies seeking to profit from their ability to find the unruly elements of society will have every incentive to write algorithms that show the troublemakers to be a larger segment of society than we ever thought before.
We are being put on the same road in our policing and courts that we've just traversed in the so-called War on Terror, which has killed a lot of innocent people and made a large number of defense and security contractors rich, but which has left us with a world that is arguably more unsafe than it was before.
To err is human. But to correct is also human, especially based on intangibles (intuitions, hunches, glimpses of perception) which give us humans a unique ability to see beyond the algorithmically defined facts and even beyond those facts presented to our senses in the conventional way. When a machine fails, not in a trivial way that merely requires checking and correcting data, but in a fundamental way that misconstrues the situation, it has no unconscious or intuitive mind to sense that something is wrong. The AI specialists have a term for this. They say that the machine lacks common sense.
The AI boosters will respond, of course, that humans can remain in the loop. But to admit this is to admit that the future of AI is much more limited than portrayed and that as with any tool, its usefulness all depends on how the tool is used and who is using it.
It is worth noting that the title of the film mentioned at the outset, Minority Report, refers to a disagreement among the psychics; that is, one of them issues a minority report which conflicts with the others. It turns out that for the characters in this movie the future isn't so clear after all, even to the sensitive minds of the psychics.
Nothing is so clear and certain in the future or even in the present that we can allay all doubts. And, when it comes to determining what is actually going on, context is everything. But no amount of data mining will provide us with the true context in which the subject of an algorithmic inquiry lives. For that we need people. And, even then the knowledge of the authorities will be partial.
If only the makers of this software would insert a disclaimer in every report saying that users should look upon the information provided with skepticism and thoroughly interrogate it. But then, how many suites of software would these software makers sell with that caveat prominently displayed on their products?
Image: "Roughed up by Robocop" (disassembled robot, 2013) by Steve Jurvetson, via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:Roughed_up_by_Robocop_(9687272347).jpg
Amazon delivery staff ‘denied bonus’ pay by AI cameras misjudging their driving – The Register
Posted: at 5:36 pm
In brief: AI cameras inside Amazon's delivery trucks are denying drivers' bonus pay for errors they shouldn't be blamed for, it's reported.
The e-commerce giant installed the equipment in its vehicles earlier this year. The devices watch the road and the driver, and send out audio alerts if they don't like the way their humans are driving.
One man in Los Angeles told Vice that when he gets cut off by other cars, the machine would sense the other vehicle suddenly right in front of him and squawk: "Maintain safe distance!" Logs of the audio alerts and camera footage are relayed back to Amazon, and it automatically decides, from their performance on the road, whether drivers deserve to get bonuses.
These workers, who are employed via contractors, claim they are unfairly denied extra pay for errors that were beyond their control, or for things that don't necessarily mean they're driving recklessly, such as tuning the radio or glancing at a side mirror.
"When I get my score each week, I ask my company to tell me what I did wrong," the unnamed driver said. "My [delivery company] will email Amazon and cc me, and say, 'Hey, we have [drivers] who'd like to see the photos flagged as events,' but they don't respond. There's no room for discussion around the possibility that maybe the camera's data isn't clean."
An Amazon spokesperson said alerts can be contested and are reviewed by staff at the internet giant to weed out incorrect judgments by the software.
Deepfakes aren't all bad. The technology is helping trans people feel comfortable communicating in gamer communities by changing the sound of their voice with AI algorithms.
It can be difficult for trans gamers to speak in group chats when the pitch of their voice doesn't match their gender identity; typically, some want to sound more feminine or masculine.
A startup called Modulate is helping them generate new voices, or so-called "voice skins," by using machine-learning software that automatically adjusts the sound of their speech. Some trans people have started testing the algorithms but haven't yet used them in the wild, according to Wired.
"We realized many people don't feel they can participate in online communities because their voice puts them at greater risk," Mike Pappas, Modulate's CEO, said. He claimed the software has only a 15-millisecond lag when transforming someone's speech in real time to a different pitch.
Early testers said they were impressed with the software's capabilities, although Modulate declined to provide a live demo for the magazine.
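The underlying signal-processing building block here is pitch shifting. The snippet below is a minimal offline sketch using the open-source librosa library; the filenames are placeholders, and Modulate's real-time, 15-millisecond system is far more sophisticated than this batch example.

```python
# Offline pitch-shift sketch: raise a recorded voice by four semitones.
import librosa
import soundfile as sf

y, sr = librosa.load("voice.wav", sr=None)                  # placeholder input
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)  # +4 semitones
sf.write("voice_shifted.wav", shifted, sr)
```

A plain pitch shift leaves much of the original timbre intact, which is one reason products like Modulate's rely on machine-learned voice conversion rather than simple DSP alone.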
The British government has promised to invest more in the AI industry and review semiconductor supply chains to make sure it has enough computational resources to support the growth of the technology.
"This is how we will prepare the UK for the next ten years, and is built on three assumptions about the coming decade," the report's summary began:
"1. Invest and plan for the long-term needs of the AI ecosystem to continue our leadership as a science and AI superpower;
"2. Support the transition to an AI-enabled economy, capturing the benefits of innovation in the UK, and ensuring AI benefits all sectors and regions;
"3. Ensure the UK gets the national and international governance of AI technologies right to encourage innovation, investment, and protect the public and our fundamental values.
The first point involves funding more scholarships to help more people obtain postgraduate education in machine learning and data science. Researchers are encouraged to collaborate with others from European and US institutions.
Other parts of the plan, however, are a little more wishy-washy. There aren't strict actions or policies in some parts; for example, a lot of the inter-agency collaboration involves formulating yet more reports to understand strategic goals in supporting the AI economy or algorithmic transparency.
Aurora, the self-driving car software biz, has started testing autonomous heavy-duty Class 8 trucks, capable of hauling over 14,969 kilograms, with shipping giant FedEx.

The trucks will be monitored by a safety driver as they make the 500-mile round trip between Dallas and Houston, Texas, along the I-45 interstate highway, Aurora announced this month. The company is aiming to operate fleets of fully autonomous trucks without the help of safety drivers by 2023.
UN calls for moratorium on AI systems that pose serious risks to right to privacy and other human rights – JD Supra
Posted: at 5:36 pm
On 15 September 2021, the UN Office of the High Commissioner for Human Rights (OHCHR) published a report, The right to privacy in the digital age, that analyses how artificial intelligence (AI), through the use of profiling, automated decision-making and machine learning technologies, affects people's fundamental rights and freedoms, such as the right to privacy, the right to health and freedom of expression.
The OHCHR urges a moratorium on the sale and use of AI systems that pose a serious risk to human rights, and of remote biometric recognition systems in public spaces, until adequate safeguards are put in place. It also recommends banning AI applications that cannot ensure compliance with international human rights law.
While the report recognised that AI is instrumental in developing innovative solutions, it stressed the effects of the ubiquity of AI on people's fundamental rights. The report looks in detail at the use of AI solutions in key public and private sectors, for example in national security, criminal justice, employment and the management of information online.
In this respect, the OHCHR highlighted a number of risks of AI that need to be addressed by states and businesses.
The report recommends addressing these risks using a comprehensive human rights-based approach and outlines possible ways to address the fundamental problems associated with AI, including the implementation of a robust legislative and regulatory framework, which prevents and mitigates any adverse effects of AI on human rights. States should ensure that any permitted interference with the right to privacy and other human rights through the use of AI does not impair the essence of these rights and is stipulated by law, pursues a legitimate purpose, is necessary and proportionate, and requires adequate justification of AI-supported decisions. The OHCHR also recommends that public and private entities systematically conduct human rights due diligence throughout the entire life cycle of the AI systems (including a human rights impact assessment), increase transparency about the use of AI and actively combat discrimination.
We are sleepwalking into AI-augmented work – VentureBeat
Posted: at 5:36 pm
A recent New York Times article concludes that new AI-powered automation tools, such as Codex for software developers, will not eliminate jobs but simply be a welcome aid that augments programmer productivity. This is consistent with the argument we're increasingly hearing: that people and AI have different strengths and there will be appropriate roles for each.
As discussed in a Harvard Business Review story: "AI-based machines are fast, more accurate, and consistently rational, but they aren't intuitive, emotional, or culturally sensitive." The belief is that AI plus humans is something of a centaur, greater than either one operating alone.
This idea of humans plus AI producing better outcomes has become a tenet of faith in technology. Everyone talks about humans being freed up to perform higher-level functions, but no one seems to know just what those higher-level functions are, how they translate into real work and jobs, or the number of people needed to perform them.
A corollary of this augmented-workforce narrative is that not only will AI-augmented work enable people to pursue a higher level of abstract thinking, it will, according to some, also lift all of society to a higher standard of living. This is certainly an optimistic vision, and we can hope for that. However, this could also be a story imbued with magical thinking, with the true end-game being fully automated work.
Don't get me wrong; there is some evidence to support the view that AI will help us work rather than take our jobs. For example, AI lab DeepMind is designing new chess systems intended to work in tandem with humans rather than opposed to them.
And Kai-Fu Lee, the "Oracle of AI," also buys into this promise. In his new book, AI 2041: Ten Visions for Our Future, he argues that repetitive tasks, from stacking shelves to crunching data, will be done by machines, freeing workers for more creative tasks. Forrester Research has likewise articulated that AI deployment enables people to better use their creative skills.
But, of course, some people are more creative than others, meaning that not everyone would benefit from AI-augmented work to the same degree. Which in turn reinforces a concern that AI-fueled automation, even in its augmented work capacity, could widen already existing income disparities.
One problem with the AI-augmented workforce promise is that it tells us AI will only take on the repetitive work we don't want to do. But not all work being outsourced to AI is routine or boring.
Look no further than the role of the semiconductor chip architect. This is a highly sophisticated profession, an advanced application of electrical engineering in arguably one of the most complex industries. If ever there was a job that might be thought of as immune from AI, this would have been a strong candidate. Yet recent advances from Google and Synopsys (among others using reinforcement learning neural network software) have shown the ability to do in hours what often required a team of engineers months to achieve.
One ever-faithful tech watcher still argued that the algorithms will merely optimize and accelerate the time-intensive parts of the design process so that designers can focus on the crucial calls that require higher-level decision making.
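For a sense of what these placement tools are optimizing, here is a minimal toy sketch. It is not Google's or Synopsys's method; the production systems use deep reinforcement learning, while this stand-in uses simple hill climbing over a hypothetical netlist to shrink total wirelength, which is the basic shape of the problem.

```python
# Toy chip-placement sketch: hill-climb block positions on a grid to
# minimize total wirelength. A stand-in illustration only; real tools
# use deep reinforcement learning and enforce many more constraints.
import random

GRID = 10        # placement grid: GRID x GRID cells
NUM_BLOCKS = 8   # circuit blocks to place
# Hypothetical netlist: pairs of blocks that must be wired together.
NETS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0), (0, 4), (2, 6)]

def wirelength(pos):
    """Total Manhattan distance across all nets -- the cost to minimize."""
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in NETS)

# Random initial placement: block index -> (x, y) grid cell.
# (Blocks may overlap in this toy; a real placer enforces legality.)
pos = {b: (random.randrange(GRID), random.randrange(GRID)) for b in range(NUM_BLOCKS)}
cost = wirelength(pos)

for _ in range(20000):
    b = random.randrange(NUM_BLOCKS)
    old = pos[b]
    pos[b] = (random.randrange(GRID), random.randrange(GRID))  # propose a move
    new_cost = wirelength(pos)
    if new_cost <= cost:
        cost = new_cost   # keep improving (or neutral) moves
    else:
        pos[b] = old      # revert worsening moves

print("final wirelength:", cost)
```

The learning-based systems essentially replace these blind random moves with a trained policy that proposes good placements directly, which is what compresses months of engineering into hours.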
More than likely, the current perception of work augmented by AI reflects the current state of the technology, not an accurate view of a future in which automation will be far more advanced. We first saw the potential of neural networks a decade ago, for example, and it took several years for that technology to develop to the point where it had practical advantages for consumers and businesses. Fueled in part by the pandemic, AI tech is now being widely implemented. Even massage therapists should take note, as a robot masseuse can now deliver a deep tissue massage. Yet these are still early days for AI.
[Image: EMMA from AiTreat, a robot that uses artificial intelligence to deliver massages. Source: CNN]
AI advances are being led by improvements in both hardware and software. The hardware side is driven by Moore's Law, the observation that the number of transistors on a chip doubles roughly every two years, bringing corresponding gains in performance and power efficiency (and driving down the cost of computing). This principle has been credited with all manner of electronic advances over the last several decades. As a recent IEEE Spectrum article noted: "The impact of Moore's Law on modern life can't be overstated. We can't take a plane ride, make a call, or even turn on our dishwashers without encountering its effects. Without it, we would not have found the Higgs boson or created the Internet." Or have a supercomputer in your purse or pocket.
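The doubling claim is easy to quantify. A back-of-the-envelope calculation, assuming an idealized two-year doubling from the roughly 2,300 transistors of Intel's 1971-era 4004, lands within an order of magnitude of today's largest chips:

```python
# Back-of-the-envelope Moore's Law arithmetic, assuming an idealized
# doubling of transistor counts every two years. Baseline: the Intel
# 4004 (1971), commonly cited at roughly 2,300 transistors.
BASE_YEAR, BASE_TRANSISTORS = 1971, 2_300

for year in (1991, 2011, 2021):
    doublings = (year - BASE_YEAR) / 2          # one doubling per 2 years
    count = BASE_TRANSISTORS * 2 ** doublings
    print(f"{year}: ~{count:,.0f} transistors ({2 ** doublings:,.0f}x growth)")
```

Fifty years is 25 doublings, or about a 33-million-fold increase, which is why a phone today outclasses the supercomputers of a few decades ago.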
There are reasons to think that Moore's-Law-driven improvements in computing are nearing an end. But advanced engineering, ranging from chiplets to 3D chip packaging, promises to keep the gains coming, at least for a while. These and other semiconductor design improvements have led one chip manufacturer to promise a 1,000x performance improvement by 2025.
The expected improvements in AI software may be equally impressive. GPT-3, the third iteration of the Generative Pre-trained Transformer from OpenAI, is a neural network model with 175 billion parameters. The system has proven capable of generating coherent prose from a text prompt. That is what it was designed to do, but it turns out it can also produce other kinds of output, including computer code and even images. Moreover, while the belief is that AI will help people be more creative, it may already be capable of creativity on its own.
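The prompt-in, text-out pattern is easy to demonstrate. GPT-3 itself is reachable only through OpenAI's hosted API, but a rough local sketch using the Hugging Face transformers library, with the much smaller GPT-2 standing in, looks like this:

```python
# Prompt-in, text-out sketch of the GPT family. GPT-2 -- a far smaller,
# openly downloadable predecessor -- stands in for GPT-3 here.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial intelligence will change work by",
    max_length=60,            # total length in tokens, prompt included
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

GPT-3 works the same way in principle; the qualitative leap comes almost entirely from scale, which is why parameter counts dominate the discussion below.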
At its launch in May 2020, GPT-3 was the largest neural network ever introduced, and it remains among the largest dense neural nets, exceeded only by Wu Dao 2.0 in China. (At 1.75 trillion parameters, Wu Dao 2.0 is another GPT-like language model and probably the most powerful neural network yet created.)
Some expect GPT-4 to grow as well, to as many as a trillion parameters. However, OpenAI CEO Sam Altman has said that it will not be larger than GPT-3 but will be far more efficient thanks to better data, algorithms, and fine-tuning. Altman has also alluded to a future GPT-5. The point is that neural networks have a long way to run in size and sophistication. We are indeed in the midst of an age of AI acceleration.
In the new book Rule of the Robots: How Artificial Intelligence Will Transform Everything, author Martin Ford notes that nearly every technology startup is now, to some degree, investing in AI, and that companies large and small in other industries are beginning to deploy the technology. The pace of innovation will only accelerate as capital continues to pour into AI development. Clearly, whatever we are seeing now in the way of AI-powered automation, including the belief that AI will help us work rather than take our jobs, is but an early stage of whatever is still to come. As for what is coming, that remains the realm of speculative fiction.
In Burn-In: A Novel of the Real Robotic Revolution, a Yale-educated lawyer is among those affected when his firm replaces 80% of its legal staff with machine learning software. This could happen in the near future. The remaining 20% were indeed augmented by the AI, but the 80% had to find other work. In his case, he winds up doing gig work as an online personal assistant to the wealthy. Today, the startup Yohana is working to realize a variation of this vision. The company is initially offering a blend of human and AI services, starting with a living, breathing assistant that draws on data to tackle subscribers' to-do lists. It will be telling to see whether these assistants become the secretaries of yore, but wielding AI, or displaced cognitive workers.
The AI-driven transition to a largely automated world will take time, perhaps a few decades. It will bring many changes, some of them highly disruptive, and the adjustments will not be easy. It is tempting to think that this will ultimately enrich the quality of human life. After all, as Aristotle said: "When looms weave by themselves, man's slavery will end." But embracing the AI-augmented-work concept as currently articulated could blind us to the potential risks of job loss. Kate Crawford, a scholar focused on the social and political implications of technology, believes AI is the most profound story of our time and that a lot of people are sleepwalking into it.
We all need a clear-eyed understanding of the growing potential for disruption, and we need to prepare as best we can, largely by acquiring the skills most likely to be needed in the coming era. Companies need to do their part by providing skills training, and retraining will increasingly need to be a near-continuous process as the pace of technological change accelerates. Government needs to develop public policies that direct the market forces driving automation toward positive outcomes for all, while also preparing a growing social safety net that could include universal basic income.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.
Posted in Ai
Comments Off on We are sleepwalking into AI-augmented work – VentureBeat
Feeding The Cognitive Enterprise: Nestlé Pushes AI, Predictive Maintenance and Robotics – IoT World Today
Posted: at 5:36 pm
Day One of Informa Tech's AI Summit in London saw Nestlé's global product director for new generation technologies provide an in-depth look at AI's potential to transform the multinational corporation.
Carolina Pinart, the lead for Nestlé's AI program, said artificial intelligence was driving progress toward the corporation's three key strategic objectives: enhanced efficiency, digitized operations and sustainability. The sustainability target supports Nestlé's goal of net-zero carbon emissions by 2050.
"If we look into Nestlé's progress last year, we made really solid progress on our structural savings program across all areas of manufacturing, procurement and administration, with massive opportunities for automation and AI," said Pinart.
Pinart stressed that all three initiatives had enabled Nestlé to leverage AI, predictive maintenance and robotics to support factory automation and customization on the assembly line.
Nestlé is also striving to expand the flow, accessibility and utility of real-time data as it is collected from operational technology networks, both in supply chain management and procurement operations.
"These efforts support our drive to enhance consumer and customer centricity, which is a paramount objective for our company, as well as manufacturing flexibility, agility, and transparency and traceability across the supply chain," Pinart explained.
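The article does not detail Nestlé's models, but predictive maintenance of the kind described typically means learning failure patterns from sensor streams. Here is a minimal sketch on synthetic data, with every feature name and threshold hypothetical rather than anything Nestlé actually uses:

```python
# Minimal predictive-maintenance sketch: classify whether a machine is
# likely to fail soon from sensor readings. Fully synthetic data; the
# features and thresholds are hypothetical illustrations only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000
vibration = rng.normal(1.0, 0.3, n)           # mm/s RMS
temperature = rng.normal(60, 8, n)            # degrees C
hours_since_service = rng.uniform(0, 5000, n)

# Synthetic label: failures become likelier with high vibration,
# heat, and time since the last service.
risk = 0.8 * vibration + 0.05 * temperature + 0.0004 * hours_since_service
fails_soon = (risk + rng.normal(0, 0.3, n) > 6.0).astype(int)

X = np.column_stack([vibration, temperature, hours_since_service])
X_train, X_test, y_train, y_test = train_test_split(X, fails_soon, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

A real deployment would replace the synthetic features with live telemetry from the operational technology networks mentioned above and turn the model's risk scores into maintenance work orders.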
Since 2019, Nestlé has sought to transform its business into a "cognitive enterprise," with unbiased machine learning tools used extensively to help automate, respond, react and decide on business outcomes.
The company's mission statement underscores its commitment to use machine learning to derive more value for its employees and customers, as well as for society and its shareholders.
By 2025, Nestlé aims to be fully data-driven and cognitive in its approach. It is currently working to define corporate directives for AI as part of its global program, which also covers topics such as ethics, organization, technology, education and communication.
"Without the global program, we might get there eventually, but certainly not at the speed that we want," Pinart said.
The global program runs alongside multi-year strategic initiatives to automate operations and improve the workforce's experience through data, analytics and other technologies.
"These programs leverage AI and machine learning where appropriate as a toolbox to solve business challenges," Pinart explained.
Pinart believes deployment of AI in Nestlé's global supply chain must start with a strategic analysis of the intended outcomes, followed by work to identify real use cases where AI can be prioritized, deployed and scaled.
Also vital to Nestlé's implementation strategy is identifying the foundational data on which AI will be trained, along with other technical capabilities.
Pinart said: "We are working on creating data assets and having the right platforms in place to really support the strategy and use cases."
"[Another] question is around organization and talent, which varies depending on what we're trying to do with AI. We don't have a single operating model, but the question is: what is the right operating model to run all of this?"
"Finally, governance: AI raises concerns among people. Both consumers and business leaders are employees as well, right? So how do we address those concerns?"
Posted in Ai
Comments Off on Feeding The Cognitive Enterprise: Nestl Pushes AI, Predictive Maintenance and Robotics – IoT World Today
AI Risk Management Framework Is Among Emerging Federal Initiatives on AI – JD Supra
Posted: at 5:36 pm
Artificial intelligence (AI) has drawn significant policy interest for quite some time, and a federal approach is now taking shape. A recent flurry of federal activity on the AI front has emanated in large part from the U.S. Department of Commerce (Commerce), including a move toward development of a risk management framework. The framework in particular may greatly influence how companies and organizations approach AI-related risks, including avoiding bias and promoting accuracy, privacy, and security.
Developing an AI Risk Management Framework
In the National Defense Authorization Act for Fiscal Year 2021 (2021 NDAA), Congress directed the National Institute of Standards and Technology (NIST), which falls under Commerce, to develop a voluntary risk management framework for trustworthy [AI] systems. Following this directive, in late July, NIST issued a Request for Information (RFI) seeking input to help inform the development of what it refers to as the AI Risk Management Framework (AI RMF).
As NIST explained, the AI RMF is intended to help designers, developers, users, and evaluators of AI systems better manage risks across the AI lifecycle and aims to foster the development of innovative approaches to address characteristics of trustworthiness, including accuracy and mitigation of harmful bias. According to NIST, the development of the AI RMF will involve several iterations to encourage robust and continuing engagement and collaboration with interested stakeholders and will include open, public workshops, along with other forms of outreach and feedback.
In the RFI, NIST proposed eight key attributes that the AI RMF should embody.
That directive, along with NIST's RFI outlining the initial attributes and goals of the AI RMF, aligns with NIST's extensive work on similar frameworks for cybersecurity and privacy.
The RFI is just the beginning of the collaborative development process at NIST with respect to the AI RMF. In the near term, NIST is hosting a public workshop on October 19-21, 2021, to share the feedback received in response to the RFI and to discuss plans and goals for the AI RMF.
The National Artificial Intelligence Advisory Committee
On September 8, Secretary of Commerce Gina Raimondo announced that Commerce would establish a National Artificial Intelligence Advisory Committee (NAIAC) pursuant to a separate directive in the 2021 NDAA. The NAIAC, which will also include a subcommittee focused on the use of AI in law enforcement, will provide recommendations to the President on a broad range of AI-related topics, including the United States' competitiveness in the AI space, the state of science related to AI, AI workforce issues, and opportunities for international AI cooperation.
NIST has already issued a Federal Register notice formally calling for nominations to the NAIAC. As this notice explains, the NAIAC will consist of at least nine members representing academia, industry, nonprofits, and federal laboratories, who will be tasked with submitting a report to the President and Congress within one year, and once every three years thereafter. Nominations for the NAIAC are due October 25, 2021.
Looking Ahead
These recent developments coming out of Commerce are by no means the only federal AI actions taken since President Biden assumed office. For example, the Biden Administration launched AI.gov in May to introduce more Americans to the government's activities in the AI space. Similarly, in June, the White House Office of Science and Technology Policy and the National Science Foundation established a National Artificial Intelligence Research Resource Task Force to help expand access to educational tools and resources and spur AI innovation.
Given the responsibilities assigned to it and the actions it has already taken, Commerce, and NIST in particular, appears poised to be central to the federal government's approach to AI. Stakeholders should therefore consider engaging with NIST in its AI workstreams, including by participating in the upcoming workshop.
Link:
AI Risk Management Framework Is Among Emerging Federal Initiatives on AI - JD Supra
Posted in Ai
Comments Off on AI Risk Management Framework Is Among Emerging Federal Initiatives on AI – JD Supra