The Evolutionary Perspective
Daily Archives: September 23, 2019
Posted: September 23, 2019 at 7:44 pm
KT's DL (Deep Learning)-based AI module has been implemented and tested on WeDo's RAID FMS. This AI module, trained with KT Big Data, has shown strong results for fraud detection and prevention, and proved highly efficient and effective for a number of fraud use cases, with very high accuracy. KT and WeDo plan to supply the AI-IRSF (AI-based International Revenue Share Fraud) module with the RAID platform to CSPs (Communication Service Providers) by the end of the year.
AI-IRSF is an AI system that prevents a type of fraud in which attackers hack an IP-PBX (IP telephony exchange) to generate illegal calls to international numbers.
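As an illustration only (KT's actual deep-learning model is proprietary, and all names and thresholds below are hypothetical), the IRSF signature described here — a sudden burst of international calls from a compromised PBX — can be caught even by a crude baseline rule:

```python
def flag_irsf_suspects(todays_intl_calls, baselines, spike_factor=10.0, min_calls=50):
    """Flag accounts whose international call volume today far exceeds their
    historical daily average, the classic footprint of a hacked IP-PBX
    pumping traffic to premium international numbers.

    todays_intl_calls: dict of account_id -> international calls today
    baselines: dict of account_id -> historical mean daily international calls
    """
    suspects = []
    for account, count in todays_intl_calls.items():
        baseline = max(baselines.get(account, 0.0), 1.0)  # avoid zero baselines
        if count >= min_calls and count > spike_factor * baseline:
            suspects.append(account)
    return suspects

print(flag_irsf_suspects({"pbx-1": 500, "pbx-2": 3},
                         {"pbx-1": 5.0, "pbx-2": 2.0}))
```

A learned model replaces the hand-tuned `spike_factor` with patterns extracted from labeled fraud cases, which is what lets it generalize across many fraud use cases at once.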
With this Cooperation Agreement, KT will develop and supply more AI-based FMS modules to integrate with WeDo's Fraud and Risk Management system. Additional AI-based modules will also run on WeDo's system, and RAID's modular design will allow CSPs to choose among different fraud detection models for their market, much as one chooses applications from a smartphone app store. RAID's open architecture will also allow other CSPs to develop their own models.
WeDo Technologies is part of the Mobileum group, a leading enterprise software and analytics company in roaming, security, fraud and risk management serving more than 700 telecommunication providers in more than 180 countries.
According to IDC, the global AI application market will expand rapidly from US $15 billion in 2017 to US $56 billion in 2020, a CAGR of 55%.
KT plans to apply this AI technology to KT group's financial subsidiary BC Card and to Korea's No. 1 caller-info app, "KT WhoWho," for safe financial operations and mobile communication.
KT AI, which has been trained with KT group's extensive Big Data, will secure the safety and efficiency of telecommunication FMS. Furthermore, KT will extend its AI capability to the global financial FDP (Fraud Detection and Prevention) market.
The global AI-FDP market, which was US $1.4 billion in 2017 (about 9% of the total market), is expected to grow to US $5 billion in 2020.
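The growth figures quoted here are ordinary compound-annual-growth-rate arithmetic and can be sanity-checked in a few lines:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by growing from start to end over years."""
    return (end / start) ** (1 / years) - 1

# IDC's overall AI application market: $15B (2017) -> $56B (2020)
print(round(cagr(15, 56, 3) * 100))    # ~55%, matching the quoted CAGR
# AI-FDP segment: $1.4B (2017) -> $5B (2020)
print(round(cagr(1.4, 5, 3) * 100))    # ~53% per year
```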
Young Woo Kim (KT's Senior Vice President, Head of Global Business Development Unit) said: "Within the Cooperation Agreement with WeDo, KT's AI and Big Data technologies combined with WeDo's global FMS market capability will become a very successful first step toward advancement into the global AI market."
Rui Paiva (WeDo Technologies CEO and Mobileum Chief of Revenue Assurance & Fraud Management Business Unit & CMSO) said: "This Cooperation Agreement with KT is a landmark development for the fraud management market, on a path to provide deep AI fraud prevention capabilities, successfully tested with real-life data, to the wider community."
About WeDo Technologies, a Mobileum company
WeDo Technologies, founded in 2001, is part of the Mobileum group, a leading enterprise software and analytics company in roaming, security, fraud and risk management serving more than 700 telecommunication providers in more than 180 countries.
Mobileum delivers analytics solutions that generate and protect revenue, reduce direct and indirect costs, and accelerate digital transformation for Communication Service Providers (CSPs). Mobileum focuses on key CSP domains, including roaming and interconnect, counter-fraud and security, revenue assurance, data monetization and digital transformation. Mobileum's success is built upon its unique Active Intelligence platform, which logically combines analytics, engagement, and action technology with deep network insights and easy CSP system integration to deliver seamless end-to-end solutions. In a recent independent third-party survey of global mobile operators, Mobileum was voted #1 in Innovation among a group of 183 suppliers.
WeDo Technologies' innovative risk management solutions analyze large quantities of data, enabling monitoring and control of enterprise-wide processes to ensure revenue protection and risk mitigation. In addition, its data insights empower business management solutions that accelerate automation and optimization across the digital enterprise, and enable efficient business processes such as incentive compensation, collections, and wholesale roaming management.
Mobileum is based in California's Silicon Valley, with offices in Argentina, Brazil, Egypt, Hong Kong, India, Jordan, Portugal, Malaysia, Mexico, Singapore, United Arab Emirates, United Kingdom and Uruguay.
Useful Contacts: Phone: +351 962 018 267
Public Relations and Corporate Communications: Sara Machado, sara.firstname.lastname@example.org
Analyst Relations and Product Marketing: Carlos Marques, carlos.email@example.com
SOURCE WeDo Technologies - a Mobileum Company
From startups to large firms, everyone is opting for AI-powered digital marketing tools to enhance campaign planning and decision making
September 21, 2019 · 5 min read
Opinions expressed by Entrepreneur contributors are their own.
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
Artificial Intelligence (AI) is no longer the next big thing; it is now a big thing in digital marketing. All digital marketing operations are now affected by AI-powered tools. From startups to large firms, everyone is opting for AI-powered digital marketing tools to enhance campaign planning and decision making.
AI-based tools are now a flourishing market, with a drastic change in demand. According to most digital marketers, AI is enhancing all the areas where predictive analysis, decision making, and automation are required.
How is AI adding value to digital marketers' lives?
Digital marketers are trying hard to leverage AI for strategic planning and campaign decision making. Most of them have found AI helpful, enhancing their productivity and reducing their effort. AI-powered analytics tools provide better insights for campaign management, budget planning, and ROI analysis. AI can gather insights from a truckload of unstructured and structured data sources in a fraction of a second.
Related: The Real Reason Sales and Marketing Teams Use AI
All the human interactions with a business affect the digital marketing strategy and business revenue.
Reportedly, brands that have recently adopted AI in their marketing strategy predict, on average, a 37 percent reduction in costs along with a 39 percent increase in revenue by the end of 2020.
AI-powered Recommendation Engine to Understand the Customers
Artificial intelligence tools help digital marketers understand customer behavior and make the right recommendations at the right time. A tool with millions of predefined conditions knows how customers react to a particular situation, ad copy, video or any other touchpoint. Humans, by contrast, can't assess such a large set of data as well as a machine within a limited timeframe.
Related: How AI Is Driving Marketing Automation
You can collect insights at your fingertips with the help of AI. Where do you find an audience? How do you interact with them? What do you send them? How do you send it? What is the right time to connect? When do you send a follow-up? All these answers lie in AI-powered digital marketing platforms.
With smart pattern analysis, AI tools can make better suggestions and help in decision making. A personalized content recommendation to the right audience at the right time goes a long way toward the success of any campaign.
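To make the matching idea concrete, here is a minimal sketch of what a recommendation engine does at its core: score content against a user's interest profile over a shared tag vocabulary. All names and numbers are hypothetical; production engines learn these profiles from behavior at far larger scale.

```python
import math

def cosine(u, v):
    """Similarity between two tag-score vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user_profile, content_items, top_n=2):
    """Rank content items by similarity to the user's interest vector."""
    ranked = sorted(content_items,
                    key=lambda name: cosine(user_profile, content_items[name]),
                    reverse=True)
    return ranked[:top_n]

# Tag order: [fitness, finance, travel]
user = [0.9, 0.1, 0.8]
items = {"yoga-retreat-ad": [1.0, 0.0, 1.0],
         "stock-tips-ad":   [0.0, 1.0, 0.0],
         "gym-gear-ad":     [1.0, 0.0, 0.1]}
print(recommend(user, items))   # ['yoga-retreat-ad', 'gym-gear-ad']
```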
Digital marketers are being pushed harder to demonstrate the success of content and campaigns. With AI tools, putting the available data to use is easy and effective.
According to a 2019 study by Forrester and Albert, only 26% of marketers are making use of autonomous AI, while 74% take a more manual approach with assistance from AI.
AI technology is evolving every aspect of digital marketing, to name a few: audience targeting, audience interest analysis, web optimization, smart content writing and recommendation, advanced tracking and reporting, and more.
How AI will play major roles in digital marketing solutions:
Customer data management
Customer behavior analysis and customer experience analysis
Trend analysis for campaign planning
Smart pattern analysis
Real-time data analysis and decision making
Voice search technology
AI in digital marketing is poised to reach a global market of USD 21 billion by 2023, growing at a steady Compound Annual Growth Rate (CAGR) of 26%.
How to prepare for AI-based platforms? How to include them in your digital marketing strategy?
Digital marketing technology platforms are evolving at a great pace, and working with them requires a specific set of skills. If you want to opt for smart marketing technology, you should start using AI-powered tools at a small scale and increase the limits as you grow. A roadmap is required to stay ahead of the crowd; a few tips to prepare for that:
Analyze the actual impact of AI on your digital marketing operations: not all AI tools are helpful for you. You should increase your basic AI knowledge to understand how the tools could make an impact on your current campaigns and reporting areas. You can go for Udemy courses, YouTube tutorials, or the free online courses available.
Evaluate different AI software to employ in your digital marketing campaigns: there is a wide range of tools available in the market today for each and every marketing activity. You should evaluate the potential platforms to boost your campaign performance. Go for demos, product documentation and webinars to learn about the tools.
Follow leading companies' case studies: read the case studies from organizations that have already implemented AI tools in their digital marketing campaigns and shown significant results.
Be creative, be experimental: see how you can incorporate AI tools into your current campaigns, and try something new by running experiments to enhance campaign performance. Being creative is a human thing; leverage the power of it!
Go for industry-specific use cases: to understand the effectiveness of these AI tools, you should explore industry-specific use cases. Learn how others implemented and executed the strategy, and what the outcome was.
Try new tools every day: take free demos and trials of the tools and explore their potential. Leverage your small marketing activities with the help of these tools; once you understand the logic behind them and the outcome rate, you can implement the right digital marketing strategy.
Acquire some technical skills too: AI-based tools require some technical knowledge to integrate with your digital marketing operations. So be ready, or consult with the technical team to provide the necessary support. In-house competency is much needed.
Connect with agencies that have already built a strategy with AI: some creative agencies have already employed advanced tools to run successful digital marketing campaigns. Connect with them, or partner with them, to gain access to their insights.
AI has a remarkable impact on all areas of digital marketing and will keep growing in the future. The future of digital marketing is here; the faster you learn, the faster you grow. The days are gone when digital marketers ran the data and found the insights while another team worked on the campaign based on those insights. Things are moving at a great pace in the digital marketing space. The early adopters will win the game!
Maria Bartiromo talks artificial intelligence, the dot-com crash and why she'll never retire – MarketWatch
Maria Bartiromo has been covering business news for 30 years, and she's got her eye on the next big wave: artificial intelligence.
The Fox Business Network anchor, who recently re-signed with the network for a multiyear deal, is releasing an hour-long investigative documentary about artificial intelligence. The segment, which has been in the works for a year now, includes interviews with chief executive officers of major companies including IBM IBM, +0.13% and Ford F, -0.11%. Fox News parent company Fox Corp FOXA, +0.00% was previously owned by MarketWatch parent News Corp NWS, +0.49%.
Artificial intelligence isn't just making demands of Siri on Apple's iPhones AAPL, +0.45% or telling your Google GOOG, +0.33% email inbox to identify spam. Eventually, people will be talking about artificial general intelligence, which is machine learning in which the computer decides on its own what is or isn't relevant to a person's request based on its past experiences. This technology has the potential to create entirely new jobs, eliminate other ones and also save lives, Bartiromo said, and the key for Americans to succeed in the face of this new technology is to understand where and how it will be implemented.
See: Artificial intelligence is revolutionizing the workplace, but it's also dominated by men
The documentary will air on Sunday at 8 p.m. EST on Fox News FOX, +0.31%, and is broken into six parts. Bartiromo spoke with MarketWatch about how artificial intelligence will affect future workers and retirement, as well as her own plans.
MarketWatch: What do you see as the biggest impact of artificial intelligence on businesses? Is there a certain sector or facet of the workplace that will be most affected?
Maria Bartiromo: Artificial intelligence is powered by data, so any job that involves a lot of data will be impacted first. So let's say you're a mortgage broker, or the person who looks at the eligibility of someone to see if they're going to get a mortgage. You have to go through large sets of data, which a machine can do fast.
I think the industry that will be most impacted, in a good way, is health care. The machine can look at a radiology report, a mammogram, an MRI. It can look at a million eyes and understand which one of those eyes is diseased. Or a million skins and see really simply, easily and quickly what the specifics of that skin are and what the propensity is for it to have cancerous cells. That's what IBM's Watson is doing in hospitals; they're using AI in very big ways to understand routines and what these reports might show. It won't replace a doctor, but it will let the doctor do more. When you go to a doctor and you say you have a headache or stomach ache, the first thing the doctor does is eliminate things. That takes time. A computer can do that very quickly. I think it will be lifesaving.
But while it can be lifesaving, it can also be job killing in other industries. I think when you look at those industries, that's when the worry comes in and really sparks a debate in business about how to unleash artificial intelligence in an ethical way.
MW: People seem to think AI will replace older workers. What do you think?
Bartiromo: Not necessarily. It is really about a person's savvy with technology. You can be an older person and be really good at technology; you don't have to be a young person to understand how technology impacts things. So it's really about how savvy you are with these changes and how to keep up with this skill set, or you need to be trained. We have seen technology replace jobs our entire careers. We didn't know what jobs would be available in 1998 with the dot-com boom. I remember covering this with a front-row seat in the 1990s when Amazon AMZN, -0.49% went public. We didn't know what the technology meant, that drones might replace delivery people or shopping was going to move online as opposed to brick and mortar. There will be jobs, but the scary thing is we don't know about them yet. It is not targeted toward older people but it is targeted toward white collar.
Read: How AI is catching people who cheat on their diets, job searches and college work
MW: Will AI impact retirement, and if so, how?
Bartiromo: Work is shifting. I don't know how many people you know who retire at 65 and say, "All right, I'm going to go hit the hammock." I am certainly not going to do that. People may retire, but they just retire from a job they had for 30 years and then they'll do something else. I am a firm believer in never retiring, and the reason I say that is because you have to constantly keep your mind going and constantly challenge your mind, because if you don't use it, you'll lose it.
MW: What does retirement mean to you? What do you envision retirement even being for yourself?
Bartiromo: I don't know because I'm nowhere near it. I know in my frame of mind right now, I'm not going to go hit the hammock. I am a curious person. I love work, I love learning new things and I have an open mind to new ideas. So I doubt when I am faced with that prospect that I'm just going to stop working. I'm going to do something to enrich my mind. I have no expectations that when I retire from my job I'm going to rest and relax. It's just not who I am.
MW: What advice do you have for someone starting to work, especially if there will be more technological advancements in the workplace?
Bartiromo: I would say don't write this off. Understand what you don't know and make sure to bone up on technology if you are in an industry that is data-heavy. Make sure you understand that in the future, machines will be able to handle that, so arm yourself with the right training to thrive in the profession. It's really important not to say, "I don't get that, I'm lame when it comes to technology." That's a cop-out and it will hurt you in the future. Make sure to improve on the skills you need regarding technology and make sure you know the systems.
MW: What would you say are some of the biggest misconceptions of AI in the workplace?
Bartiromo: One misconception is that you have to be afraid of it. You have people like Elon Musk (the chief executive officer of Tesla TSLA, +0.25%) who say it is dangerous, but the reality is it is happening. Technology has always impacted jobs. Don't be afraid of it; understand it and try to educate yourself. Understand what parts of your routine are vulnerable to being replaced.
Another misconception is that when we talk about AI we are talking about Siri and your phone or the Echo home device. Yes, that is artificial intelligence, but it is more about machine learning. So when I say, "Siri, put on yoga music," and it learns what yoga music is, that is machine learning. When you talk about artificial general intelligence, where the computer is mimicking the brain to a tee, that could change the way you live and work.
MW: In your career reporting on business news, how have you seen the emergence of AI reported? How have people been talking about it and how has the perception of AI changed in those years?
Bartiromo: I have been covering business news for 30 years. I have had a front-row seat and seen many cycles and many different innovations. When I first got into business in 1989 at CNN, we were in the middle of the individual-investor revolution, where individuals were hungry for information and wanted to arm themselves with it because they thought they could make their own investment decisions. So what happened then? A whole industry swarmed in, and discount brokerage firms like E*Trade ETFC, -0.13%, Ameritrade (now TD Ameritrade) AMTD, -0.43% and of course Schwab SCHW, -0.47% thrived in that. Information was people's currency, and that was their power to get ahead in their lives.
Also see: The rise of artificial intelligence comes with rising needs for power
From there, we saw the cycle and euphoria of the dot-com boom, where people were investing in anything with a .com at the end. Why? I don't know why. There were no revenues or earnings, but the stocks were trading at enormous valuations. It was just mania; we thought we were on the doorstep of a new revolution.
Back then, a lot of companies said they didn't have a dot-com site and they didn't need to compete with Amazon. We know now you did need to compete with Amazon, because if you weren't online, you took a back seat. We are seeing companies look at AI and start adopting it slowly. We will see companies adopt it in a huge way because they're going to realize they have to do this to be competitive. The first adopters will be enterprise, so companies, not the consumer, and it will be in your jobs.
Businesses still don't have a clear understanding of what to expect when it comes to the ROI of AI. Many believe that AI is just like any other software solution: the returns should, in theory, be immediate. But this is not the case. In addition, business leaders are often duped into thinking the path to AI ROI is a lot smoother than it is, because AI vendors tend to exaggerate the results their software generates.
In reality, identifying a metric to reliably measure the impact AI is having at a business is very hard.
In this article, we delve deeper into how business leaders should think about identifying ROI metrics that might help them understand the return they could generate from AI projects. To do this, we explore insights from interviews with three experts who were on our AI in Industry podcast this past month.
Special thanks to our three interviewees:
You can listen to our full playlist of episodes in our AI ROI playlist from the AI in Industry podcast. This article is based in large part on all three of these interviews:
Subscribe to the AI in Industry podcast wherever you get your podcasts:
We begin our analysis with a discussion of how to measure the ROI of AI.
AI projects inherently involve a level of uncertainty and experimentation before they can be deemed successful. In a small number of AI use-cases, identifying a measurable metric for projected returns may be relatively simple. For instance, in predictive maintenance applications for the manufacturing sector, businesses can link the returns directly to a reduction in maintenance costs or reduction in machinery downtimes.
But in other applications, such as improving customer experiences in banking, identifying a small number of reliable metrics to measure success is far more challenging.
Unless businesses have a clear understanding of the returns, they risk losing out on their AI investments. One way to ensure an AI project has a measurable metric is to choose a specific business problem for which a non-AI solution already exists and results are already being measured and tracked.
Jan Kautz, VP of Learning and Perception Research at NVIDIA, whom we interviewed for our previous podcast series on getting started with AI, seemed to agree that measuring success is easier when developing an AI solution for an existing business problem than when developing a completely new AI use-case with no precedent:
The danger of doing something completely new in AI is that you don't actually know if what you are doing is correct, because you have nothing to compare it to. I would suggest banks pick an area where they already have an existing system in place, so that you can compare the results of the AI system and know if you are at least getting better results than the existing system.
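Kautz's advice reduces to a simple side-by-side evaluation: score the candidate AI model and the incumbent system on the same historical cases and compare the metric the business already tracks. A minimal sketch, with all data hypothetical:

```python
def compare_to_incumbent(labels, incumbent_preds, ai_preds):
    """Accuracy of each system on identical historical cases; the AI system
    only justifies a rollout if it beats the number already being tracked."""
    def accuracy(preds):
        return sum(p == y for p, y in zip(preds, labels)) / len(labels)
    return {"incumbent": accuracy(incumbent_preds), "ai": accuracy(ai_preds)}

# 1 = fraud, 0 = legitimate (toy data)
truth      = [1, 0, 1, 1, 0, 0, 1, 0]
rule_based = [1, 0, 0, 1, 0, 1, 1, 0]   # existing system: 6/8 correct
ai_model   = [1, 0, 1, 1, 0, 0, 1, 1]   # candidate model: 7/8 correct
print(compare_to_incumbent(truth, rule_based, ai_model))
```

In practice the tracked metric is rarely plain accuracy (fraud teams care about precision, recall, and dollar losses), but the comparison structure is the same.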
Business leaders also need to understand that in order to deploy an AI project across an organization, they not only require data scientists, but also data engineers. Data scientists are those that develop machine learning algorithms for a particular capability.
Data engineers usually undertake the task of implementing the solution across the enterprise. This might involve identifying whether the existing data infrastructure is set up in a sustainable way that will allow AI systems to function smoothly over time and across the organization, and whether the DevOps process is capable of sustaining AI projects.
Narayanan believes most successful AI projects that can show positive results will involve data scientists working in collaboration with data engineers. Input from these employees is critical to understanding what a measurable metric of return might be, because they have the deepest understanding of what the AI system can do.
But these employees usually lack the insight to connect technical benefits to the overall business gains, which needs to come from the subject-matter experts in the domain into which AI is being applied.
Business leaders need to take into account both these perspectives to truly understand what benefits they are likely to get from their AI projects today. This will also help them accurately analyze what they want these AI benefits to look like in the future and tweak their systems towards that eventuality.
According to Martin, in order to successfully realize returns from AI projects, businesses need to figure out how to test their initial assumptions, experiment with AI systems, and identify use-cases as quickly as possible.
Testing whether these initial pilot projects have been successful means measuring the performance of the AI system in the task that it is being applied to.
Measuring success in these initial projects can even go wrong in ways that are not related to the technical challenges involved with AI. For instance, if a business implements AI customer service software and only a few users are introduced to it because of ineffective marketing campaigns, measuring the returns of the AI system becomes even more challenging.
This is because the AI system might have been designed perfectly, but the pilot test might not have been accurately representative of whether any returns gained will actually lead to gains when deployed across the organization.
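One concrete reason a small pilot can mislead is plain statistical noise: with few users, the measured success rate carries a wide uncertainty band. A rough normal-approximation interval (illustrative only; the metric and numbers are hypothetical) makes that visible:

```python
import math

def pilot_interval(successes, trials, z=1.96):
    """Approximate 95% confidence interval for a pilot's success rate.
    With few trials the band is wide, so the headline number is weak evidence."""
    p = successes / trials
    margin = z * math.sqrt(p * (1 - p) / trials)
    return (max(0.0, p - margin), min(1.0, p + margin))

# 30 resolved tickets out of 100 pilot users: the "30%" could plausibly be ~21-39%
print(pilot_interval(30, 100))
```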
According to Martin, it's critical for business leaders to understand that pilot test projects must not be run at scale across the enterprise. Enacting a large project, such as completely overhauling a fraud detection system at a bank, should only be done after careful analysis of the results from several experimental pilot projects. This is in line with Andrew Ng's advice to shoot for first AI projects with 6-12 month timeframes, not massive multi-year roll-outs.
Leaders need to think about this in phases, where the first step is to identify which small AI projects can potentially help the business gain knowledge about working with data and AI capabilities.
This doesn't mean that the smaller AI projects don't need to result in any success metrics. Rather, it means that in some cases, the pilot that shows the most immediate returns may not be the ideal first step for enterprise-wide adoption given a company's goals and long-term AI strategy. Leaders should focus on AI being a long-term skillset that is attained in incremental steps.
In order to measure a specific return, businesses also need to establish what kind of budgets they need for AI projects.
Unlike simple software automation, where costs are much easier to calculate, predicting the budgetary requirements for AI projects is more complex. Martin added that this was one of the more common AI-related questions that business leaders ask him. He said:
If a business leader is looking to answer questions like how much budget an AI project might require before starting the project, the best advice I can give businesses is to first ask how much budget they can realistically allocate to AI projects and then plan around that figure. AI projects are not easy to budget for because you don't know what's going to work and what is not; it involves a lot of experimentation. A business might not be able to ascertain how many such experiments they might need to run before finding a valuable use-case.
Martin stresses the fact that businesses need to think about AI from a long-term strategic perspective. They will have to make a decision on whether they are an AI company or not. Being an AI company means there will be a period of constant experimentation with uncertain results that could sometimes even take 6 months of experimentation to yield any noticeable results.
A recent article on MIT Sloan Business Review states that new ways of working and new management strategies (what might be called change management) are among the largest factors keeping most AI initiatives from generating ROI. Our own research arrives at the same conclusion.
There's also no guarantee that an AI project will not go above budget, given the aforementioned uncertainty and experimentation involved. But this will give the data science team leaders an idea of how many experiments they might realistically be able to conduct, and which ones they might need to prioritize.
One big challenge that many businesses might be grappling with when it comes to AI likely lies in ensuring that every dollar that goes into AI projects sees a significant return. Getting a return as soon as possible is the ideal business scenario.
Narayanan spoke about the misconceptions business leaders might have about measuring returns from AI:
Most of our knowledge around what AI can do for business stems from well-marketed examples in the news media. We find that most of these use-cases have been business problems that are well defined in nature. For example, we have seen reports of AI software beating the best human chess players, or beating humans at Go. These are problems that have definitive end points. But the most common business problems in Fortune 500 companies do not have definite outcomes.
What Narayanan seems to be articulating is that businesses might ask questions such as, Is our next product launch going to succeed? These questions are significantly more open-ended than a board game with definitive results. The term success might mean different things to different people or teams within an organization.
It might be hard to frame a clear question with a definitive answer for business problems. At best, these questions indicate that the underlying problem statements can be extremely hazy and complex.
It might be impossible for any firm to look at a bucket of data and report how much business value might be gained from leveraging that data given the right kind of algorithms. This might be hard to digest for business leaders, but they need to expect uncertainty when it comes to AI.
This is not a traditional business mindset in many industries. According to Narayanan:
This is a cultural shift in the way of thinking about how data might be critical for AI success. Leaders need to think about how AI can solve a business problem at scale for the enterprise, in a way that is aligned with their business objectives as a whole while being highly sustainable.
In this section, we put forth a list of frameworks that business leaders can follow in order to maximize the possibility of gaining positive returns from their AI projects and effectively measuring it as such.
Traditionally in business, the term ROI usually corresponds to short-term financial gain, often in terms of improved revenue. AI is a broad technology, and sticking to this traditional method of defining ROI might not be the best place to start for businesses. For instance, AI might well be used to increase revenue in one application.
However, AI can also be used to reduce costs, improve customer experience, or increase the productivity of a specific team within the business. The first step to understanding AI ROI might be to associate the returns with any type of positive business outcome, not necessarily financial gains, including leveling up a team's AI-related skillset.
Carmona said that in his experience, there have been several instances in which businesses have needed to invest funds in an AI project as it is being built due to budgetary constraints.
At the same time, business leaders might be looking for immediate returns on their AI investments. According to Carmona, balancing these two factors (uncertainty in AI projects and gaining returns fast) is something business leaders have to figure out before starting AI projects of any kind.
He spoke about a particular framework used by Microsoft (called the Agile AI framework) to find a balance between the two. We detail the steps involved in this framework below with insights from the interview:
Narayanan stated that one of the critical things for business leaders to understand about measuring the returns of AI projects is to first frame the business question that AI is being applied to in a way that is specific.
For instance, leaving aside the technical concerns, businesses first need to ask questions such as: "Is AI being used to solve a problem in the rate of growth of the organization, or is it being used to improve the efficiency of a business process or to improve customer experiences?"
He went on to give an example of a firm that he claimed worked with Fractal Analytics in the past to explain this concept better:
About 18 months before getting into AI projects, the client we were working with brought in a visionary leader who said, "I'm not supporting any initiative that can't show progress in six weeks." He enforced this constraint even though he understood that a number of initiatives are transformative, long-term engagements at the enterprise level. This allowed the company to be more rapid in defining areas to work on and ruthless about what success and progress mean, and therefore to establish a codified approach to measurement.
In a 12-month period they executed 30-40 different initiatives that they called Minimum Viable Propositions (MVPs). They identified 5-6 with the potential to become transformative at the enterprise level, and this year they are deploying these on an organizational scale.
According to Narayanan, the client gleaned the following three insights from this process:
Posted: at 7:44 pm
The aftermath of an El Niño event in Peru in 2017
By Warren Cornwall, Sep. 18, 2019, 1:30 PM
The dreaded El Niño strikes the globe every 2 to 7 years. As warm waters in the tropical Pacific Ocean shift eastward and trade winds weaken, the weather pattern ripples through the atmosphere, causing drought in southern Africa, wildfires in South America, and flooding on North America's Pacific coast. Climate scientists have struggled to predict El Niño events more than 1 year in advance, but artificial intelligence (AI) can now extend forecasts to 18 months, according to a new study.
The work could help people in threatened regions better prepare for droughts and floods, for example by choosing which crops to plant, says William Hsieh, a retired climate scientist in Victoria, Canada, who worked on early El Niño forecasts but who was not involved in the current study. Longer forecasts could have large economic benefits, he says.
Part of the problem with some El Niño forecasts is that they rely on a relatively small set of historical statistics for factors such as ocean temperature. Other forecasts use climate models but struggle to create the detailed pictures of the ocean needed for long-range forecasts.
The new research uses a type of AI called a convolutional neural network, which is adept at recognizing images. For example, the neural network can be trained to recognize cats in photos by identifying characteristics shared by all cats, such as whiskers and four legs. In this case, researchers trained the neural network on global images of historic sea surface temperatures and deep ocean temperatures to learn how they corresponded to the future emergence of El Niño events.
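To make the image-recognition idea concrete, here is a minimal, illustrative sketch (not the authors' actual model) of the convolution operation at the heart of such a network, applied to a toy temperature grid; the array values and filter are invented for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "sea surface temperature anomaly" map: mostly zero, with a 3x3 warm patch.
sst = np.zeros((8, 8))
sst[2:5, 3:6] = 1.0  # invented warm anomaly

# A 3x3 averaging filter acts as a crude "warm blob" detector.
kernel = np.ones((3, 3)) / 9.0
response = conv2d(sst, kernel)

# The filter responds most strongly where it fully overlaps the warm patch.
peak = tuple(int(i) for i in np.unravel_index(np.argmax(response), response.shape))
print(peak)  # (2, 3): the top-left corner of the anomaly
```

A real forecasting network stacks many such learned filters with nonlinearities and trains their weights by gradient descent; the fixed averaging filter here only illustrates how convolution localizes a spatial pattern.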
Such neural networks need a large number of training images before they can identify underlying patterns. To get around the shortage of historic El Niño data, the scientists fed the program re-creations of historic ocean conditions produced by a set of reputable climate models, ones frequently used to study climate change, says the study's lead author, Yoo-Geun Ham, a climate scientist at Chonnam National University in Gwangju, South Korea. As a result, the scientists could show the computer system not just one set of actual historic data, spanning 1871 to 1973, but several thousand simulations of that same data by the climate models.
When tested against real data from 1984 to 2017, the program was able to predict El Niño states as far out as 18 months, the team reports today in Nature. The program was far from perfect: It was only about 74% accurate at predicting El Niño events 1.5 years into the future. But that's still better than the best current model, which is only 56% accurate for that time frame, Ham says.
The AI also proved more adept at pinpointing which part of the Pacific would heat up the most. That has real-world implications, because El Niños centered in the eastern Pacific, closer to South America, translate into hotter water temperatures in the northern Pacific and more flood-inducing rain in the Americas, compared with El Niños that are centered farther to the west.
The use of the climate models to create extra training data is a clever way around the shortcomings of other approaches, Hsieh says. It appears enough of an advance that it should be deployed for real forecasts, he says.
But it's not clear how much real-world benefit will come from pushing forecasts beyond 1 year, cautions Stephen Zebiak, a climate scientist and El Niño modeling expert at Columbia University's International Research Institute for Climate and Society in Palisades, New York. The kind of lead time that is actionable is probably less than a year, because decision-makers are unlikely to take action further in advance, he says.
The researchers have already begun to issue forecasts extending into 2021, and predict a likely La Niña event (El Niño's cooler opposite), which can bring heavier monsoons and droughts. But major government forecasting agencies are not yet considering the group's predictions. Ham says he and his colleagues are tweaking the model to extend the forecast even further. Meanwhile, he says his team is now working to improve forecasts for another ocean pattern, the Indian Ocean Dipole. That fluctuation in ocean temperatures can influence rain and tropical cyclones in Asia and Australia.
Posted: at 7:44 pm
Suzanne Livingston, curator of the recent Barbican Centre exhibition about artificial intelligence, will speak about the impact of technology on cities at Dezeen Day on 30 October.
She will take part in a discussion about future cities, which will explore how urban areas will change in the face of technological, social and environmental pressures.
Livingston curated AI: More than Human, which ran in London from May to August this year and will now tour internationally.
The exhibition explored how AI will affect our lives and featured cutting-edge work by designers including Neri Oxman, Es Devlin, TeamLab and Yuri Suzuki. It explored topics including facial recognition, robotics and how AI can improve urban planning and road safety.
Livingston has a PhD in Philosophy from Warwick University, where she was a founding member of the influential Cybernetic Culture Research Unit (CCRU).
She worked as global principal at branding consultancy Wolff Olins, where she worked on strategy and exhibitions for museums and technology companies.
Now working independently as a consultant, she writes about technology, belief systems, innovation and evolution.
Livingston will be joined on the future cities panel by transportation designer Paul Priestman and experimental architect Rachel Armstrong.
Dezeen Day takes place at BFI Southbank in central London on 30 October. The international conference aims to set the agenda for architecture and design. It will discuss topics including design education, future cities and post-plastic materials.
Speakers include Paola Antonelli, Benjamin Hubert, Dara Huang and Patrik Schumacher. See all the speakers that have been announced so far.
Reduced early bird tickets, plus a limited number of half-price student tickets, are on sale now; subscribe to the Dezeen Day newsletter for regular updates.
The illustration is by Rima Sabina Aouf.
Posted: at 7:44 pm
Nuclear weapons and artificial intelligence are two technologies that have scared the living daylights out of people for a long time. These fears have been most vividly expressed through imaginative novels, films, and television shows. Nuclear terror gave us Nevil Shute's On the Beach, Kurt Vonnegut's Cat's Cradle, Judith Merril's Shadow on the Hearth, Nicholas Meyer's The Day After, and more recently Jeffrey Lewis's 2020 Commission Report. Anxieties about artificial intelligence begat Jack Williamson's With Folded Hands, William Gibson's Neuromancer, Alex Garland's Ex Machina, and Jonathan Nolan and Lisa Joy's Westworld. Combine these fears and you might get something like Sarah Connor's playground dream sequence in Terminator 2, resulting in the desert of the real that Morpheus presents to Neo in The Matrix.
While strategists have generally offered more sober explorations of the future relationship between AI and nuclear weapons, some of the most widely received musings on the issue, including a recent call for an AI-enabled dead hand to update America's aging nuclear command, control, and communications infrastructure, tend to obscure more than they illuminate due to an insufficient understanding of the technologies involved. An appreciation for technical detail, however, is necessary to arrive at realistic assessments of any new technology, and particularly consequential where nuclear weapons are concerned. Some have warned that advances in AI could erode the fundamental logic of nuclear deterrence by enabling counter-force attacks against heretofore concealed and mobile nuclear forces. Such secure second-strike forces are considered the backbone of effective nuclear deterrence because they assure retaliation. Were they to become vulnerable to preemption, nuclear weapons would lose their deterrent value.
We, however, view this concern as overstated. Because of AI's inherent limitations, splendid counter-force will remain out of reach. While emerging technologies and nuclear force postures might interact to alter the dynamics of strategic competition, AI in itself will not diminish the deterrent value of today's nuclear forces.
Understanding the Stability Concern
The exponential growth of sensors and data sources across all warfighting domains has analysts today facing an overabundance of information. The Defense Department's Project Maven was born out of this realization in 2017. With the help of AI, then-Deputy Secretary of Defense Robert Work sought to reduce the human factors burden of [full-motion video] analysis, increase actionable intelligence, and enhance military decision-making in support of the counter-ISIL campaign. Hans Vreeland, a former Marine artillery officer involved in the campaign, recently explained the potential of AI in facilitating targeted strikes for counterinsurgency operations, arguing that AI should be recognized and leveraged as a force multiplier, enabling U.S. forces to do more at higher operational tempo with fewer resources and less uncertainty. Such a magic bullet would surely be a boon to any commander's arsenal.
Yet, some strategists warn that the same AI-infused capabilities that allow for more prompt and precise strikes against time-critical conventional targets could also undermine deterrence stability and increase the risk of nuclear use. Specifically, AI-driven improvements to intelligence, surveillance, and reconnaissance would threaten the survivability of heretofore secure second-strike nuclear forces by providing technologically advanced nations with the ability to find, identify, track, and destroy their adversaries' mobile and concealed launch platforms. Transporter-erector launchers and ballistic missile submarines, traditionally used by nuclear powers to enhance the survivability of their deterrent forces, would be at greater risk. A country that acquired such an exquisite counter-force capability could not only hope to limit damage in case of a spiraling nuclear crisis but also negate its adversaries' nuclear deterrence in one swift blow. Such an ability would undermine the nuclear deterrence calculus whereby the costs of imminent nuclear retaliation far outweigh any conceivable gains from aggression.
These expectations are exaggerated. During the 1991 Gulf War, U.S.-led coalition forces struggled to find, fix, and finish Iraqi Scud launchers despite overwhelming air and information superiority. Elusive, time-critical targets still present a problem today. Facing a nuclear-armed adversary, such poor performance would prove disastrous. The prospect of just one enemy warhead surviving would give pause to any decision-maker contemplating a preemptive counter-force strike. This, after all, is why nuclear weapons are such powerful deterrents, and why states that possess them go to great lengths to protect these assets. While some worry that AI could achieve near-perfect performance and thereby enable an effective counter-force capability, inherent technological limitations will prevent it from doing so for the foreseeable future. AI may bring modest improvements in certain areas, but it cannot fundamentally alter the calculus that underpins deterrence by punishment.
The limitations AI faces are twofold: poor data and the inability of even state-of-the-art AI to make up for poor data. Misguided beliefs about what AI can and cannot accomplish further impede realistic assessments.
The data used for training and operationalizing automated image-recognition algorithms suffers from multiple shortcomings. Training an AI to recognize objects of interest among other objects requires prelabeled datasets with both positive and negative examples. While pictures of commercial trucks are abundant, far fewer ground-truth pictures of mobile missile launchers are available. Beyond the ground-truth pictures potentially not representing all launcher models, this data imbalance is consequential in itself. To increase its accuracy with training data that includes fewer launchers than images of other vehicles, the AI would be incentivized to produce false negatives by misclassifying mobile launchers as non-launcher vehicles. Synthetic (e.g., manually warped) variations of missile-launcher images could be included to identify launchers that would otherwise go undetected. This would increase the number of false positives, however, because trucks that resemble the synthetic launchers would now be misclassified.
Moreover, images are a poor representation of reality. Whereas humans can infer the function of an object from its external characteristics, AI still struggles to do so. This is less of an issue where an object's form is meant to inform about its function, as in handwriting or speech recognition. But a vehicle's structure does not necessarily inform about its function, a problem for an AI tasked with differentiating between vehicles that carry and launch nuclear-armed ballistic missiles and those that do not. Pixelated, two-dimensional images are a poor representation not only of a vehicle's function but also of the three-dimensional object itself. Even though resolution can be increased and a three-dimensional representation constructed from images taken from different angles, this introduces the curse of dimensionality: with greater resolution and dimensional complexity, the number of discernible features increases, requiring exponentially more memory and running time for an AI to learn and analyze. AI's inability to discard unimportant features further makes similar pictures seem increasingly dissimilar, and vice versa.
Could clever, high-powered AI compensate for these data deficiencies? Machine-learning theory suggests not. When designing algorithms, AI researchers face trade-offs. Data describing real-world problems, particularly those that pertain to human interactions, are always incomplete and imperfect. Accordingly, researchers must specify which patterns an AI is to learn. Intuitively it might seem reasonable for an algorithm to learn all patterns present in a particular data set, but many of these patterns will represent random events and noise, or be the product of selection bias. Such an AI could also fail catastrophically when encountering new data. In turn, if an algorithm learns only the strongest patterns, it may perform poorly, although not catastrophically, on any one image. Consequently, attempts to improve an AI's performance by reducing bias generally increase variance, and vice versa. Additionally, while any tool can serve as a hammer, few will do a very good job at hammering. Likewise, no one algorithm can outperform all others on all possible problem sets. Neural networks are not universally better than decision trees, for example. Because there is an infinite number of design choices, there is no way to identify the best possible algorithm. And with new data, a heretofore near-perfect algorithm might no longer be the best choice. Invariably, some error is irreducible.
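The bias-variance trade-off mentioned here can be illustrated with a small curve-fitting experiment; the data, random seed, and polynomial degrees below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Noisy samples from a smooth underlying function.
x_train = np.linspace(0, 1, 20)
x_test = np.linspace(0.025, 0.975, 20)
def true_f(x):
    return np.sin(2 * np.pi * x)
y_train = true_f(x_train) + rng.normal(0, 0.3, x_train.size)
y_test = true_f(x_test) + rng.normal(0, 0.3, x_test.size)

def errors(degree):
    """Mean squared error on training and test data for a polynomial fit."""
    coeffs = np.polyfit(x_train, y_train, degree)
    def mse(x, y):
        return np.mean((np.polyval(coeffs, x) - y) ** 2)
    return mse(x_train, y_train), mse(x_test, y_test)

train_lo_deg, test_lo_deg = errors(1)   # degree 1: high bias, underfits the sine
train_hi_deg, test_hi_deg = errors(15)  # degree 15: high variance, fits the noise

print(train_hi_deg < train_lo_deg)  # True: the flexible model always wins on training data
print(test_lo_deg, test_hi_deg)     # out of sample, the advantage typically evaporates
```

The more flexible model drives training error down by memorizing noise, which is exactly why its out-of-sample behavior is less reliable, the phenomenon the paragraph calls irreducible trade-offs.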
Nevertheless, tailoring improves AI performance. Regarding image recognition, intimate knowledge of the object to be detected allows for greater specification, yielding higher accuracy. On the counter-force problem, however, a priori knowledge is not easily obtained; it is likely to be neither clean nor concise. As discussed above, because function cannot be fully represented in an image, it cannot be fully learned by the AI. Moreover, like most military affairs, counter-force is a contested and dynamic problem. Adversaries will attempt to conceal their mobile-missile launchers or change their design to fool AI-enabled ISR capabilities. They could also try to poison AI training data to induce misclassification. This is particularly problematic because of the one-off nature of a counter-force strike, which prevents validating AI performance with real-world experience. Simulations can only get AI so far.
When it comes to AI, near-perfect performance is tied inextricably to operating in environments that are predictable, even controlled. The counter-force challenge is anything but. Facing such a complex and dynamic problem set, AI would be constrained to lower levels of confidence. Sensor platforms would provide an abundance of imagery and modern precision-guided munitions could be expected to eliminate designated targets, but automated image recognition could not guarantee the detection of all relevant targets.
The Pitfalls of a Faulty Paradigm
Poor data and technological constraints limit AI's impact on the fundamental logic of nuclear deterrence, as well as on other problem sets requiring near-perfect levels of confidence. So why is the fuzzy buzz not making way for a more measured debate on specific merits and limitations?
The military-technological innovations of the past derived their power principally from the largely familiar and relatively intuitive physical world. Once the mechanics of aviation and satellite communication were understood, they were easily scaled up to enable the awesome capabilities militaries have at their disposal today. What many fail to appreciate, however, is how fundamentally differently the world of AI operates and the enduring obstacles it contains. This unfamiliarity with the rules of the computational world sustains the application of an ill-fitting innovation paradigm to AI.
As discussed above, when problems grow more complex, AI's time and resource demands increase exponentially. The traveling salesman problem provides a simple illustration: Given a list of cities and the distances between each pair of cities, what is the shortest possible route a salesman can take that visits each city and returns to the origin city? A desktop computer can answer this question for ten cities (and 3,628,800 possible routes) in mere seconds. With just 60 cities the number of possible routes exceeds the number of atoms in the known universe (roughly 10^80). Once the list gets up to 120 destinations, a supercomputer with as many processors as there are atoms in the universe, each of them capable of testing a trillion routes per second, would have to run longer than the age of the universe to solve the problem. Thus, in contrast to technological innovations rooted in the physical world, there is often no straightforward way to scale up AI solutions.
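The factorial growth behind these figures can be checked directly, counting routes the way the article does (every permutation of the full city list):

```python
import math

def route_count(n_cities):
    """Number of possible tours when every permutation of the city list
    is counted as a distinct route, matching the article's figures."""
    return math.factorial(n_cities)

print(route_count(10))            # 3628800: small enough to enumerate in seconds
print(route_count(60) > 10**80)   # True: more routes than atoms in the known universe
```

Dedicated TSP solvers use branch-and-bound and heuristics to avoid enumerating all permutations, but the worst-case explosion is what makes naive scaling hopeless.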
Moreover, machine intelligence is much different from human intelligence. When confronted with impressive AI results, some tend to associate machine performance with human-level intelligence without acknowledging that these results were obtained in narrowly defined problem sets. Unlike humans, AI lacks the capacity for conjecture and criticism to deal flexibly with unfamiliar information. It also remains incapable of learning rich, higher-level concepts from few reference points, so that it cannot easily transfer knowledge from one area to another. Rather, there is a high likelihood of catastrophic failure when AI is exposed to a new environment.
Understanding AIs Actual Impact on Deterrence and Stability
What should we make of the real advantages AI promises and the real limitations it will remain constrained by? As Work, Vreeland, and others have persuasively argued, AI could generate significant advantages in a variety of contexts. While the stakes are high in all military operations, nuclear weapons are particularly consequential. But because AI cannot reach near-perfect levels of confidence in dynamic environments, it is unlikely to solve the counter-force problem and imperil nuclear deterrence.
What is less clear at this time is how AI, specifically automated image recognition, will interact with other emerging technologies, doctrinal innovations, and changes in the international security environment. AI could arguably enhance nations confidence in their nuclear early warning systems and lessen pressures for early nuclear use in a conflict, for example, or improve verification for arms control and nonproliferation.
On the other hand, situations might arise in which an imperfect but marginally AI-improved counter-force capability would be considered good enough to order a strike against an adversary's nuclear forces, especially when paired with overconfidence in homeland missile defense. States with relatively small and vulnerable arsenals, in particular, would find it hard to regard as credible any assurances that AI would not be used to target their nuclear weapons. Their efforts to hedge against improving counter-force capabilities might include posture adjustments, such as pre-delegating launch authority or co-locating operational warheads with missile units, which could increase first-strike instability and heighten the risk of deliberate, inadvertent, and accidental nuclear use. Accordingly, future instabilities will be a product less of the independent effects of AI than of the perennial credibility problems associated with deterrence and reassurance in a world of ever-evolving capabilities.
As new technologies bring new forms of strategic competition, the policy debate must become better informed about technical matters. There is no better illustration of this requirement than in the debate about AI, where a fundamental misunderstanding of technical matters underpins a serious misjudgment of the impact of AI on stability. While faulty paradigms sustain misplaced expectations about AIs impact, poor data and technological constraints curtail its effect on the fundamental logic of nuclear deterrence. The high demands of counter-force and the inability of AI to provide optimal solutions for extremely complex problems will remain irreconcilable for the foreseeable future.
Rafael Loss (@_RafaelLoss) works at the Center for Global Security Research at Lawrence Livermore National Laboratory. He was a Fulbright fellow at the Fletcher School of Law and Diplomacy at Tufts University and recently participated in the Center for Strategic and International Studies Nuclear Scholars Initiative.
Joseph Johnson is a Ph.D. candidate in computer science at Brigham Young University. His research focuses on novel applications of game theory and network theory in order to enhance wargaming. He deployed to Iraq with the Army National Guard in 2003 and worked at the Center for Global Security Research at Lawrence Livermore National Laboratory.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The views and opinions expressed herein do not necessarily state or reflect those of Lawrence Livermore National Security, LLC., the United States government, or any other organization. LLNL-TR-779058.
Image: U.S. Air Force (Photo by Senior Airman Thomas Barley)
Posted: at 7:44 pm
And yet these concerns have not cast their shadow over India, since AI research is still in its infancy in the country
Artificial Intelligence, or AI, is the new digital frontier that will transform the way the world works and lives. Profoundly so. At a basic level of understanding, AI is the theory and development of computer systems that can perform tasks that normally require human intelligence, such as visual perception, speech recognition and even decision-making.
Its gradual development in the half century since 1956 when the term was first used gave us no hint of the extraordinary leaps in technology that would occur in the last decade and a half.
A research study by the WIPO (World Intellectual Property Organization) underlines this phenomenon with its findings: since the 1950s, innovators and researchers have published more than 1.6 million AI-related scientific publications and filed patent applications for nearly 340,000 inventions, most of it occurring since 2012.
Machine learning, finds the WIPO study, is the dominant AI technique, found in 40 per cent of all the AI-related patents it has studied. This trend has grown at an average rate of 28 per cent every year from 2013 onwards.
More data, increased connectedness and greater computer power have facilitated the new breakthroughs and the AI patent boom. As to which sectors are changing rapidly, the study shows it is primarily telecommunications, transportation and life or medical sciences. These account for 42 per cent of AI-related patents filed so far.
In short, superintelligence, which most of us believed was science fiction and a development far into the future, now appears imminent. That's why there is so much concern over the risks associated with AI, from greats of science like Stephen Hawking to technology giants such as Steve Wozniak and Elon Musk.
Of a piece is the unexpected caution being shown by the US Patent and Trademark Office. It has sought public comments on a range of AI-related concerns, many of which are centred on the diminishing role of humans in AI breakthroughs.
Among the questions it has posed is: What are the different ways that a natural person can contribute to the conception of an AI invention and be eligible to be a named inventor? Should an entity other than a natural person, or company to which a natural person assigns an invention, be able to own a patent on the AI invention?
The dilemma for patent offices which have not addressed this worry is whether existing patent laws on inventions need to be revised to take into account inventions where an entity (computers) other than a natural person has contributed greatly to its conception.
Such esoteric concerns have not cast their shadow over India, understandably so since AI research is still in its infancy here. The Global AI Talent Report 2018 finds that India is a bit player in this critical area where, predictably, the US and China are at the forefront. Of the 22,000 PhD-educated researchers worldwide working on AI, fewer than 50 are focused seriously on AI in India.
A NITI Aayog strategy paper on AI offers little hope because of the low intensity of research which is hobbled by lack of expertise, personnel and skilling opportunities and enabling data ecosystems. For momentous developments, watch the Chinese and American space.
(This article was first published in Down To Earth's print edition dated September 16-30, 2019)
Posted: at 7:44 pm
The journal Metabolism: Clinical and Experimental mentions in a recent review that the use of artificial intelligence (AI) in medicine has come to cover topics ranging from informatics to the application of nanorobots for the delivery of drugs. AI has come a long way from its humble beginnings. With the advanced development of AI systems and machine learning, more significant medical applications for the technology are emerging. According to Cloudwedge, FocalNet, an AI system recently developed by researchers at UCLA, can aid radiologists and oncology specialists in diagnosing prostate cancer.
According to UK Cancer Research Magazine, over 17 million cancer cases were diagnosed across the globe throughout 2018. The same research suggests there will be 27.5 million new cancer cases diagnosed each year by 2040.
Although these recent statistics seem discouraging, a comparison of diagnosis and treatment data shows that patient outcomes have improved significantly compared to a few decades ago: in the 1970s, less than a quarter of people suffering from cancer survived. Today, thanks to progress in the field, survival rates have significantly improved. AI is a part of that progress.
As early as 1988, The Annals of Internal Medicine mentioned that conventional computer-aided diagnoses were limited, and to overcome the shortfalls, researchers turned to artificial intelligence. However, because of the limited technology available at the time, the system had to be manually trained by medical personnel, and it's likely that this training only incorporated the personal experience of a handful of doctors. Despite these limitations, this set the stage for the use of neural networks in today's medical field.
These neural networks are the most basic form of artificial intelligence. Machine learning is the branch of AI focused on teaching machines to get better at tasks iteratively. By developing algorithms that help systems automatically determine where they were right and where they were wrong, the system could theoretically learn generations' worth of data in a short space of time. Despite the theoretical soundness of the technique, and the use of complex algorithms that can recognize behaviors and patterns, AI technology has only recently been able to offer the human-like insight and determinations required for it to excel in the medical field.
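The iterative improvement described above can be sketched as a minimal gradient-descent loop; the toy data, initial guess, and learning rate are invented for illustration:

```python
import numpy as np

# Toy data: the machine's "task" is to learn the slope of y = 3x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 3.0 * x

w = 0.0    # initial guess for the slope
lr = 0.02  # learning rate
for step in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # gradient of mean squared error w.r.t. w
    w -= lr * grad                       # adjust in the direction that reduces error

print(round(w, 3))  # 3.0: each iteration nudged the guess toward the true slope
```

Each pass measures where the current guess was wrong and corrects it slightly; modern neural networks run the same idea over millions of parameters at once.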
Nature reports that the New York Genome Center relies on a unique piece of software for screening its patients for glioblastoma: an artificial intelligence system developed by IBM called Watson. Watson gained fame in 2011 thanks to its excellent performance in a televised game show, but the AI is now being put to work aiding the diagnostic field. However, the system still needs more data to be trained to function appropriately, and as yet, AI isn't able to teach itself what is correct and what isn't. The goal for IBM's Watson is to be able to read patient files and then access the relevant information needed to give the most accurate diagnosis and treatment plan.
While it has the ability to understand the meaning of language and can develop on its own via machine learning, Watson still has a way to go before it can be introduced into the real world as an effective assistant. But even today, AI has shown its potential in some specialized medical tasks, with human help. According to a recent Northwestern University study, AI can outperform radiologists at cancer screening, especially in patients with lung cancer. The results show that using AI cut false positives by 11%. The medical field might not be so far away from having its own well-trained AI delivering proper diagnoses. It all depends on how fast AI technology advances and how quickly it can learn to diagnose like a human physician.
Microsoft chief Brad Smith issued a warning over the weekend that killer robots are unstoppable and a new digital Geneva Convention is required.
Most sci-fi fans will think of Terminator when they hear of killer robots. In the classic film series, a rogue military AI called Skynet gained self-awareness after spreading to millions of servers around the world. Concluding that humans would attempt to shut it down, Skynet sought to exterminate all of mankind in the interest of self-preservation.
While it was once just a popcorn flick, Terminator now offers a dire warning of what could be if precautions are not taken.
As with most technologies, AI will find itself increasingly used for military applications. The ultimate goal for general artificial intelligence is to self-learn. Combine both, and Skynet no longer seems the wild dramatisation that it once did.
Speaking to The Telegraph, Smith seems to agree. He points to developments in the US, China, UK, Russia, Israel, South Korea, and elsewhere, where autonomous weapon systems are all under development.
Wars could one day be fought on battlefields entirely with robots, a scenario that has many pros and cons. On the one hand, it reduces the risk to human troops. On the other, it makes declaring war easier and runs the risk of machines going awry.
Many technologists have likened the race to militarise AI to the nuclear arms race. In a pursuit to be the first and best, dangerous risks may be taken.
There's still no clear answer as to who is responsible for deaths or injuries caused by an autonomous machine: the manufacturer, the developer, or an overseer. This has also been a subject of much debate with regard to how insurance will work with driverless cars.
With military applications, many technologists have called for AI never to make a combat decision on its own, especially one that would result in fatalities. While AI can make recommendations, the final decision must be made by a human.
The story of Russian lieutenant colonel Stanislav Petrov in 1983 offers a warning of how a machine without human oversight may cause unimaginable devastation.
Petrov's computers reported that an intercontinental missile had been launched by the US towards the Soviet Union. The Soviet Union's strategy in such a scenario was an immediate, compulsory nuclear counter-attack against the US. Petrov trusted his instinct that the computer was incorrect and decided against launching a nuclear missile, and he was right.
Had the decision in 1983 of whether to launch a nuclear missile been made solely by the computer, one would have been launched and met with retaliatory launches from the US and its allies.
Smith wants to see a new digital Geneva Convention in order to bring world powers together in agreement over acceptable norms when it comes to AI. "The safety of civilians is at risk today. We need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers."
Many companies have pledged not to develop AI technologies for harmful use, including Google, where thousands of employees pushed back against a Pentagon contract to develop AI tech for drones.
Smith has launched a new book called Tools and Weapons. At the launch, Smith also called for stricter rules over the use of facial recognition technology. "There needs to be a new law in this space, we need regulation in the world of facial recognition in order to protect against potential abuse."
Last month, a report from Dutch NGO PAX said leading tech firms are putting the world at risk of killer AI. Microsoft, along with Amazon, was ranked among the highest risk. Microsoft itself warned investors back in February that its AI offerings could damage the company's reputation.
"Why are companies like Microsoft and Amazon not denying that they're currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?" said Frank Slijper, lead author of PAX's report.
A global campaign simply titled Campaign To Stop Killer Robots now includes 113 NGOs across 57 countries and has doubled in size over the past year.