DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism – VentureBeat


Researchers from Google's DeepMind and the University of Oxford recommend that AI practitioners draw on decolonial theory to reform the industry, put ethical principles into practice, and avoid further algorithmic exploitation or oppression.

The researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use in a preprint paper released Thursday. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohamed and Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations Secretary-General's High-level Panel on Digital Cooperation.

The researchers posit that power is at the heart of ethics debates and that conversations about power are incomplete if they do not include historical context and recognize the structural legacy of colonialism that continues to inform power dynamics today. They further argue that inequities like racial capitalism, class inequality, and heteronormative patriarchy have roots in colonialism and that we need to recognize these power dynamics when designing AI systems to avoid perpetuating such harms.

"Any commitment to building the responsible and beneficial AI of the future ties us to the hierarchies, philosophy, and technology inherited from the past, and a renewed responsibility to the technology of the present," the paper reads. "This is needed in order to better align our research and technology development with established and emerging ethical principles and regulation, and to empower vulnerable peoples who, so often, bear the brunt of negative impacts of innovation and scientific progress."

The paper incorporates a range of suggestions, such as analyzing data colonialism and the decolonization of data relationships, and employing the critical technical practice Philip Agre proposed in 1997.

The notion of anticolonial AI builds on a growing body of AI research that stresses the importance of including feedback from people most impacted by AI systems. An article released in Nature earlier this week argues that the AI community must ask how systems shift power and asserts that an indifferent field serves the powerful. VentureBeat explored how power shapes AI ethics in a special issue last fall. Power dynamics were also a main topic of discussion at the ACM FAccT conference held in early 2020 as more businesses and national governments consider how to put AI ethics principles into practice.

The DeepMind paper interrogates how colonial features are found in algorithmic decision-making systems and what the authors call "sites of coloniality," or practices that can perpetuate colonial AI. These include beta testing on disadvantaged communities, like Cambridge Analytica conducting tests in Kenya and Nigeria, or Palantir using predictive policing to target Black residents of New Orleans. There's also "ghost work," the practice of relying on low-wage workers for data labeling and AI system development. Some argue ghost work can lead to the creation of a new global underclass.

The authors define algorithmic exploitation as the ways institutions or businesses use algorithms to take advantage of already marginalized people and algorithmic oppression as the subordination of a group of people and privileging of another through the use of automation or data-driven predictive systems.

Ethics principles from groups like the G20 and OECD feature in the paper, as do issues like AI nationalism and the rise of the U.S. and China as AI superpowers.

"Power imbalances within the global AI governance discourse encompass issues of data inequality and data infrastructure sovereignty, but also extend beyond this. We must contend with questions of who any AI regulatory norms and standards are protecting, who is empowered to project these norms, and the risks posed by a minority continuing to benefit from the centralization of power and capital through mechanisms of dispossession," the paper reads. Tactics the authors recommend include political community action, critical technical practice, and drawing on past examples of resistance and recovery from colonialist systems.

A number of members of the AI ethics community, from relational ethics researcher Abeba Birhane to the Partnership on AI, have called on machine learning practitioners to place people who are most impacted by algorithmic systems at the center of development processes. The paper explores concepts similar to those in a recent paper about how to combat anti-Blackness in the AI community, Ruha Benjamin's concept of abolitionist tools, and ideas of emancipatory AI.

The authors also incorporate a sentiment expressed in an open letter Black members of the AI and computing community released last month during Black Lives Matter protests, which asks AI practitioners to recognize the ways their creations may support racism and systemic oppression in areas like housing, education, health care, and employment.


Microsoft Shuffles Sales, Marketing to Focus on Cloud, AI – Bloomberg

July 3, 2017, 1:19 PM EDT (updated 5:50 PM EDT)

Microsoft Corp. reorganized its sales and marketing operations in a bid to woo more customers in areas like artificial intelligence and the cloud by providing sales staff with greater technical and industry-specific expertise.

The changes will mean thousands of job cuts in areas such as field sales, said a person familiar with the restructuring, who asked not to be named because the workforce reductions aren't public. The company had 121,567 employees as of March 31. The memo didn't mention any job cuts.

The company unveiled the steps in an email to staff Monday that was obtained by Bloomberg. Commercial sales will be split into two segments: one targeting the biggest customers and one focused on small and medium-sized clients. Employees will be aligned around six industries: manufacturing, financial services, retail, health, education, and government. They'll focus on selling software in four categories: modern workplace; business applications; apps and infrastructure; and data and AI.


Microsoft is in a pitched battle with companies like Amazon.com Inc. and Alphabet Inc. for customers who want to move workplace applications and data to the cloud, as well as take advantage of advances in artificial intelligence. The company, which has not dramatically overhauled its salesforce in years, wants to better tailor those teams for selling cloud software rather than desktop and server solutions.

"There is an enormous $4.5 trillion market opportunity across our Commercial and Consumer businesses," according to the email, which was sent by Worldwide Commercial Business chief Judson Althoff, Global Sales and Marketing group leader Jean-Philippe Courtois, and Chris Capossela, the company's chief marketing officer.

In the consumer and device sales area, the Redmond, Washington-based company is creating six regions selling products like Windows software and Surface hardware, Office 365 cloud software for consumers and the Xbox game console. The group will also focus on new areas such as the Internet of Things, voice, mixed reality and AI.

Microsoft will track metrics including large companies deploying Windows 10, sales of Windows 10 Pro devices, and competition against Alphabet's Chromebooks and Apple Inc.'s iPads.

Microsoft aims to expand its consumer business by "creating desire for the same creativity tools that people have at work" through Surface, Windows devices, and Office 365, according to the memo.

In addition, "gaming is growing rapidly across all device types and is evolving to new scenarios like eSports, game broadcasting, and mixed reality content and we will drive growth in this category as well," according to the memo.


AI Is The Brains Exoskeleton – Forbes

Computers today are smart. Super smart. But we humans are still smarter.

We are now at a point with artificial intelligence (AI) and machine learning (ML) where we can use a new confluence of forces to increase human productivity and ingenuity. All the while, we must remember why we're using these new tools and how they can help us work smarter and faster.

If you saw the movie Aliens, you might remember the iconic image of Ripley encased in a mechanical exoskeleton, ready to take on the deadly alien queen. AI's impact on human intelligence is akin to a mechanical exoskeleton's effect on the human body. One turbocharges our strength, the other turbocharges our smarts.

The latest developments in AI can boost human intelligence, so we need to understand how and where to apply the new power we've been developing.

Human ingenuity can be augmented by AI.

Human ingenuity is critical

Human ingenuity will be critical to how we apply AI-powered insights. After all, we humans developed computer-based deep learning in the first place.

This new augmented intelligence has to be purpose-built and custom-crafted for specific workflows and applications in the workplace, something computers won't be able to pull off with the finesse, intuition, or ingenuity of humans anytime soon.

This new augmented intelligence also needs to be applied in a strategically directed, channelled, and prescriptive way. From an automation augmentation perspective, this means we're trusting the IT platform to (metaphorically) not only drive the car for us, but to know the road ahead, the potential accident hotspots, and the fastest way to the freeway.

This prescriptive ability to apply intelligence in the right place, at the right time, will help us engineer AI into our lives in the right way. Context and actionability are key; intelligence is only helpful when used at the time and place where it can have the most profound effect.

It's not me, it's you (and your data)

Organizations that realize you can't just plug in AI and hope for the best will be the ones that make the most of deep learning. These firms will be able to stress test their IT stacks and find out where bottlenecks and breaking points exist.

The other positive-negative side effect of this progressive approach is that it shows the organization where its data is inadequate. Good AI with poor data is bad AI. When you know where you have gaps in your data estate, where corruptions exist, and where data deduplication needs to happen, then your data provenance can be more diligently managed.
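As a toy illustration of the deduplication step mentioned above (the record fields and normalisation rules here are invented for the example, not any particular product's schema), duplicate detection can start with nothing fancier than normalising fields and grouping on a key:

```python
# Minimal sketch of data deduplication. Field names and the
# normalisation rule are illustrative assumptions.

def normalise(record):
    """Build a comparison key: trim whitespace, ignore case."""
    return (record["name"].strip().lower(), record["email"].strip().lower())

def deduplicate(records):
    """Keep the first record seen for each normalised key."""
    seen = set()
    unique = []
    for r in records:
        key = normalise(r)
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

records = [
    {"name": "Ada Lovelace", "email": "ada@example.com"},
    {"name": "ada lovelace ", "email": "ADA@example.com"},  # duplicate
    {"name": "Alan Turing", "email": "alan@example.com"},
]
print(len(deduplicate(records)))  # prints 2
```

Real data estates need fuzzier matching than this, but even a crude pass like this exposes how much of a dataset is redundant before it is fed to a model.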

Measuring our progress

There's little point in pushing upwards to the higher tiers of AI-empowered intellect if we don't track our progress.

Performance analytics is now a sub-sector of the IT industry in itself. This is where integrated data intelligence can allow a business to exchange insight between various applications and its wider IT stack. From this point, organizations can use predefined key performance indicators (KPIs) tuned to their industry vertical to measure progress and more clearly identify their next target areas for AI enrichment.

This is massively complex algorithmic logic that calculates metrics based upon historical, live transactional, and future predictive data stores. In simple, real-world terms, this translates into being able to use a responsive and interactive dashboard that presents data in a tangible form, with next-level drilldowns for more specific business areas.

A higher state

In many ways, what we've been talking about here is the human opportunity to get to a higher state of being: if not spiritually or emotionally, then at least in terms of professional workplace productivity, effectiveness, and ingenuity. Having said that, the result could be more uplifting than we ever intended.

As an industry, we have built these tools to become more effective in business and to provide better experiences for our employees and customers. As we strip out unnecessary, repetitive work and focus on more creative, less stressful, and more fulfilling roles, it's not unreasonable to suggest that our personal wellbeing and mindfulness will also benefit.

We started AI to evolve the business journey, but at its core, this is a human mission. Thank you, machines.


News digest – light-activated nanoparticles, Government's obesity plans, tumour 'glue' and AI research fund – Cancer Research UK – Science Blog

Scientists are developing light-activated nanoparticles to kill cancer cells Credit: Harry Gui on Unsplash

With news about the coronavirus pandemic developing daily, we want to make sure everyone affected by cancer gets the information they need during this time.

We're pulling together the latest government and NHS health updates from across the UK in a separate blog post, which we're updating regularly.

A new combined therapy with the drug brentuximab has been approved for adults with a rare type of fast-growing lymphoma. Clinical trial evidence suggests the treatment could give people with this type of blood cancer more time before their disease progresses. More on this in our news report.

Boris Johnson is set to announce restrictions on multi-buy and similar price promotions on a range of foods high in fat, sugar or salt in a bid to tackle rising obesity in the UK, according to The Times and The Guardian. The announcement comes off the back of evidence showing that 40% of food spending goes on products on promotion. However, health campaigners have expressed their disappointment at the seeming lack of action on junk food marketing. According to the leaked documents, a 9pm watershed on junk food adverts is not on the cards at this time, although the plans may change in the coming weeks.

Scientists have developed light-activated nanoparticles that kill skin cancer cells in mice. The treatment involves linking tiny particles to short pieces of RNA that inhibit the production of essential proteins that cancer cells need to survive. It's early days yet, but scientists are hopeful that the light-activated technology can help to reduce side effects and make the treatment more targeted. Read more on this at New Atlas.

An excess of a protein that's essential to cell division, PRC1, has been linked to many types of cancer, including prostate, ovarian and breast. Now scientists have found that the protein acts as a 'glue' during cell division, precisely controlling the speed at which DNA strands separate as a single cell divides. These findings could help to explain why too much or too little PRC1 disrupts the division process and can be linked to cancer developing. Full story at Technology Networks.

Earlier this month, the Government announced over £16 million in research funding to help improve the diagnosis of cancer and other life-threatening diseases, with Cancer Research UK putting in £3 million. Some of that money will go towards an Oxford-led project to improve lung cancer diagnosis. The team hope to use artificial intelligence to combine clinical, imaging and molecular data and make lung cancer diagnosis more accurate. More on this at Digital Health.

We've partnered with Abcam to develop custom antibodies that could facilitate cancer research. Dr John Baker at Abcam said: "We are proud to be working with Cancer Research UK to support their scientists and help them achieve their next breakthrough faster." Find out more at Cambridge Independent.

Technology Networks reports on a new study that has taken a closer look at how tiny bubble-like structures called vesicles can help cancer cells spread. Scientists found that vesicles from cancer cells contained high levels of proteins with lipid molecules attached, which are associated with the spread of cancer. We've blogged before about how tumours spread.

Scarlett Sangster is a writer for PA Media Group


The beginning of the road for AI in finance, the best is yet to come – Information Age

AI is one of several technologies that are disrupting the finance and banking industry. But the potential of AI in finance is only just beginning to be realised.

The potential for AI to disrupt the finance industry is there. But the best is yet to come.

AI is just one of several technologies that banks and other financial institutions are using to improve internal processes and bring new experiences to their customers. This is borne out of necessity: if traditional industries don't embrace advanced technologies in the right use cases, there is a real chance of disruption. Why would HSBC, for example, let a challenger like Starling Bank out-innovate them?

Both the large and emerging players in the finance industry are opening their arms to AI. AI-based chatbots, for example, are increasingly being used as the first point of contact for customers. This point was reiterated by HSBC's AI programme manager, Sebastian Wilson, during a recent roundtable hosted by Information Age: big banks are not standing still, because they realise the incredible level of service and personalisation that can be achieved when technology is used in the right way. It's easier for the disruptors, who don't have data silos and are largely based in the cloud, but the incumbents have resources.


Banks are also using AI to develop and target specific customer groups with highly personalised rates, offers, and pricing, according to Jonathan Shawcross, managing director of Banking at Gobeyond Partners.

Targeting key life events (such as buying a house) is nothing new in financial services. However, AI can improve the simplicity, speed, and precision of this marketing. In turn, the data generated is then used by the technology to learn, further improving targeting and consequently deepening customer relationships over time.

There is no doubt that the finance industry is in the midst of a transformation, largely because of advances in technology and the increased competition between the Davids and Goliaths of the world. But there is a real sense that the best is yet to come: the truly transformational applications of AI are still very much ahead of us, believes Shawcross. "We should expect to see large financial institutions really beginning to deploy highly intelligent, fast-learning systems to reduce friction in both sales and service experiences."

Kam Dhillon, principal associate at Gowling WLG, agrees that while there is greater innovation in the financial markets, the use of AI in finance is currently nascent.

The CTOs and technology leaders of financial institutions do not have their heads in the sand. They know how important AI will be to their organisation's business model moving forward. In one way, this is represented by the emerging job functions at both the large incumbents and the smaller disruptors. Lloyds Banking Group has a head of robotics, automation and AI operations, and Starling Bank has a head of AI. In every business in the finance sector, no matter the size, there will be growing teams dedicated to AI and related technologies.

AI in finance will grow in importance as it represents a significant opportunity to improve the online customer experience: less effort, more personalisation and faster resolutions. The technology is also able to leverage the huge amounts of customer data stored in the finance sector, which, thanks to PSD2 and Open Banking, can now be shared between third parties, to customise marketing messages and, in turn, enhance sales hit rates.

On the other side of profit and loss, Shawcross says that AI also represents an opportunity to cut costs and reduce risks. Given that interest margins show no sign of improving in the short term, CTOs remain under huge pressure to reduce costs. On the cost side, robotics can be used successfully to automate low-complexity processes. On the risk side, meanwhile, AI can help financial institutions reduce fraud and money laundering risks through pattern detection, voice and image recognition.
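As a loose illustration of the "pattern detection" described above (the threshold, transaction amounts and function names are invented for the example, and production fraud systems are far more sophisticated), one of the simplest screens flags transactions that deviate sharply from an account's own history:

```python
import statistics

# Crude fraud screen: flag any new transaction whose z-score against
# the account's historical amounts exceeds a threshold. All numbers
# below are illustrative assumptions.

def flag_anomalies(history, new_transactions, z_threshold=3.0):
    """Return the new transactions that look anomalous versus history."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return [t for t in new_transactions
            if abs(t - mean) / stdev > z_threshold]

history = [12.5, 9.9, 14.2, 11.0, 13.3, 10.8, 12.1, 9.5]
print(flag_anomalies(history, [11.9, 250.0, 13.0]))  # flags the 250.0 outlier
```

Real systems layer many such signals (merchant, geography, velocity) and learn the thresholds from data, but the underlying idea is the same: model normal behaviour and surface the deviations.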

There is certainly a cost and risk benefit to adopting AI, but at the same time there are also risks presented by the technology on a practical, ethical, legal and reputational level, according to Dhillon. "AI is complex and multifaceted and must be managed effectively. Subsequently, it is crucial that firms fully understand the technology used in AI and the governance around it," she explains.

The AI ethical play is a conversation that needs more visibility as the technology pervades every industry, not just finance. After all, without trust there is no innovation.


Shawcross advises organisations to invest time into understanding how to seamlessly integrate robotic and human processes. Firms, he says, must be mindful of when robots should hand over to humans, for instance when discretion or decisions are required or during more complex processes.

The big fear, as robots take over simpler processes and tasks, is that employees will be made redundant. As the technology advances and becomes more capable, organisations now need to think about how to upskill their people alongside this acceleration.

The automotive industry is a great example, and one that financial services firms could take inspiration from, suggests Shawcross. By using this model, they can help their teams become skilled technicians who oversee the work of robots, intervening when issues arise rather than performing the tasks themselves. This human transformation requires CTOs to work closely with their business and HR counterparts early in the planning cycle to enhance the chances of success.



Apple has started blogging to draw attention to its AI work – The Verge

After years of near-silence, Apple is slowly starting to make a bit of noise about its work on artificial intelligence. Last December the iPhone maker shared its first public research paper on the topic; this June it announced new tools to speed up machine learning on the iPhone; and today it started blogging. Sort of.

The company's new website, titled Apple Machine Learning Journal, is a bit grander than a blog. But it looks like it will have the same basic function: keeping readers up to date in a relatively accessible manner. "Here, you can read posts written by Apple engineers about their work using machine learning technologies," says the opening post, before inviting feedback from researchers, students, and developers.

As the perennial question for bloggers goes, however: what's the point? What are you trying to achieve? The answer is familiar: Apple wants more attention.

It's clear that the recent focus on AI in the world of tech hasn't been kind to the iPhone maker. The company is perceived as lagging behind competitors like Google and Facebook, both in terms of attracting talent and shipping products. Other tech companies regularly publish new and exciting research, which makes headlines and gets researchers excited to work for them. Starting a blog doesn't do much to counter the tide of new work coming out of somewhere like DeepMind, but it is another small step into public life. Notably, at the bottom of Apple's new blog, prominently displayed, is a link to the company's jobs site, encouraging readers to apply now.

What's most interesting, though, is the blog's actual content. The first post (actually a re-post of the paper the company published last December, but with simpler language) deals with one of the core weaknesses of Apple's AI approach: its lack of data.

Much of contemporary AI's prowess stems from its ability to sieve patterns out of huge stacks of digital information. Companies like Google, Amazon, and Facebook have access to a lot of user data, but Apple, with its philosophy of not snooping on customers in favor of charging megabucks for hardware, has rather tied its hands in that regard. The first post on its machine learning blog offers a small riposte, describing a method of creating synthetic images that can be used to train facial recognition systems. It's not groundbreaking, but it's oddly symbolic of what Apple's approach to AI needs to be. Probably a blog worth following, then.
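The general idea behind training on synthetic data can be sketched in miniature. This is not Apple's actual method (their paper refines simulator-generated images with neural networks); it is a toy, stdlib-only sketch showing that when labelled real data is scarce, labelled examples can be manufactured programmatically and a model fit to them:

```python
import random

# Toy sketch of learning from synthetic data: generate labelled 4x4
# "images" from a known recipe, then fit a nearest-centroid classifier.
# The image recipe and noise level are illustrative assumptions.

def synth_example(label, rng):
    """Class 0 is bright on the left half, class 1 on the right, plus noise."""
    img = []
    for row in range(4):
        for col in range(4):
            base = 1.0 if (col < 2) == (label == 0) else 0.0
            img.append(base + rng.gauss(0, 0.2))
    return img

def centroid(examples):
    n = len(examples)
    return [sum(e[i] for e in examples) / n for i in range(16)]

def classify(img, centroids):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda c: dist(img, centroids[c]))

rng = random.Random(0)
train = {c: [synth_example(c, rng) for _ in range(200)] for c in (0, 1)}
centroids = [centroid(train[0]), centroid(train[1])]

# Evaluate on freshly generated, previously unseen synthetic examples.
correct = sum(classify(synth_example(c, rng), centroids) == c
              for c in (0, 1) for _ in range(50))
accuracy = correct / 100
print(accuracy)
```

The hard part in practice, and the focus of Apple's paper, is making the synthetic distribution close enough to real-world data that the model transfers; the sketch sidesteps that by testing on the same generator it trained on.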


People are scared of artificial intelligence – here’s why we should embrace it instead – World Economic Forum

Artificial intelligence (AI) has gained widespread attention in recent years. AI is viewed as a strategic technology to lead us into the future. Yet, when interacting with academics, industry leaders and policy-makers alike, I have observed some growing concerns around the uncertainty of this technology.

In my observation, these concerns can be categorized into three perspectives:

It is understandable that people might have these concerns at this moment in time, and we need to face them. As long as we do, I believe we don't need to panic about AI and that society will benefit from embracing it. I propose we address these concerns as follows:

Instead of writing off AI as too complicated for the average person to understand, we should seek to make AI accessible to everyone in society. It shouldn't be just scientists and engineers who understand it; through adequate education, communication and collaboration, people will understand the potential value that AI can create for the community.

We should democratize AI, meaning that the technology should belong to and benefit all of society; and we should be realistic about where we are in AI's development.

We have made a lot of progress in AI. But if we think of it as a vast ocean, we are still only walking on the beach. Most of the achievements we have made are, in fact, based on having a huge amount of (labelled) data, rather than on AI's ability to be intelligent on its own. Learning in a more natural way, including unsupervised or transfer learning, is still nascent, and we are a long way from reaching AI supremacy.

From this point of view, society has only just started its long journey with AI and we are all pretty much starting from the same page. To achieve the next breakthroughs in AI, we need the global community to participate and engage in open collaboration and dialogue.


We can benefit from AI innovation while we are figuring out how to regulate the technology. Let me give you an example: Ford Motor produced the Model T car in 1908, but it took 60 years for the US to issue formal regulations on the use of seatbelts. This delay did not prevent people from benefitting significantly from this form of transportation. At the same time, however, we need regulations so society can reap sustainable benefits from new technologies like AI and we need to work together as a global community to establish and implement them.

The World Economic Forum was the first to draw the world's attention to the Fourth Industrial Revolution, the current period of unprecedented change driven by rapid technological advances. Policies, norms and regulations have not been able to keep up with the pace of innovation, creating a growing need to fill this gap.

The Forum established the Centre for the Fourth Industrial Revolution Network in 2017 to ensure that new and emerging technologies will help, not harm, humanity in the future. Headquartered in San Francisco, the network launched centres in China, India and Japan in 2018 and is rapidly establishing locally-run Affiliate Centres in many countries around the world.

The global network is working closely with partners from government, business, academia and civil society to co-design and pilot agile frameworks for governing new and emerging technologies, including artificial intelligence (AI), autonomous vehicles, blockchain, data policy, digital trade, drones, internet of things (IoT), precision medicine and environmental innovations.


By addressing the aforementioned concerns people may have regarding AI, I believe that Trustworthy AI will provide great benefits to society. There is already a consensus in the international community about the six dimensions of Trustworthy AI: fairness, accountability, value alignment, robustness, reproducibility and explainability. While fairness, accountability and value alignment embody our social responsibility, robustness, reproducibility and explainability pose massive technical challenges to us.

Trustworthy AI innovation is a marathon, not a sprint. If we are willing to stay the course and if we embrace AI innovation and regulation with an open, inclusive, principle-based and collaborative attitude, the value AI can create could far exceed our expectations. I believe that the next generation of the intelligence economy will be forged in trust and differentiated by perspective.


Written by

Bowen Zhou, President, JD Cloud & AI; Chair, JD Technology Committee; Vice-President, JD.COM

The views expressed in this article are those of the author alone and not the World Economic Forum.


THE SCHMIDT HITS THE BAN: Keep your gloves off AI, military top brass – The Register

RSA USA Alphabet exec chairman Eric Schmidt is worried that the future of the internet is going to be under threat once the world's militaries get good at artificial intelligence.

Speaking at the RSA security conference in San Francisco, Google's ultimate supremo said he is worried the internet will be balkanized if countries lock down their borders to prevent citizens' personal information flowing into other nations. That would obviously be bad news for a global cloud giant like Google.

Schmidt also fears states are developing their own AI-powered cyber-weapons for online warfare. He said machine-learning research needs to be out in the open under public scrutiny, not locked away in some secret military lab.

For one thing, that would help everyone prepare their network defenses for AI-driven attacks, as opposed to being blindsided by highly classified technology. It would also get folks talking about whether or not it's a good thing to put powerful AI into the hands of untouchable exploit-wielding government intelligence agencies.

"The technology industry needs to ask if we can come up with a way for countries not to use machine learning to militarise the internet," Schmidt said during a keynote address. "If they did, the internet would start to get shut down. I'd like to see discussions on stopping that."

This is one of the reasons Google will open-source as much of its AI research as possible, he said. While some companies (he mentioned no names) want to keep their AI research private, Google thinks the benefits of being open and scrutinized by the crowd far outweigh any loss of competitive advantage.

We have to say: that's kinda funny, Eric, because Google and its AI wing DeepMind are close to being the most secretive closed-source organizations on the planet.

Schmidt said his Chocolate Factory has plowed millions of dollars into building machine-learning software, and thus has something of an advantage. But the next big breakthrough could be achieved by someone working out of their garage, and that's healthy competition.

He had been surprised by the power of AI systems, given that research in the sector hit a brick wall in the 1980s. But increasing computing power, and better machine-learning programming and algorithms, will make artificial intelligence commonplace soon.

"The first area we are going to see it widely deployed is in computer vision," he predicted. Computers are already showing themselves to be superior to humans in this regard, he said, pointing to cases where computers are better at analyzing medical images than human doctors, in part because they have been trained on millions of images rather than the mere thousands a medic might see in their career.

Self-driving cars would also be early adopters, he said, for similar reasons. Then again, given the problems some Google cars have with balloons and bright sunlight, this may not come as fast as Schmidt thinks.

While a self-aware AI system is the stuff of popular fiction, Schmidt told the audience not to worry too much about it. While getting AI systems that share human values and which can be controlled is an important philosophical question, he said, there's no sign that the Singularity is on the horizon.

"We are nowhere near that in real life; we're still in baby stages of conceptual learning," he said. "In order to worry about that you have to think 10, 20, 30, or 40 years into the future."

See the rest here:

THE SCHMIDT HITS THE BAN: Keep your gloves off AI, military top brass - The Register

Active.AI and Glia join forces on customer service through conversational AI – Finextra

Active.Ai, a leading conversational AI platform for financial services, and Glia, a leading provider of Digital Customer Service, today announced a strategic partnership. Together, the fintechs are empowering financial institutions to meet customers in the digital domain and support them through conversational AI, allowing them to drive efficiencies, reduce cost and, most importantly, facilitate stronger customer experiences.

Glia's Digital Customer Service platform enables financial institutions to meet customers where they are and communicate with them through whichever methods they prefer, including messaging, video banking and voice, and guide them using CoBrowsing. Over 150 financial institutions have improved their top and bottom lines and increased customer loyalty by leveraging Glia's platform.

Over 25 leading financial institutions across the world use Active.Ai's platform to handle millions of interactions per month across simple and complex banking conversations. Active.Ai's low-code platform enables banks and credit unions to deploy and scale rapidly, with 150+ use cases pre-built out of the box to increase customer acquisition, reduce customer service turnaround time and deepen customer engagement.

"Being able to strategically blend AI and the human touch has become a key differentiator for banks and credit unions; doing so enables them to improve efficiencies while helping ensure every customer interaction is consistent, convenient and seamless," said Dan Michaeli, CEO and co-founder of Glia. "Our partnership with Active.AI will help further our mission of helping financial institutions modernize the way they support customers in the digital world."

"Customers today expect a frictionless omnichannel experience, and the future of financial services is all about AI/human collaboration. We are excited to partner with Glia to enable financial institutions to deliver great customer experiences and achieve higher NPS," says Ravi Shankar, CEO of Active.ai.

See the original post:

Active.AI and Glia join forces on customer service through conversational AI - Finextra

Artificial intelligence gets real in the OR – Modern Healthcare

Dr. Ahmed Ghazi, a urologist and director of the simulation innovation lab at the University of Rochester (N.Y.) Medical Center, once thought autonomous robotic surgery wasn't possible. He changed his mind after seeing a research group successfully complete a running suture on one of his lab's tissue models with an autonomous robot.

"It was surprisingly precise and impressive," Ghazi said. But what's missing from the autonomous robot is judgment, he said. "Every single patient, when you look inside to do the same surgery, is very different." Ghazi suggested thinking about autonomous surgical procedures like an airplane on autopilot: the pilot's still there. "The future of autonomous surgery is there, but it has to be guided by the surgeon," he said.

It's also a matter of ensuring AI surgical systems are trained on high-quality and representative data, experts say. Before implementing any AI product, providers need to understand what data the program was trained on and what data it considers to make its decisions, said Dr. Andrew Furman, executive director of clinical excellence at ECRI. What data were input for the software or product to make a particular decision must also be weighed, "and are those inputs comparable to other populations?" he said.

To create a model capable of making surgical decisions, developers need to train it on thousands of previous surgical cases. That could be a long-term outcome of using AI to analyze video recordings of surgical procedures, said Dr. Tamir Wolf, co-founder and CEO of Theator, a company that does just that.

While the company's current product is designed to help surgeons prepare for a procedure and review their performance, its vision is to use insights from that data to underpin real-time decision support and, eventually, autonomous surgical systems.

UC San Diego Health is using a video-analysis tool developed by Digital Surgery, an AI and analytics company Medtronic acquired earlier this year. The acquisition is part of Medtronic's strategy to bolster its AI capabilities, said Megan Rosengarten, vice president and general manager of surgical robotics at Medtronic.

"There's a lot of places where we're going to build upon that," Rosengarten said. She described a likely evolution from AI providing recommendations for nonclinical workflows, to offering intra-operative clinical decision support, to automating aspects of nonclinical tasks, and possibly to automating aspects of clinical tasks.

Autonomous surgical robots aren't a specific end goal Medtronic is aiming for, she said, though the company's current work could serve as building blocks for automation.

Intuitive Surgical, creator of the da Vinci system, isn't actively looking to develop autonomous robotic systems, according to Brian Miller, the company's senior vice president and general manager for systems, imaging and digital. Its AI products so far use the technology to create 3D visualizations from images and extract insights from how surgeons interact with the company's equipment.

To develop an automated robotic product, the company would have to solve a real problem identified by customers, Miller said, which he hasn't seen. "We're looking to augment what the surgeon or what the users can do," he said.

Read the original post:

Artificial intelligence gets real in the OR - Modern Healthcare

COVID-19 Impact Review: What to Expect from AI in Cyber security Industry in 2020? | Market Production-Consumption Ratio, Technology Study with…

According to AllTheResearch, the Global AI in Cybersecurity Market Ecosystem will see substantial growth, reaching USD 2.3 billion in 2023 and growing at a CAGR of 27.3% through 2027.

The increasing penetration of the internet in both developing and developed countries has driven up the adoption rate of solutions in the AI in Cybersecurity Market Ecosystem. The private financial and banking sector has been marked as a major industry for the use of these security solutions, followed by the healthcare, aerospace & defense, and automotive sectors.

The growth of this AI in Cyber Security Market is primarily driven by increasing disposable incomes and growing technological innovation in the logistics, healthcare, transportation, automotive, retail, BFSI and aerospace industries. AI in Cyber Security Ecosystem solutions are helping organizations monitor, detect, report and counter cyber threats to maintain data confidentiality. The increasing awareness among people, advancements in information technology, upgrades to intelligence and surveillance solutions, and the increasing volume of data gathered from various sources have driven demand for reliable and improved cyber security solutions in all industries.

Get Exclusive Sample PDF with Top Companies' Market Positioning Data: https://www.alltheresearch.com/sample-request/331

Cyber Security Compatible with Various Sectors: Cybersecurity refers to technologies and practices designed to protect systems and data from damage or unauthorized access.

Cybersecurity is essential since governments, organizations and military associations gather, process and store a great deal of data on computers.

Cyber attackers are investing in automation to launch attacks, while many organizations still rely on manual effort to combine internal security results and put them in context with information about external threats.

However, AI in Cyber Security solutions can detect patterns of malicious behavior in network traffic and in files and websites that are introduced to the network.

Impact of COVID-19:

The AI in Cyber Security Market report analyses the impact of coronavirus (COVID-19) on the AI in Cyber Security industry. Since the COVID-19 outbreak in December 2019, the disease has spread to 180+ countries around the globe, with the World Health Organization declaring it a public health emergency. The global impacts of the coronavirus disease 2019 (COVID-19) are already starting to be felt and will significantly affect the AI in Cyber Security market in 2020. The outbreak of COVID-19 has affected many aspects of life: flight cancellations; travel bans and quarantines; restaurant closures; restrictions on indoor events; emergencies declared in many countries; massive slowing of the supply chain; stock market unpredictability; falling business confidence; growing panic among the population; and uncertainty about the future.

COVID-19 can affect the global economy in 3 main ways: by directly affecting production and demand, by creating supply chain and market disturbance, and by its financial impact on firms and financial markets.

Get Sample PDF of COVID-19 ToC to understand the impact and be smart in redefining business strategies: https://www.alltheresearch.com/impactC19-request/331

Global AI in Cyber Security Market Segmentation:

The following top key players are profiled in this market study: Accenture, Capgemini, Cognizant, HCL Technologies Limited, Wipro

By Application: Logistics, Healthcare, Transportation, Automotive, Retail, BFSI, Aerospace, Consumer Electronics, Oil & Gas, Others

Global AI in Cyber security Ecosystem

The AI in Cyber security Ecosystem was dominated by North America in 2018, and the region accounted for a 38.3% share of overall revenue. The growth is attributed to the presence of prominent players such as IBM, Cisco Systems Inc., Dell, Root9B, Symantec, Trend Micro Inc., Check Point Software Technologies Ltd., Herjavec, and Palo Alto Networks, which offer advanced solutions and services to all the sectors in the region. Increasing awareness about cybersecurity among private and government organizations is anticipated to drive the need for cyber security solutions over the forecast period. North America is expected to retain its position as the largest market for cyber security solutions over that period.

Table of Contents (Ecosystem Report):

AI in Cyber Security Market Ecosystem Positioning

AI in Cyber Security Market Ecosystem Sizing, Volume, and ASP Analysis & Forecast

And More

View Complete Report with Different Company Profiles: https://www.alltheresearch.com/report/331/ai-in-cyber-security-ecosystem-market

AllTheResearch

Contact Person: Rohit B.

Tel: 1-888-691-6870

Email: [emailprotected]

About Us: AllTheResearch was formed with the aim of making market research a significant tool for managing breakthroughs in the industry. As a leading market research provider, the firm empowers its global clients with business-critical research solutions. Our study of numerous companies that rely on market research and consulting data for their decision-making made us realise that it's not just sheer data points but the right analysis that creates a difference.

Read more here:

COVID-19 Impact Review: What to Expect from AI in Cyber security Industry in 2020? | Market Production-Consumption Ratio, Technology Study with...

Step Up Your Content Marketing Game With AI – Built In

Whether you're running a company, working in a marketing department, or making your way as a freelance writer, you need content. You need to regularly research new topics and output new material to grow your brand awareness. The modern internet is basically a content arms race: the more you can produce, the more eyeballs you're going to capture. Fortunately, with automation and the right tools, you can boost your productivity in content marketing.

The content explosion we've seen is a bit of a double-edged sword. More content is available than ever before, but the growing mass of information and social media buzz can make it difficult to find valuable content on any given topic. In particular, raising awareness about your brand or product and making it stand out on the web is increasingly difficult.

Despite the difficulty, content marketing is a great way to grow a business, as I have pointed out previously. But with so much noise out there, how do you make sure your content is visible to your target audience?

Here's where automation and software come into play. With the right tools, you'll be able to navigate the maze of modern content and carve out a niche for you and your business. For the purpose of this article, I'm assuming you have already identified a niche and a target audience, and your goal is to understand this audience better and engage it in conversation through your content.

The first step in the content creation process is understanding the field you're entering. You need to know who is writing about a particular niche, what they're saying, and what kind of audience is receptive to the content and sharing it.

You can tackle these tasks using resources like Ahrefs or Buzzsumo. These tools are advanced trackers that allow you to:

Track particular keywords in a search engine

See search volume based on location and other details

See what people are sharing on social media platforms like Facebook, Twitter, LinkedIn

See what kinds of questions people ask on Quora or Reddit

Conducting this analysis will allow you to see which keywords are underserved; that is, where there is high demand in terms of searches but a low supply of content meeting that demand. Use this strategy to find phrases, keywords, and topics to write about. You'll be able to download reports in spreadsheet form and then analyze them further, either manually or with data science tools. For example, let's say you're running a gaming blog and want to write about gaming laptops. By looking at Ahrefs to find underserved content, you can quickly see that content on MSI and Asus gaming laptops is lacking. This could be a great story angle for your own content.
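The underserved-keyword idea above can be sketched in a few lines of Python. This is a hypothetical example, not any tool's actual export format: the `volume` (monthly searches) and `results` (competing pages) fields are invented stand-ins for whatever columns your downloaded report contains.

```python
# Hypothetical sketch: ranking keywords from an exported report by
# demand (search volume) relative to supply (competing pages).
# Field names are assumptions; adapt them to your real spreadsheet.

def underserved(keywords, min_volume=500):
    """Return keywords sorted by search volume per competing page."""
    scored = [
        {**kw, "score": kw["volume"] / max(kw["results"], 1)}
        for kw in keywords
        if kw["volume"] >= min_volume  # ignore keywords nobody searches for
    ]
    return sorted(scored, key=lambda kw: kw["score"], reverse=True)

# Stand-in for a report downloaded from a keyword tool.
report = [
    {"keyword": "gaming laptops", "volume": 40000, "results": 90000},
    {"keyword": "msi gaming laptop", "volume": 8000, "results": 4000},
    {"keyword": "asus gaming laptop", "volume": 9000, "results": 6000},
]

for row in underserved(report):
    print(row["keyword"], round(row["score"], 2))
```

The niche MSI and Asus terms bubble to the top because they have far fewer competing pages per search, which is exactly the "high demand, low supply" signal described above.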

You should also make use of Google Trends, a free tool that allows you to compare how the volume of various search queries has changed over a given period of time. This tool can provide a useful first step for checking what's trending.

All in all, this first step is a fundamental phase in which you build a list of keywords, and in particular underserved keywords, which are relevant to your brand. The next goal is to make these keywords into headlines and then stories.

The next step in content creation is to prepare headlines and choose angles for each story. You want to find something unique about the given piece of content that is both underserved and also coherent with the brand that youre building.

Usually, content managers handle this job, and this step is also the hardest to automate. You can extract some headlines from Ahrefs or Buzzsumo, but it's up to you to determine how to approach a particular story and what should be featured in it. For now, there's no software that can build the structure of a text for you. Sometimes you can modify existing headlines to make them work for you. For example, you can look at article naming conventions for your niche and adjust certain words, like tweaking "9 Best Content Marketing Tools You Should Try in 2020" to fit your specific content. Sometimes, though, you need to invent everything from scratch.

By researching what your competition is writing about, what people are currently searching for on the web, and what industry-specific topics are relevant on Reddit and Quora, you will be able to findand then fillgaps in coverage. Once you've developed the headlines and angles, you're ready to begin generating content.

Once you've got your keywords from step one and ideas for particular stories from step two, the next step is to write them. You can consider various options at this stage depending on your budget, speed, and scale. Note that these options apply only if you don't want to create content yourself, which would be another strategy you could employ.

The cheapest option is to find writers on one of the many services for freelance work. Currently, the most popular are Fiverr and Upwork, and both work well when it comes to writing. You can choose from numerous writers to suit your needs, and you can find both individual freelancers and agencies.

If you want to work with a larger team of writers on a more consistent basis, you should use content curation software like Curata or Contentful. These allow you to organise your content creation efforts in a single place so you can track your content calendar, see who's working on what, and publish pieces on a schedule. Be aware, though, that this is a more expensive option than hiring freelancers directly on Fiverr or Upwork.

Another option, if you're on a budget or you just want to scale your content creation by producing thousands of texts quickly, is to use AI tools. Current machine learning solutions are still not perfect and won't replace copywriters in general, but they can boost your productivity considerably. Moreover, thanks to templates, you'll be able to use one form to create dozens or hundreds of texts. Often you just upload a spreadsheet with your data and create a sample text from one example to serve as a template; the process then iterates over all the other rows, producing content at scale. Traffic and weather news reporting, for example, is automated this way. The most advanced AI-assisted writing solutions on the market right now are Arria NLG and Contentyze.
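As a rough illustration of how that spreadsheet-plus-template workflow operates (the weather fields and template text here are invented for the example; real platforms like Arria NLG or Contentyze are far more sophisticated):

```python
# A minimal sketch of template-driven content generation: one sample
# text becomes a template, then every spreadsheet row fills it in.

import csv
import io

TEMPLATE = "Today in {city}: expect {conditions} with a high of {high}\u00b0C."

# Stand-in for an uploaded spreadsheet (one row per article to generate).
sheet = io.StringIO(
    "city,high,conditions\n"
    "Berlin,21,light rain\n"
    "Madrid,33,clear skies\n"
)

# Iterate the template over every row, producing content at scale.
articles = [TEMPLATE.format(**row) for row in csv.DictReader(sheet)]
for text in articles:
    print(text)
```

Two rows yield two finished snippets; a spreadsheet with a thousand rows would yield a thousand, which is exactly how automated traffic and weather reports are produced.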

The final step in content creation is correction and distribution. Grammarly is a great tool for scanning any text for potential errors. Moreover, it will give you suggestions on style and allow you to check the content for originality. Original content is especially important for ranking high in search engines.

To that end, don't forget about SEO optimization. Making sure that all the necessary keywords from step one are in place is crucial to making your content easier for readers to discover. You can give SurferSEO a try for automated SEO suggestions. Alternatively, you can delegate SEO tasks to freelancers, just as you did with writing. Fiverr and Upwork feature plenty of SEO experts too.
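A toy version of that keyword check might look like the following. It is a naive case-insensitive substring match, nothing like the scoring a dedicated tool such as SurferSEO performs, and the draft and target list are invented for the example:

```python
# Hypothetical helper: verify that the keywords gathered in step one
# actually appear in a draft before publishing.

def missing_keywords(draft, keywords):
    """Return the keywords that never appear in the draft (case-insensitive)."""
    text = draft.lower()
    return [kw for kw in keywords if kw.lower() not in text]

draft = "Our roundup of gaming laptops covers MSI and Asus models..."
targets = ["gaming laptops", "MSI", "Asus", "budget gaming laptop"]

print(missing_keywords(draft, targets))  # prints ['budget gaming laptop']
```

Anything the function returns is a keyword the draft still needs to work in before it goes out the door.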

When it comes to distribution of content, tools like Curata or Contentful can do the job by allowing you to create complex content schedules. Alternatively, you can use Hootsuite or Buffer to manage distribution across various social media channels from one place.

All in all, each step of the content creation process can be fairly well automated or delegated with the right tools. From the point of view of your brand, what's most important is creating a coherent, long-term content strategy that guides you in choosing what topics to approach and how to approach them on a macro level. Once you've got a strategy, micro-level content creation, from researching to writing to distribution, is much easier to automate.

Expert Contributors

Built In's expert contributor network publishes thoughtful, solutions-oriented stories written by innovative tech professionals. It is the tech industry's definitive destination for sharing compelling, first-person accounts of problem-solving on the road to innovation.

Go here to see the original:

Step Up Your Content Marketing Game With AI - Built In

Coca-Cola Wants to Use AI Bots to Create Its Ads – Adweek

BARCELONA, SPAIN: Coca-Cola is one of the most beloved brands in the world and is known for creating some of the best work in the advertising industry. But can an AI bot replace a creative? Mariano Bosaz, the brand's global senior digital director, wants to find out.

Speaking to Adweek at Mobile World Congress on Tuesday, Bosaz said that he's partly in Barcelona this week to get a better feel for how brands can use artificial intelligence, because he's interested in swapping in robots for the humans who crank out ads.

"Content creation is something that we have been doing for a very long time: we brief creative agencies and then they come up with stories that they audio visualize and then we have 30 seconds or maybe longer," Bosaz said. "In content, what I want to start experimenting with is automated narratives."

As part of a recent restructuring to make Coke a digital business, the brand hired its first chief digital marketing officer, David Godsman. That digital transformation includes four focus areas: customer and consumer experience, operations, new businesses and culture. Within the customer and consumer bucket, Coke is interested in using artificial intelligence to improve content, media and commerce, particularly when it makes the creative process more effective.

In theory, Bosaz thinks AI could be used by his team for everything from creating music for ads and writing scripts to posting a spot on social media and buying media. "It doesn't need anyone else to do that but a robot; that's a long-term vision," he said. "I don't know if we can do it 100 percent with robots yet (maybe one day) but bots are the first expression of where that is going."

Bosaz isn't alone in envisioning human-less creative work. AI is already being used to create commercial music and jingles, and publishers like the AP are experimenting with using robots to write copy.

He noted that while bots and data may not be able to write an entire script, they are capable of putting together the first five seconds of a commercial or the end of a spot, because you always have the same closing in Coke ads, where an image of the brand's logo flashes across the screen with a tagline.

In terms of Coca-Cola's interest in AI for media buying, Bosaz said that Coke already buys ads programmatically but that it's far from putting more than half of its media budget into programmatic.

Coca-Cola is also looking for ways to use programmatic technology to fulfill ecommerce sales through tactics like subscriptions, though Bosaz didn't say exactly what that may entail.

Right now, Coke sells through third-party retailers like Amazon and Tesco and through vending machines, and a small portion of sales comes from direct-to-consumer programs like Share a Coke.

Souped-up vending machines are particularly intriguing in countries like Japan, where mobile adoption and vending machine sales are high. Coke has a Japanese app called Coke On that lets consumers pay for drinks. "Once you have that, then you can use beacons so that you know when people are passing by the machines and you can understand habits of consumption, location and time," Bosaz said.

At the same time, Bosaz said marketers need to keep privacy concerns in mind with the Internet of Things and find the right balance of using consumer data to provide better services that consumers appreciate, without crossing the line.

That includes devices like Amazon Echo, and also Coke's own packaging, bottles and trucks. For example, the brand is testing beacons in Belgian retail stores that pull in live data as shoppers move around the store.

"You can see that [in] real-time and then you have historical data that [helps] you predict behavior," Bosaz said.

At Mobile World Congress, Bosaz said that he's looking for the next breakout technology that's also big enough to work for a massive brand like Coke.

"Here, I try to understand timing," he said. "You need to have the right timing. If you try to implement a technology that is too old, obviously it's obsolete, but if you're trying to implement technology too early, you're going to have a problem with scale."

Follow this link:

Coca-Cola Wants to Use AI Bots to Create Its Ads - Adweek

The Future of AI Regulation: Part 1 – The National Law Review

INTRODUCTION

Artificial intelligence (AI) systems have raised concerns in the public, some speculative and some based in contemporary experience. Some of these concerns overlap with concerns about the privacy of data, some relate to the effectiveness of AI systems and some relate to the possibility of the misapplication of the technology. At the same time, the development of AI technology is seen as a matter of national priority, and fears of losing the AI technology race fuel national efforts to support its development.

The healthcare and life sciences sectors are highly influenced by US government policy; accordingly, these industry sectors should carefully monitor US government policy pronouncements on AI. This special report is the first of two that will review the US government's overarching national policy on AI, as articulated in Executive Order 13,859, Maintaining American Leadership in Artificial Intelligence (Executive Order), and the related draft Office of Management and Budget memorandum entitled Guidance for Regulation of Artificial Intelligence Applications (Draft Memo). While these two special reports will provide a high-level review of these documents, they will also highlight certain aspects and other recent developments that may be related to the Executive Order and the Draft Memo.

Artificial intelligence (AI) systems have raised concerns in the public, some speculative and some based in contemporary experience. Some of these concerns overlap with concerns about the privacy of data, some relate to the effectiveness of AI systems and some relate to the possibility of the misapplication of the technology. These concerns are heightened by the relative lack of a specific legal and regulatory environment that creates guardrails for the development and deployment of AI systems. Indeed, the potential use cases of this new technology are startling: self-driving cars, highly accurate medical diagnosis and screenplay writing are all tasks that AI systems have proven themselves capable of performing. The black box nature of some of these systems, where there is an inability to fully understand how or why an AI system performs as it does, adds to the anxiety about how they are developed and deployed.

At the same time, many nations view the development of AI technologies as a matter of national concern. Economic and academic competitiveness in the field is growing, and some governments are concerned that commercial enterprise alone will be insufficient to remain competitive in AI. It is not surprising, then, that governments around the world, including the US government, are beginning to address national strategies for the support of AI development, while at the same time struggling with the issue of regulation, preliminarily, conceptually and directly.

The role of the government in every industry can be significant, even in a market-driven economy like the US. This is particularly true for those industries that are susceptible to innovation through AI technologies and are also highly regulated, controlled or supplied by governments, such as healthcare. Accordingly, the healthcare and life science industries should pay particular attention to governmental pronouncements on policy related to AI.

On January 13, 2020, the Office of Management and Budget (OMB) published a request for comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, Guidance for Regulation of Artificial Intelligence Applications (the Draft Memo).1 OMB produced the Draft Memo in accordance with the requirements of Executive Order 13,859, Maintaining American Leadership in Artificial Intelligence (the Executive Order).2 The Executive Order called on OMB, in coordination with the Office of Science and Technology Policy Director, the Director of the Domestic Policy Council and the Director of the National Economic Council, to issue a memorandum that will:

(i) Inform the development of regulatory and non-regulatory approaches by such agencies regarding technologies and industrial sectors that are either empowered or enabled by AI, and that advance American innovation while upholding civil liberties, privacy and American values; and

(ii) Consider ways to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.3

The Executive Order also required OMB to issue a draft version for public comment to help ensure public trust in the development and implementation of AI applications.4 Public comments on the Draft Memo are due March 13, 2020.5 Although the Draft Memo, like the Executive Order, speaks in general terms, it does provide more focus than the Executive Order in many ways. For example, the Executive Order requires implementing agencies to review their authorities relevant to applications of AI and submit plans to OMB to ensure consistency with the final OMB memorandum.6 The Draft Memo provides additional specificity regarding the information that the agencies must incorporate in their respective plans.7

This special report is the first of two that will review the five guiding principles and six strategic objectives articulated in the Executive Order and the specific provisions of the Draft Memo. While these two reports will provide a high-level review of these documents, they will also highlight certain aspects and other recent developments that may be related to the Executive Order and the Draft Memo. These articles will not, however, address national defense matters.

The Executive Order makes very clear that maintaining American leadership in AI is a paramount concern of the administration because of its importance to the economy and national security. In addition, the Executive Order recognizes the important role the Federal Government plays:

[I]n facilitating AI R&D, promoting the trust of the American people in the development and deployment of AI-related technologies, training a workforce capable of using AI in their occupations, and protecting the American AI technology base from attempted acquisition by strategic competitors and adversarial nations.8

The Executive Order identifies objectives that executive departments and agencies should pursue, which primarily address how the federal government can participate in developing the US AI industry. These objectives are as follows:

1. PROMOTE AI R&D INVESTMENT: Promote sustained investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities to generate technological breakthroughs in AI and related technologies and to rapidly transition those breakthroughs into capabilities that contribute to our economic and national security.9

The first objective has a few interesting components. First, the reference to collaboration includes international partners and allies. This implies that the current administration considers the US AI industry to be international in scope and also, perhaps, partly governmental. In particular, the reference to allies implies that foreign governments may be partners in the development of the US AI technology industry, presumably, at least, with respect to national security matters. Second, this objective specifically references investment, implying that the administration anticipates financial investment from the identified collaboration partners, including non-US industry and governments. How agencies achieve this objective will be fascinating to see, particularly in light of US government restrictions on foreign investment in sensitive US industries and the recently enacted regulations implementing the Foreign Investment Risk Review Modernization Act of 2018.10

Federal policy on investment in AI is the subject of the National Artificial Intelligence Research and Development Strategic Plan (the AI R&D Plan),11 a product of the work of the National Science & Technology Council's Select Committee on Artificial Intelligence. The AI R&D Plan is broadly consistent with the Executive Order, but its objectives and goals pre-date the Executive Order and were not changed after it issued. Other Federal agencies have also begun actively engaging in efforts to support AI development, including healthcare-related agencies. The Centers for Medicare and Medicaid Services (CMS), citing the Executive Order, announced an AI Health Outcomes Challenge that will include a financial award to selected participants.12 CMS has selected participating organizations that span a number of industry sectors, including large consulting firms, academic medical centers, universities, health systems, large and small technology companies, and life sciences companies.13 In addition, a recent report on roundtable discussions co-hosted by the Office of the Chief Technology Officer of the Department of Health and Human Services and the Center for Open Data Enterprise (the Code Report) has identified a number of recommendations for Federal investment within its own infrastructure to support R&D efforts within and without the Federal Government.14

2. OPEN GOVERNMENT DATA:

Enhance access to high-quality and fully traceable Federal data, models, and computing resources to increase the value of such resources for AI R&D, while maintaining safety, security, privacy and confidentiality protections consistent with applicable laws and policies.15

This objective should resonate with developers who believe the Federal Government holds valuable data for purposes of AI R&D. The Code Report has already identified potentially valuable healthcare-related data within the Federal Government (and elsewhere) and presented a series of recommendations consistent with the Executive Order objectives. In addition, the AI R&D Plan calls for the sharing of public data as well.

3. REDUCE BARRIERS:

Reduce barriers to the use of AI technologies to promote their innovative application while protecting American technology, economic and national security, civil liberties, privacy, and values.16

Reducing barriers to the use of AI technologies is an objective that implicates the existing regulatory landscape, as well as the potential regulatory landscape for AI technologies. Clearly, this objective is a call for agencies and departments to carefully balance the impact of regulations on development and deployment against what can only be described as an amorphous set of values. It remains to be seen whether more definition will emerge, although recent legislative efforts and regulations already reflect certain values. For example, pending legislation in the State of Washington would require facial recognition services to be susceptible to independent tests for accuracy and unfair performance differences across distinct subpopulations, which can be defined by race, skin tone, ethnicity and other factors.17 The law would also require meaningful human review of all facial recognition services used to make decisions that produce legal effects, or similarly significant effects, on consumers.18

4. TECHNICAL STANDARDS:

Ensure that technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities.19

This objective covers considerable ground and seems to imply a significant role for the Federal Government in setting the objectives for technical standards for AI. In the summer of 2019, the National Institute of Standards and Technology (NIST) of the US Department of Commerce released a plan for Federal engagement in developing technical standards for AI in response to the Executive Order (the NIST Plan).20 The NIST Plan clearly articulates the Federal Government's perspective on how standards should be set in the US, including a recognition of the impact of other governments' approaches:

The standards development approaches followed in the United States rely largely on the private sector to develop voluntary consensus standards, with Federal agencies contributing to and using these standards. Typically, the Federal role includes contributing agency requirements to standards projects, providing technical expertise to standards development, incorporating voluntary standards into policies and regulations, and citing standards in agency procurements. This use of voluntary consensus standards that are open to contributions from multiple parties, especially the private sector, is consistent with the US market-driven economy and has been endorsed in Federal statute and policy. Some governments play a more centrally managed role in standards development-related activities, and they use standards to support domestic industrial and innovation policy, sometimes at the expense of a competitive, open marketplace. This merits special attention to ensure that US standards-related priorities and interests, including those related to advancing reliable, robust, and trustworthy AI systems, are not impeded.21

The development of industry standards is already happening, evidenced, for example, by the publication of AI-related standards, including in healthcare, by the Consumer Technology Association.22 Another interesting aspect of this objective is to ensure that the standards reflect Federal priorities related to public trust and confidence in AI systems. An exploration of the issue of public trust is well beyond the scope of this short article, but even the most casual observer of this industry will note the very real lack of confidence in AI systems and fear associated with how they are being or may, in the future, be deployed.23

5. NEXT GENERATION RESEARCHERS:

Train the next generation of American AI researchers and users through apprenticeships; skills programs; and education in science, technology, engineering, and mathematics (STEM), with an emphasis on computer science, to ensure that American workers, including Federal workers, are capable of taking full advantage of the opportunities of AI.24

The need for education related to advances in technology is obvious and is reflected in both the Code Report and the AI R&D Plan. It will be interesting to see how the Federal Government achieves this objective, particularly given the many cross-disciplinary applications available. Already, some are reconsidering medical education in light of the advancement of AI systems.25

6. ACTION PLAN:

Develop and implement an action plan, in accordance with the National Security Presidential Memorandum of February 11, 2019 (Protecting the United States Advantage in Artificial Intelligence and Related Critical Technologies) (the NSPM), to protect the advantage of the United States in AI and technology critical to United States economic and national security interests against strategic competitors and foreign adversaries.26

At the same time that the White House issued the Executive Order, the Department of Defense launched its own AI strategy. That subject is beyond the scope of these articles.

The guiding principles articulated in the Executive Order are, in some instances, little more than restatements of aspects of the objectives. Given the general nature of the objectives, this is not surprising. Nonetheless, some of the guiding principles highlight critical issues.

1. COLLABORATION: The United States must drive technological breakthroughs in AI across the Federal Government, industry, and academia in order to promote scientific discovery, economic competitiveness, and national security.27

Collaboration, as we have seen, is a theme that permeates many of the strategic objectives. Collaboration across industry sectors and government can be a challenge, but public-private partnerships have a long history in the United States and elsewhere.

2. DEVELOP TECHNICAL STANDARDS AND REDUCE BARRIERS: The United States must drive development of appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today's industries.28

Developing and deploying new technology within sensitive sectors, such as healthcare, requires balancing issues of safety against the risk of overly burdensome regulation. The Food and Drug Administration (FDA) has been wrestling with this challenge for some time with respect to the treatment of clinical decision support tools covered in the 21st Century Cures Act, as well as digital health more broadly. Recently, the FDA published a discussion paper that offers suggested approaches to the FDA clearance process designed to ensure efficacy while streamlining the review process.29

3. WORKFORCE: The United States must train current and future generations of American workers with the skills to develop and apply AI technologies to prepare them for today's economy and jobs of the future.30

This guiding principle reads more like an objective and is very closely aligned with the fifth objective of the Executive Order. As noted already, we are seeing the need for cross-disciplinary training in areas where AI systems are likely to have application, and furthering the preparation of our workforce for these systems will be critical.

4. TRUST: The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.31

The need for trust and confidence in AI systems for us to take full advantage of the benefits they promise is universally understood. This is a subject that will be explored in other articles within this series.

5. INTERNATIONALIZATION: The United States must promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting our technological advantage in AI and protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.32

This guiding principle reflects longstanding US policy goals of opening markets for US industry participants while protecting their valuable intellectual property. In addition, the protection of vital US industries from foreign ownership or control has been of interest to the US government for many years, and, as noted above, the tools at the government's disposal to protect this interest have been strengthened.

Even taken together, the objectives and guiding principles set forth in the Executive Order provide only a general sense of focus and direction, but it would have been surprising had they been more specific. The goals of the Federal Government are broad, cut across multiple government agencies and functions, include the collaboration of industry and foreign interests, and address the government as both regulator and participant in the development of the AI industry. Since the issuance of the Executive Order, Federal agencies have been moving forward and have begun addressing its goals. Greater specificity is coming.

Regardless, a few themes can certainly be pulled from the Executive Order. First, it is clear that this administration views the Federal Government as an active participant in the development of the US AI industry. While not without some downside risk, this generally bodes well for the industry in terms of investment, workforce training, access to data and other Federal resources and, potentially, the presence of a convener of resources.

Second, this administration recognizes the importance of international collaboration but is also acutely aware of the potential dangers and risks. The extent to which, and the ways in which, this and future administrations balance the risk and reward of international collaboration in AI is yet to be defined. Third, standards need to be established. This is perhaps the most obvious of the objectives set forth, but it is also the one most fraught. The link between trust and standards, and the degree and type of regulation applied to the AI industry, are all yet to be developed. Here, every agency and organization must contemplate the market, public perception, effective testing criteria, and the appropriate roles for government and self-regulation.

The final theme, and perhaps the key takeaway, is that we are not there yet. The Executive Order is a call to action for the executive departments and agencies to start the process of coalescing around a central set of general objectives. We are far from seeing what this might look like, although many agencies have been addressing AI issues for years. A key development, and a key next step, will be the finalization of the Draft Memo and the development of executive department and agency work plans.

1 85 Fed. Reg. 1731, 1825 (Jan. 13, 2020). The full text of the Draft Memo is available on the White House website at https://www.whitehouse.gov/wp-content/uploads/2020/01/DraftOMB-Memo-on-R....

2 Exec. Order No. 13,859, Maintaining American Leadership in Artificial Intelligence, 84 Fed. Reg. 3967 (Feb. 11, 2019), available at https://www.whitehouse.gov/presidential-actions/executive-ordermaintaini... (hereinafter Exec. Order).

3 Id. § 6(a).

4 Id. § 6(b).

5 85 Fed. Reg. at 1825.

6 Exec. Order § 6(c).

7 Draft Memo, p. 10.

8 Exec. Order § 1.

9 Exec. Order § 2(a).

10 David J. Levine et al., Final Rules Issued on Reviews of Foreign Investments in the United States CFIUS (Jan. 23, 2020), available at https://www.mwe.com/insights/final-rules-issued-on-reviews-offoreign-inv....

11 NAT'L SCI. & TECH. COUNCIL, SELECT COMM. ON ARTIFICIAL INTELLIGENCE, THE NATIONAL ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT STRATEGIC PLAN: 2019 UPDATE, available at https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf.

12 CMS Newsroom, CMS Launches Artificial Intelligence Health Outcomes Challenge (Mar. 2019), available at https://www.cms.gov/newsroom/press-releases/cms-launchesartificial-intel....

13 AI Health Outcomes Challenge, available at https://ai.cms.gov/.

14 THE CENTER FOR OPEN DATA ENTERPRISE AND THE OFFICE OF THE CHIEF TECHNOLOGY OFFICER AT THE U.S. DEP'T OF HEALTH & HUM. SERVS., SHARING AND UTILIZING HEALTH DATA FOR AI APPLICATIONS: ROUNDTABLE REPORTS (2019), p. 15, available at https://www.hhs.gov/sites/default/files/sharing-and-utilizing-healthdata... (hereinafter Code Report).

15 Exec. Order § 2(b).

16 Exec. Order § 2(c).

17 Washington Privacy Act, S.B. 6281, § 17(1) (2020) (hereinafter Washington Privacy Act).

18 Id. § 17(7). The notion of human intervention between an AI system and an individual is not limited to this legislation. It is widely discussed as a core ethical concern related to AI systems and has been adopted in some corporate policies (see, e.g., https://www.bosch.com/stories/ethical-guidelines-for-artificialintellige...).

19 Exec. Order § 2(d).

20 NIST, U.S. LEADERSHIP IN AI: A PLAN FOR FEDERAL ENGAGEMENT IN DEVELOPING TECHNICAL STANDARDS AND RELATED TOOLS (Aug. 2019), available at https://www.nist.gov/system/files/documents/2019/08/10/ai_standar ds_fedengagement_plan_9aug2019.pdf (hereinafter NIST Plan).

21 NIST Plan, p. 9.

22 See https://shop.cta.tech/collections/standards/artificialintelligence.

23 The issues surrounding trust in AI systems will be explored in a future article in this series.

24 Exec. Order § 2(e).

25 See, e.g., Steven A. Wartman & C. Donald Combs, Reimagining Medical Education in the Age of AI, 21 AMA J. ETHICS 146 (Feb. 2019), available at https://journalofethics.amaassn.org/sites/journalofethics.ama-assn.org/f....

26 Exec. Order § 2(f).

27 Id. § 1(a).

28 Exec. Order § 1(b).

29 FOOD & DRUG ADMIN., PROPOSED REGULATORY FRAMEWORK FOR MODIFICATIONS TO ARTIFICIAL INTELLIGENCE/MACHINE LEARNING (AI/ML)-BASED SOFTWARE AS A MEDICAL DEVICE (SaMD), available at fda.gov/files/medical%20devices/published/US-FDA-ArtificialIntelligence-and-Machine-Learning-Discussion-Paper.pdf.

30 Exec. Order § 1(c).

31 Id. § 1(d).

32 Exec. Order § 1(e).

The Future of AI Regulation: Part 1 - The National Law Review

AI technology will soon replace error-prone humans all over the world but here’s why it could set us all free – The Independent

It has been oft-quoted, albeit humorously, that the ideal of medicine is the elimination of the physician. The emergence and encroachment of artificial intelligence (AI) on the field of medicine, however, lends an inconvenient truth to that witticism. Over the span of a professional life, a pathologist may review 100,000 specimens, a radiologist more; AI can perform this undertaking in days rather than decades.

Visualise your last trip to an NHS hospital. The experience was either one of romanticism or repudiation: the hustle and bustle in the corridors, or the agonising waiting time in A&E; the empathic human touch, or the dissatisfaction of a rushed consultation; a seamless referral, or delays and cancellations.

Contrary to this, our experience of hospitals in the future will be slick and uniform, the human touch all but erased and cleansed in favour of complete and utter digitalisation. Envisage an almost fully automated hospital: cleaning droids, self-portered beds, medical robotics. "Fiction of today is the fact of tomorrow" doesn't quite apply in this situation, since all of the above-mentioned AI already exists in some form or another.

But then, what comes of the antiquated, human doctor in our future world? Well, they can take consolation that their unemployment would be part of a global trend: the creation displacing the creator, mechanisation of the workforce leading to mass unemployment. This analogy of our friend the doctor speaks volumes; medicine is cherished for championing human empathy, and if doctors aren't safe, nobody is. The solution: socialism.

Open revolt against machinery seems a novel concept set in some futuristic dystopian land, though the reality can be found in history: the Luddites of Nottinghamshire, a radical faction of skilled textile workers who protected their employment through machine destruction and riots during the Industrial Revolution of the early 19th century. The now-satirised term "Luddite" may be more appropriately directed at your father's fumbled attempt at unlocking his iPhone than at a militia.

What lessons are to be learnt from the Luddites? Many. First, the much-fictionalised fight for dominance between man and machine is just that: fictionalised. The real fight is within mankind. The Luddites' fight was always against the manufacturer, not the machine; machine destruction simply acted as the receptacle of dissidence. Second, government feeling towards the Luddites is exemplified by the 12,000 British soldiers deployed against them, far exceeding the personnel deployed against Napoleon's forces in the Iberian Peninsula in the same year.

Though it provides clues, the future struggle against AI and its wielders will be tangibly different from the Luddite struggle of the early 19th century. Next, it's personal; it's about soul. Our higher cognitive faculties will be replaced: the diagnostic expertise of the doctor, the decision-making ability of the manager and (if we're lucky) political matters too.

Boston Dynamics describes itself as 'building dynamic robots and software for human simulation'. It has created robots for DARPA, the US military's research agency

Google has been using similar technology to build self-driving cars, and has been pushing for legislation to allow them on the roads

The DARPA Urban Challenge, set up by the US Department of Defense, challenges driverless cars to navigate a 60-mile course in an urban environment that simulates guerrilla warfare

Deep Blue, a computer created by IBM, won a match against world champion Garry Kasparov in 1997. The computer could evaluate 200 million positions per second, and Kasparov accused it of cheating after the match was finished

Another computer created by IBM, Watson, beat two champions of US TV series Jeopardy at their own game in 2011

Apple's virtual assistant for iPhone, Siri, uses artificial intelligence technology to anticipate users' needs and give cheeky reactions

Xbox's Kinect uses artificial intelligence to predict where players are likely to go, and track their movement more accurately

The monopolising of AI will lead to mass unemployment and mass welfare, reverberating globally. AI efficiency and efficacy will soon replace the error-prone human. AI must therefore be socialised, and the means of production, the AI itself, redistributed: in other words, brought under public ownership. Perhaps co-operative groups of experienced individuals will arise to undertake managerial functions in their previous, now automated, workplaces. Whatever the structure, such an undertaking will require the full intervention of the state, on a moral basis not realised in the Luddite struggle.

An economic system of nationalised AI machinery performing laborious as well as lively tasks shan't be feared. This economic model, one of "abundance", provides a platform for the fullest creative expression and artistic flair of mankind. Humans can pursue leisurely passions. Imagine the doctor dedicating superfluous amounts of time to the golf course, the manager pursuing artistic talents. And what of the politician? Well, that's anyone's guess.

An abundance economy is one of sustenance rather than subsistence, initiating an old form of socialism fit for a futuristic age. AI will transform the labour market by destroying it, along with the feudalistic structure inherent to it.

Thought-provoking questions do arise: what is to become of human aspiration? What exactly will it mean to be human in this world of AI?

Ironically, perhaps it will be the machine revolution that gives us the resolution to age-old problems in society.

AI technology will soon replace error-prone humans all over the world but here's why it could set us all free - The Independent

What is Artificial Intelligence (AI)?

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing (NLP), speech recognition and machine vision.

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.

Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.
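The three processes above can be sketched in a few lines of Python. This is a hypothetical, minimal illustration (not drawn from any particular product): the "rule" being learned is a single weight w in the model y = w * x, and the program repeatedly applies its rule, measures the error, and corrects itself.

```python
# A hypothetical, minimal sketch of the learn/reason/self-correct loop.
# The rule being learned is a single weight w in the model y = w * x.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, observed output) pairs

w = 0.0     # the rule the program is learning
lr = 0.05   # learning rate: how strongly each error corrects the rule

for epoch in range(200):
    for x, y in data:
        pred = w * x         # reasoning: apply the current rule to the input
        error = pred - y     # compare the prediction against reality
        w -= lr * error * x  # self-correction: nudge the rule toward the data

print(round(w, 2))  # settles near 2.0, the slope underlying the data
```

Each pass through the data tightens the rule a little; after enough corrections the weight settles near the slope that best fits the examples, which is the essence of the self-correction process described above.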

Artificial neural network and deep learning technologies are evolving quickly, primarily because AI can process large amounts of data much faster and make predictions more accurately than is humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher, AI applications that use machine learning can take that data and quickly turn it into actionable information. As of this writing, the primary disadvantage of using AI is that it is expensive to process the large amounts of data that AI programming requires.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained to complete a specific task. Industrial robots and virtual personal assistants, such as Apple's Siri, use weak AI.

Strong AI, also known as artificial general intelligence (AGI), describes programming that can replicate the cognitive abilities of the human brain. When presented with an unfamiliar task, a strong AI system can use fuzzy logic to apply knowledge from one domain to another and find a solution autonomously. In theory, a strong AI program should be able to pass both a Turing test and the Chinese room test.

Some industry experts believe the term artificial intelligence is too closely linked to popular culture, and this has caused the general public to have improbable expectations about how AI will change the workplace and life in general. Some researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that most implementations of AI will be weak and simply improve products and services.

The concept of the technological singularity -- a future ruled by an artificial superintelligence that far surpasses the human brain's ability to understand it or how it is shaping our reality -- remains within the realm of science fiction.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.

Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable in deep learning and generative adversarial network (GAN) applications.
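As a concrete illustration of the bias-monitoring point above, here is a hypothetical sketch (all records and field names invented) of one of the simplest checks a team can run before training: auditing how a sensitive attribute is represented in the training data, since a model can only be as balanced as the examples it is given.

```python
# Hypothetical pre-training bias audit on invented loan records: count how
# each group is represented and compare outcomes across groups in the data.
from collections import Counter

training_data = [
    {"income": 40, "group": "A", "approved": 1},
    {"income": 35, "group": "A", "approved": 1},
    {"income": 38, "group": "A", "approved": 0},
    {"income": 42, "group": "B", "approved": 0},
]

group_counts = Counter(row["group"] for row in training_data)
approval_rate = {
    group: sum(r["approved"] for r in training_data if r["group"] == group) / count
    for group, count in group_counts.items()
}

print(dict(group_counts))  # group B supplies only 1 of 4 training records
print(approval_rate)       # and every one of its records is a refusal
```

A model trained on such data would have almost no evidence about group B, and the evidence it does have is uniformly negative; that is exactly the kind of skew a monitoring step should surface before training begins.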

Explainability is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as black box AI.
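One widely used probe for such black-box systems is permutation importance: shuffle one input at a time and measure how much the model's error grows; the inputs whose shuffling hurts most are the ones the model leaned on. The sketch below is a hypothetical, self-contained illustration in which black_box stands in for an opaque model.

```python
# Hypothetical permutation-importance probe of a black-box predictor.
import random

random.seed(0)

def black_box(row):
    # Stand-in for an opaque model whose internals we cannot inspect.
    # Here input 0 dominates the output; input 1 barely matters.
    return 3.0 * row[0] + 0.1 * row[1]

X = [[float(i), float(i % 5)] for i in range(20)]  # two input columns
y = [black_box(r) for r in X]                      # targets the model fits exactly

def mse(rows):
    return sum((black_box(r) - t) ** 2 for r, t in zip(rows, y)) / len(rows)

baseline = mse(X)
importance = {}
for col in (0, 1):
    shuffled = [r[col] for r in X]
    random.shuffle(shuffled)              # break the column's link to the target
    probe = [r[:] for r in X]
    for r, v in zip(probe, shuffled):
        r[col] = v
    importance[col] = mse(probe) - baseline  # error growth = the column's influence

print(importance)  # shuffling column 0 degrades accuracy far more than column 1
```

Probes like this do not open the box, but they at least rank which variables drove a decision, which is a starting point for the kind of explanation regulators ask for.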

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings or providing access to artificial intelligence as a service (AIaaS) platforms. AIaaS allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment.

Popular AI cloud offerings include the following:

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, explained in a 2016 article that AI can be categorized into four types, beginning with the task-specific intelligent systems in wide use today and progressing to sentient systems, which do not yet exist. The categories are as follows: reactive machines, which respond to the present situation with no memory of the past; limited memory systems, which can use past observations to inform decisions; theory of mind systems, which would understand the beliefs and intentions of others; and self-aware systems, which would possess consciousness of their own.

The terms AI and cognitive computing are sometimes used interchangeably, but, generally speaking, the label AI is used in reference to machines that replace human intelligence by simulating how we sense, learn, process and react to information in the environment.

The label cognitive computing is used in reference to products and services that mimic and augment human thought processes.

AI is incorporated into a variety of different types of technology. Here are six examples:

The concept of inanimate objects endowed with intelligence has been around since ancient times. The Greek god Hephaestus was depicted in myths as forging robot-like servants out of gold. Engineers in ancient Egypt built statues of gods animated by priests. Throughout the centuries, thinkers from Aristotle to the 13th century Spanish theologian Ramon Llull to René Descartes and Thomas Bayes used the tools and logic of their times to describe human thought processes as symbols, laying the foundation for AI concepts such as general knowledge representation.

The late 19th and first half of the 20th centuries brought forth the foundational work that would give rise to the modern computer. In 1836, Cambridge University mathematician Charles Babbage and Augusta Ada Byron, Countess of Lovelace, invented the first design for a programmable machine. In the 1940s, Princeton mathematician John von Neumann conceived the architecture for the stored-program computer -- the idea that a computer's program and the data it processes can be kept in the computer's memory. And Warren McCulloch and Walter Pitts laid the foundation for neural networks.

With the advent of modern computers, scientists could test their ideas about machine intelligence. One method for determining whether a computer has intelligence was devised by the British mathematician and World War II code-breaker Alan Turing in 1950. The Turing Test focused on a computer's ability to fool interrogators into believing its responses to their questions were made by a human being.

The modern field of artificial intelligence is widely cited as starting in 1956 during a summer conference at Dartmouth College. Sponsored by the Defense Advanced Research Projects Agency (DARPA), the conference was attended by 10 luminaries in the field, including AI pioneers Marvin Minsky, Oliver Selfridge and John McCarthy, who is credited with coining the term artificial intelligence. Also in attendance were Allen Newell, a computer scientist, and Herbert A. Simon, an economist, political scientist and cognitive psychologist, who presented their groundbreaking Logic Theorist, a computer program capable of proving certain mathematical theorems and referred to as the first AI program.

In the wake of the Dartmouth College conference, leaders in the fledgling field of AI predicted that a man-made intelligence equivalent to the human brain was around the corner, attracting major government and industry support. Indeed, nearly 20 years of well-funded basic research generated significant advances in AI: For example, in the late 1950s, Newell and Simon published the General Problem Solver (GPS) algorithm, which fell short of solving complex problems but laid the foundations for developing more sophisticated cognitive architectures; McCarthy developed Lisp, a language for AI programming that is still used today. In the mid-1960s MIT Professor Joseph Weizenbaum developed ELIZA, an early natural language processing program that laid the foundation for today's chatbots.

But the achievement of artificial general intelligence proved elusive, not imminent, hampered by limitations in computer processing and memory and by the complexity of the problem. Government and corporations backed away from their support of AI research, leading to a fallow period lasting from 1974 to 1980 and known as the first "AI Winter." In the 1980s, research on deep learning techniques and industry's adoption of Edward Feigenbaum's expert systems sparked a new wave of AI enthusiasm, only to be followed by another collapse of government funding and industry support. The second AI winter lasted until the mid-1990s.

Increases in computational power and an explosion of data sparked an AI renaissance in the late 1990s that has continued to present times. The latest focus on AI has given rise to breakthroughs in natural language processing, computer vision, robotics, machine learning, deep learning and more. Moreover, AI is becoming ever more tangible, powering cars, diagnosing disease and cementing its role in popular culture. In 1997, IBM's Deep Blue defeated Russian chess grandmaster Garry Kasparov, becoming the first computer program to beat a world chess champion. Fourteen years later, IBM's Watson captivated the public when it defeated two former champions on the game show Jeopardy!. More recently, the historic defeat of 18-time World Go champion Lee Sedol by Google DeepMind's AlphaGo stunned the Go community and marked a major milestone in the development of intelligent machines.

Artificial intelligence has made its way into a wide variety of markets. Here are eight examples.

AI and machine learning are at the top of the buzzword list security vendors use today to differentiate their offerings. Those terms also represent truly viable technologies. Artificial intelligence and machine learning in cybersecurity products are adding real value for security teams looking for ways to identify attacks, malware and other threats.

Organizations use machine learning in security information and event management (SIEM) software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations.

As a result, AI security technology both dramatically lowers the number of false positives and gives organizations more time to counteract real threats before damage is done. The maturing technology is playing a big role in helping organizations fight off cyberattacks.
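The anomaly-detection idea described above can be reduced to a very small statistical core. The sketch below is a toy illustration, not the method of any real SIEM product: it flags an event count whose z-score against a historical baseline exceeds a threshold (all data and thresholds here are made up).

```python
# Toy sketch of the anomaly detection behind ML-assisted security tools:
# flag event counts that deviate sharply from the historical baseline.
from statistics import mean, stdev

# Hourly counts of failed logins observed during a normal week (hypothetical).
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4, 6, 3]

mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag a new observation whose z-score exceeds the threshold."""
    return abs(count - mu) / sigma > threshold

print(is_anomalous(5))    # typical hour: not flagged
print(is_anomalous(48))   # burst of failed logins: flagged for review
```

Production systems learn far richer baselines (per user, per host, per time of day), but the principle of scoring new events against learned normal behavior is the same.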

Despite potential risks, there are currently few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI indirectly. For example, as previously mentioned, United States Fair Lending regulations require financial institutions to explain credit decisions to potential customers. This limits the extent to which lenders can use deep learning algorithms, which by their nature are opaque and lack explainability.

The European Union's General Data Protection Regulation (GDPR) puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In October 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered.

Crafting laws to regulate AI will not be easy, in part because AI comprises a variety of technologies that companies use for different ends, and partly because regulations can come at the cost of AI progress and development. The rapid evolution of AI technologies is another obstacle to forming meaningful regulation of AI. Technology breakthroughs and novel applications can make existing laws instantly obsolete. For example, existing laws regulating the privacy of conversations and recorded conversations do not cover the challenge posed by voice assistants like Amazon's Alexa and Apple's Siri that gather but do not distribute conversation -- except to the companies' technology teams, which use it to improve machine learning algorithms. And, of course, the laws that governments do manage to craft to regulate AI don't stop criminals from using the technology with malicious intent.

See original here:

What is Artificial Intelligence (AI)?

Calling all AI experts – Technical.ly Philly

Next month, Comcast will host PHLAI, a technical conference for engineers and professionals interested in and working with machine learning and artificial intelligence.

We'll bring together local practitioners in A.I. and machine learning to discuss past experiences and common technological goals aimed at making people's lives better. Attendees will learn about new ideas and best practices from experts in the field and hear about the latest developments in machine learning and artificial intelligence.

As an example, I was fortunate to be part of the team here at Comcast that used A.I. to launch our Xfinity X1 voice remote. That device has changed the way people watch television, and to date we've deployed more than 14 million of them in homes all across our service area from San Francisco to Philadelphia. And it keeps getting smarter, faster and more accurate every day, all thanks to machine learning.

The PHLAI conference will take place on Tuesday, Aug. 15 at Convene Cira Centre in Philadelphia.

Featured speakers will include:

And as part of the event, we also want to hear from those who are solving their own problems with A.I.

Practitioners can share their stories and submit proposals until July 14.

Attendees can register here (it's free). We hope to see you there!

Jeanine Heck serves as Executive Director in the Technology and Product organization of Comcast Cable. In this role, Heck brings artificial intelligence into XFINITY products. She was the founding product manager for the X1 voice remote, has led the launch of a TV search engine, and managed the company's first TV recommendations engine.


Microsoft’s new AI sucks at coding as much as the typical Stack Overflow user – TNW

Microsoft has made some impressive leaps forward in the world of artificial intelligence (AI), but this might be its biggest yet. Microsoft Research, in conjunction with Cambridge University, has developed an AI that's able to solve programming problems by reusing lines of code cribbed from other programs.

The dream of one day creating an artificial intelligence with the ability to write computer programs has long been a goal of computer scientists. And now, we're one step closer to its actualization.

The AI, which is called DeepCoder, takes an input and an expected output and then fills in the gaps, using pre-created code that it believes will create the desired output. This approach is called program synthesis.

In short, this is the digital equivalent of searching for your problem on Stack Overflow, and then copying and pasting some code you think might work.

But obviously, this is a lot more sophisticated than that. DeepCoder, as pointed out by the New Scientist, is vastly more efficient than a human. It's able to scour and combine code with the speed of a computer, and it uses machine learning to sort the fragments by their probable usefulness.

At the moment, DeepCoder is able to solve problems that take around five lines of code. It's certainly early days, but it's still undeniably promising. Full details about the system, and its strengths and limitations, can be found in the research paper Microsoft published.
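The input/output-examples setting described above can be sketched in a few lines. The toy below is a drastically simplified stand-in, not DeepCoder itself: it brute-force enumerates compositions of known primitives until one maps the example input to the example output, whereas DeepCoder additionally uses a learned neural model to decide which primitives to try first. The primitive names are invented for illustration.

```python
# Toy enumerative program synthesis: given one input/output example,
# search for a composition of known primitives consistent with it.
from itertools import product

# A tiny DSL of list-transforming primitives (illustrative, not DeepCoder's).
PRIMITIVES = {
    "sort":     sorted,
    "reverse":  lambda xs: list(reversed(xs)),
    "double":   lambda xs: [2 * x for x in xs],
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
}

def synthesize(example_in, example_out, max_len=3):
    """Return the first primitive sequence consistent with the example."""
    for length in range(1, max_len + 1):
        for names in product(PRIMITIVES, repeat=length):
            value = example_in
            for name in names:
                value = PRIMITIVES[name](value)
            if value == example_out:
                return names
    return None

program = synthesize([3, -1, 2], [4, 6])
print(program)  # first primitive sequence found that maps the input to the output
```

Even this brute-force version shows why learned guidance matters: the search space grows exponentially with program length, so ranking likely primitives first is what makes longer programs reachable.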

And at least DeepCoder wont ask you to plz send teh codes.



IBM’s AI Will Make Your Hospital Stay More Comfortable – Futurism

In Brief: IBM's Watson, probably the most famous AI system in the world today, is making its way into hospitals to assist with menial tasks, thereby freeing up medical personnel. Watson has already made a big impact on the medical industry, as well as many others, and the AI shows no signs of slowing down.

Dr. Watson, Coming Soon

IBM's Watson has done everything from beating human champions at the game of Go to diagnosing undetected leukemia in a patient, saving her life. Now, the artificial intelligence (AI) system is poised to make life in a hospital a lot easier for patients and staff alike.

Right now, some medical staff spend almost 10 percent of their working hours answering basic patient questions about physician credentials, lunch, and visiting hours, Bret Greenstein, the vice president of Watson's Internet of Things (IoT) platform, tells CNET.

These staff members also have to tend to very basic needs that don't require medical expertise, such as changing the temperature in rooms or pulling the blinds. If assisted by some kind of AI-powered device, these workers could spend their time more effectively and focus on patient care.

That's where Watson comes in. Philadelphia's Thomas Jefferson University Hospitals have teamed up with IBM and audio company Harman to develop smart speakers for a hospital setting. Once activated by the voice command Watson, these speakers can respond to a dozen commands, including requests to adjust the blinds, thermostat, and lights or to play calming sounds.

Watson is no stranger to the healthcare industry. In addition to providing a correct diagnosis for the woman mentioned above, Watson was able to recommend treatment plans at least as well as human oncologists in 99 percent of the cases it analyzed, and it even provided options missed by doctors in 30 percent of those cases.

Watson will soon be working in many dermatologists' offices, too, and while its integration into the medical field hasn't been free of problems, it is still the AI with the broadest access to patient data, the key to better diagnoses and greater predictive power.

Watson has had a notable impact on various other industries, as well.

OnStar Go uses Watson, and it will be making driving simpler in more than 2 million 4G LTE-connected GM vehicles by the end of this year. Watson is also no stranger to retail, having been incorporated into operations at Macy's, Lowe's, Best Buy, and Nestle Cafes in Japan, and the AI is even helping to bring a real-life Robocop to the streets of Dubai.

Watson is branching out into creative work, too, which was previously assumed to be off-limits to AIs. The system successfully edited an entire magazine on its own and has also created a movie trailer.

What the AI will do next is anyone's guess, but it's safe to say that Watson probably has a more exciting and ambitious five-year plan than most humans.


The Role of Artificial Intelligence (AI) in the Global Agriculture Market 2021 – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Global Artificial Intelligence (AI) Market in Agriculture Industry Market 2021-2025" report has been added to ResearchAndMarkets.com's offering.

The artificial intelligence (AI) market in the agriculture industry is poised to grow by $458.68 million during 2021-2025, progressing at a CAGR of over 23% during the forecast period.

The market is driven by maximizing profits in farm operations, higher adoption of robots in agriculture, and the development of deep-learning technology. This study also identifies the advances in AI technology as another prime reason driving industry growth during the next few years.
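As context for the headline numbers, a compound annual growth rate (CAGR) compounds multiplicatively year over year. The sketch below applies the report's 23% rate to a made-up base-year figure, since the excerpt does not state the actual starting market size:

```python
# CAGR illustration. The 23% rate comes from the report; the $100M
# starting size is a placeholder, as the excerpt gives no base figure.
def grow(start, cagr, years):
    """Market size after compounding at `cagr` for `years` years."""
    return start * (1 + cagr) ** years

start = 100.0  # hypothetical base-year size, $ millions
print(round(grow(start, 0.23, 4), 1))  # size over the 2021-2025 forecast span
```

At 23%, a market more than doubles over four years of compounding, which is why the incremental growth figure ($458.68 million) can exceed many base-year estimates.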

The analysis of the artificial intelligence (AI) market in the agriculture industry includes the application segment and geographic landscape.

The report on the artificial intelligence (AI) market in the agriculture industry covers the following areas:

The robust vendor analysis is designed to help clients improve their market position. In line with this, the report provides a detailed analysis of several leading vendors in the AI market in the agriculture industry, including Ag Leader Technology, aWhere Inc., Corteva Inc., Deere & Co., DTN LLC, GAMAYA, International Business Machines Corp., Microsoft Corp., Raven Industries Inc., and Trimble Inc. The report also includes information on upcoming trends and challenges that will influence market growth, to help companies strategize and leverage all forthcoming growth opportunities.

Key Topics Covered:

Executive Summary

Market Landscape

Market Sizing

Five Forces Analysis

Market Segmentation by Application

Customer Landscape

Geographic Landscape

Vendor Landscape

Vendor Analysis

For more information about this report visit https://www.researchandmarkets.com/r/58dom9

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.
