UK public sector failing to be open about its use of AI, review finds – TechCrunch

A report into the use of artificial intelligence by the U.K.'s public sector has warned that the government is failing to be open about automated decision-making technologies which have the potential to significantly impact citizens' lives.

Ministers have been especially bullish on injecting new technologies into the delivery of taxpayer-funded healthcare with health minister Matt Hancock setting out a tech-fueled vision of preventative, predictive and personalised care in 2018, calling for a root and branch digital transformation of the National Health Service (NHS) to support piping patient data to a new generation of healthtech apps and services.

He has also personally championed a chatbot startup, Babylon Health, that's using AI for healthcare triage and which is now selling a service into the NHS.

Policing is another area where AI is being accelerated into U.K. public service delivery, with a number of police forces trialing facial recognition technology and London's Met Police switching over to a live deployment of the AI technology just last month.

However, the rush by cash-strapped public services to tap AI efficiencies risks glossing over a range of ethical concerns about the design and implementation of such automated systems: fears about embedding bias and discrimination into service delivery and scaling harmful outcomes; questions of consent around access to the data sets being used to build AI models; and human agency over automated outcomes, to name a few. All of these concerns require transparency into AI systems if there's to be accountability over automated outcomes.

The role of commercial companies in providing AI services to the public sector also raises additional ethical and legal questions.

Only last week, a court in the Netherlands highlighted the risks for governments of rushing to bake AI into legislation after it ruled an algorithmic risk-scoring system implemented by the Dutch government to assess the likelihood that social security claimants will commit benefits or tax fraud breached their human rights.

The court objected to a lack of transparency about how the system functions, as well as the associated lack of controllability, and ordered an immediate halt to its use.

The U.K. parliamentary committee that reviews standards in public life has today sounded a similar warning, publishing a series of recommendations for public-sector use of AI and cautioning that the technology challenges three key principles of service delivery: openness, accountability and objectivity.

"Under the principle of openness, a current lack of information about government use of AI risks undermining transparency," it writes in an executive summary.

Under the principle of accountability, there are three risks: AI may obscure the chain of organisational accountability; undermine the attribution of responsibility for key decisions made by public officials; and inhibit public officials from providing meaningful explanations for decisions reached by AI. Under the principle of objectivity, the prevalence of data bias risks embedding and amplifying discrimination in everyday public sector practice.

"This review found that the government is failing on openness," it goes on, asserting that: "Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government."

In 2018, the UN's special rapporteur on extreme poverty and human rights raised concerns about the U.K.'s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale, warning then that the impact of a digital welfare state on vulnerable people would be immense, and calling for stronger laws and enforcement of a rights-based legal framework to ensure the use of technologies like AI for public service provision does not end up harming people.

Per the committee's assessment, it is too early to judge if public sector bodies are successfully upholding accountability.

Parliamentarians also suggest that fears over black box AI may be overstated and rather dub explainable AI a realistic goal for the public sector.

On objectivity, they write that data bias is an issue of serious concern, and further work is needed on measuring and mitigating the impact of bias.

The use of AI in the U.K. public sector remains limited at this stage, according to the committees review, with healthcare and policing currently having the most developed AI programmes where the tech is being used to identify eye disease and predict reoffending rates, for example.

Most examples the Committee saw of AI in the public sector were still under development or at a proof-of-concept stage, the committee writes, further noting that the Judiciary, the Department for Transport and the Home Office are examining how AI can increase efficiency in service delivery.

It also heard evidence that local government is working on incorporating AI systems in areas such as education, welfare and social care, noting the example of Hampshire County Council trialing the use of Amazon Echo smart speakers in the homes of adults receiving social care as a tool to bridge the gap between visits from professional carers, and points to a Guardian article which reported that one-third of U.K. councils use algorithmic systems to make welfare decisions.

But the committee suggests there are still significant obstacles to what they describe as widespread and successful adoption of AI systems by the U.K. public sector.

Public policy experts frequently told this review that access to the right quantity of clean, good-quality data is limited, and that trial systems are not yet ready to be put into operation, it writes. It is our impression that many public bodies are still focusing on early-stage digitalisation of services, rather than more ambitious AI projects.

The report also suggests that the lack of a clear standards framework means many organisations may not feel confident in deploying AI yet.

While standards and regulation are often seen as barriers to innovation, the Committee believes that implementing clear ethical standards around AI may accelerate rather than delay adoption, by building trust in new technologies among public officials and service users, it suggests.

Among 15 recommendations set out in the report is a call for a clear legal basis to be articulated for the use of AI by the public sector. All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before they are deployed in public service delivery, the committee writes.

Another recommendation is for clarity over which ethical principles and guidance apply to public sector use of AI, with the committee noting there are three sets of principles that could apply to the public sector, which is generating confusion.

The public needs to understand the high level ethical principles that govern the use of AI in the public sector. The government should identify, endorse and promote these principles and outline the purpose, scope of application and respective standing of each of the three sets currently in use, it recommends.

It also wants the Equality and Human Rights Commission to develop guidance on data bias and anti-discrimination to ensure public sector bodies' use of AI complies with the U.K. Equality Act 2010.

The committee is not recommending a new regulator should be created to oversee AI but does call on existing oversight bodies to act swiftly to keep up with the pace of change being driven by automation.

It also advocates for a regulatory assurance body to identify gaps in the regulatory landscape and provide advice to individual regulators and government on the issues associated with AI, supporting the government's intention for the Centre for Data Ethics and Innovation (CDEI), which was announced in 2017, to perform this role. (A recent report by the CDEI recommended tighter controls on how platform giants can use ad targeting and content personalisation.)

Another recommendation is around procurement, with the committee urging the government to use its purchasing power to set requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards.

This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements, it suggests.

Responding to the report in a statement, shadow digital minister Chi Onwurah MP accused the government of driving blind, with no control over who is in the AI driving seat.

"This serious report sadly confirms what we know to be the case: that the Conservative Government is failing on openness and transparency when it comes to the use of AI in the public sector," she said. "The Government is driving blind, with no control over who is in the AI driving seat. The Government urgently needs to get a grip before the potential for unintended consequences gets out of control."

"Last year, I argued in parliament that Government should not accept further AI algorithms in decision making processes without introducing further regulation. I will continue to push the Government to go further in sharing information on how AI is currently being used at all levels of Government. As this report shows, there is an urgent need for practical guidance and enforceable regulation that works. It's time for action."


Microsoft teams up with leading universities to tackle coronavirus pandemic using AI – TechRepublic

The newly-formed C3.ai Digital Transformation Institute has an open call for proposals to mitigate the COVID-19 epidemic using artificial intelligence and machine learning.

With the coronavirus impacting most of the world, the medical community is hard at work trying to come up with some type of magic bullet that will stop the pandemic from propagating. Can artificial intelligence (AI) and machine learning (ML) help nurture a solution? That's what Microsoft and a host of top universities are hoping.

In a blog post published last week, Microsoft detailed the creation of the C3.ai Digital Transformation Institute (C3.ai DTI), a consortium of scientists, researchers, innovators, and executives from the academic and corporate worlds whose mission it is to push AI to achieve social and economic benefits. As such, C3.ai DTI will sponsor and fund scientists and researchers to spur the digital transformation of business, government, and society.

Created by Microsoft, AI software provider C3.ai, and several leading universities, C3.ai DTI already has the first task on its agenda--to harness the power of AI to combat the coronavirus.


Known as "AI Techniques to Mitigate Pandemic," C3.ai DTI's first call for research proposals is asking scholars, developers, and researchers to "embrace the challenge of abating COVID-19 and advance the knowledge, science, and technologies for mitigating future pandemics using AI." Researchers are free to develop their own topics in response to this subject, but the consortium outlined 10 different areas open for consideration:

"We are collecting a massive amount of data about MERS, SARS, and now COVID-19," Condoleezza Rice, former US Secretary of State, said in the blog post. "We have a unique opportunity before us to apply the new sciences of AI and digital transformation to learn from these data how we can better manage these phenomena and avert the worst outcomes for humanity."

This first call is currently open with a deadline of May 1, 2020. Interested participants can check the C3.ai DTI website to learn about the process and find out how to submit their proposals. Selected proposals will be announced by June 1, 2020.

The group will fund as much as $5.8 million in awards for this first call, with cash awards ranging from $100,000 to $500,000 each. Recipients will also receive cloud computing, supercomputing, data access, and AI software resources and technical support provided by Microsoft and C3.ai. Specifically, those with successful proposals will get unlimited use of the C3 AI Suite, access to the Microsoft Azure cloud platform, and access to the Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign (UIUC).

To fund the institute, C3.ai will provide $57,250,000 over the first five years of operation. C3.ai and Microsoft will contribute an additional $310 million, which includes use of the C3 AI Suite and Microsoft Azure. The universities involved in the consortium include the UIUC; the University of California, Berkeley; Princeton University; the University of Chicago; the Massachusetts Institute of Technology; and Carnegie Mellon University.

Beyond funding successful research proposals, Microsoft said that C3.ai DTI will generate new ideas for the use of AI and ML through ongoing research, visiting professors and research scholars, and faculty and scholars in residence, many of whom will come from the member universities.

More specifically, the group will focus its research on AI, ML, Internet of Things, Big Data Analytics, human factors, organizational behavior, ethics, and public policy. This research will examine new business models, develop ways for creating change within organizations, analyze methods to protect privacy, and ramp up the conversations around the ethics and public policy of AI.

"In these difficult times, we need--now more than ever--to join our forces with scholars, innovators, and industry experts to propose solutions to complex problems," Gwenalle Avice-Huet, Executive Vice President of ENGIE, said. "I am convinced that digital, data science, and AI are a key answer. The C3.ai Digital Transformation Institute is a perfect example of what we can do together to make the world better."



Machine Learning Is Helping Martech Lead the AI Revolution – AdAge.com (blog)


Artificial intelligence gets a lot of press (thanks, Elon), but the fact is, AI couldn't be the rockstar it is without the behind-the-scenes help of machine learning (ML). While the two are closely related, there's a critical difference between AI and ML: AI makes decisions, while ML makes predictions. Think of it this way: it's AI that steers the Mars rover around the rock in its path, but it's ML that recognizes the rock to begin with.

In marketing and advertising, the best example of AI is the programmatic ecosystem. This includes the decisions made around whether or not to bid on a given impression, how much to bid, what creative to serve, and various campaign optimization techniques.

But in order for the AI to make good decisions, it needs valid ML predictions as input. The fastest, most efficient programmatic bidder on the planet would be useless if it were acting on poor predictions. Therefore, it behooves the teams in the marketing and advertising space that are building AI to focus first on delivering the strongest ML. This mindset is exactly what the industry has begun to adopt.
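To make that split concrete, here is a minimal sketch, not drawn from any real bidding system: the model, feature names and thresholds are invented. The ML part produces a click prediction, and the AI part turns that prediction into a bid decision.

```python
# Toy illustration of the ML-prediction / AI-decision split described above.
# The model, feature names and thresholds are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pretend training data: [hour_of_day, is_mobile, past_clicks] -> clicked or not
X = np.array([[9, 1, 3], [22, 0, 0], [14, 1, 7], [3, 0, 1]])
y = np.array([1, 0, 1, 0])

ctr_model = LogisticRegression().fit(X, y)   # the ML part: learns to predict clicks

def decide_bid(impression_features, value_per_click=2.0, floor=0.10):
    """The AI part: turn a predicted CTR into a bid/no-bid decision."""
    p_click = ctr_model.predict_proba([impression_features])[0, 1]  # ML prediction
    expected_value = p_click * value_per_click
    return expected_value if expected_value > floor else 0.0        # decision

print(decide_bid([15, 1, 5]))  # bids only when the predicted value clears the floor
```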

In the martech space we hear a lot about graphs, including social and identity graphs, as well as Google's PageRank, which ranks the relevance of websites to populate search results. These graphs typically refer to an interconnected web of consumers, devices, cookies, IDs, locations and websites. The most prominent application of these graphs is personalization -- from targeted ads and product recommendations to news articles and other digital customer experiences. That personalized content is a decision (AI), made as a result of the predicted graph associations (ML).

Determining identity

Brands and enterprises want to know if two given devices are owned by one person; if two individuals are part of the same household; if the logged-in user of an app is the same person who visited the website without logging in; and if the consumer who was recently at this location uses this other device. By determining identity, information from one environment can be leveraged in another.

Unfortunately, because we're dealing in a world of predictions, these questions don't yield binary answers. The world is more complicated than just black or white, yes or no. We don't see a movie because it's objectively good or bad. We see it because we read a good review, we like the actor, it won an award, or maybe the showtime is just convenient. There are a ton of influences and attributes behind every decision.

Understandably, brands and enterprises don't want a statistic for an answer, and so our job as technologists is to get as close as possible to a "Yes" or a "No" with certainty. One way to do that is by applying ML to these graphs and looking at how devices interact with each other.

Most of the firms using graphs (brands, agencies, and enterprises) are thinking about device pairs and how they interact with each other. They constantly ask: "Do Smartphone 1 and Tablet 2 belong to the same consumer?" In order to answer that, however, you have to take into account Desktop 3 and Smart TV 4. In other words, instead of determining just the probability of A or the probability of B, it's determining the probability of A given the probability of B.

Increasing complexity

But it goes beyond A and B. It extends to: "What is the probability of A, given the probability of B and the probability of C through Z?" When any of those individual inputs change, the rest of them change as well. And in our interconnected world, it's not even A-to-Z scenarios. It's billions of scenarios. You can see how arriving at a "yes or no" answer is not a simple task.
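As a toy numerical illustration of that conditioning (all counts are invented, and a real identity graph would hold billions of such events), observing evidence about one device shifts the estimate for another:

```python
# Toy illustration of conditioning identity estimates on other observations.
# All counts are invented; a real graph would have billions of such events.
observations = [
    # (smartphone_1 seen on home wifi, tablet_2 seen on home wifi)
    (True, True), (True, True), (True, False), (False, True),
    (True, True), (False, False), (True, True), (False, False),
]

n = len(observations)
p_a = sum(a for a, _ in observations) / n                      # P(A)
p_a_given_b = (
    sum(1 for a, b in observations if a and b)
    / sum(1 for _, b in observations if b)                     # P(A | B)
)

print(f"P(A) = {p_a:.2f}, P(A|B) = {p_a_given_b:.2f}")
# Observing B (the tablet on the same network) raises the estimate that the two
# devices belong to the same household -- the "A given B" idea described above.
```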

In fact, it gets even more complex. Because each device has attributes that interact with each other -- locations, IDs, cookies, etc. -- the whole data set acts as a fractal, where every attribute has attributes that have attributes. At some point, all this gets too complex to process for even the most advanced supercomputers. So while we're still left with an approximation at the end of all of this, by applying machine learning to the graph itself, we can get much closer to the truth, in a purely technical sense, or a "yes or no," in a practical sense.

Advanced ML is just now beginning to be applied outside of academia, and our world of digital advertising and marketing is among the first professions to implement it. From more relevant ads, to personalized content, to e-commerce recommendations, to predicting customer churn, to new applications in things like fraud detection and risk protection, marketers are finally seeing the concrete benefits of harnessing ML.

That said, new methods inevitably bring new challenges. Keeping pace with the rate of change, and bridging data science to business strategy, will be paramount for marketers in the years ahead, as machine learning becomes a catalyst for innovation in our industry and beyond.


Elon Musk dismisses Mark Zuckerberg’s understanding of AI threat as ‘limited’ – The Verge

The war between AI and humanity may be a long way off, but the war between tech billionaire and tech billionaire is only just beginning. Today on Twitter, Elon Musk dismissed Mark Zuckerberg's understanding of the threat posed by artificial intelligence as "limited," after the Facebook founder disparaged comments Musk made on the subject earlier this month.

The beef (such as it is) goes back to a speech the SpaceX and Tesla CEO made to an assembly of US governors. Musk warned that there needed to be regulation on AI development before it's too late. "I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal," he said, adding that the technology represents "a fundamental risk to the existence of civilization."

Are both Musk and Zuckerberg missing the point?

It's a familiar refrain from Musk, and one that doesn't hold much water within the AI community. Pedro Domingos, a machine learning researcher and author of The Master Algorithm, summed up the feelings of many with a one-word response on Twitter: "Sigh." Later, Domingos expanded on this in an interview with Wired, saying: "Many of us have tried to educate [Musk] and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent."

Fast forward to this Sunday, when Zuckerberg was running one of his totally-normal-and-not-running-for-political-office Facebook Live Q&As. At around 50 minutes in, a viewer asks Zuckerberg: "I watched a recent interview with Elon Musk and his largest fear for the future was AI. What are your thoughts on AI and how it could affect the world?"

Zuck responds: "I have pretty strong opinions on this ... I think you can build things and the world gets better, and with AI especially, I'm really optimistic. I think people who are naysayers and try to drum up these doomsday scenarios ... I just, I don't understand it. It's really negative and in some ways I think it is pretty irresponsible."

He goes on to predict that in the next five to ten years AI will deliver "so many improvements in the quality of our lives," and cites healthcare and self-driving cars as two major examples. "People who are arguing for slowing down the process of building AI, I find that really questionable," Zuckerberg concludes. "If you're arguing against AI you're arguing against safer cars that aren't going to have accidents."

Someone then posted a write-up of Zuckerberg's Q&A on Twitter and tagged Musk, who jumped into the conversation, dismissing Zuckerberg's understanding of the subject as "limited." Musk also linked approvingly to an article on the threat of superintelligent AI by Tim Urban. (The article covers the same ground as Nick Bostrom's influential book Superintelligence: Paths, Dangers, Strategies. Both book and article hinge on the idea that exponential growth in computing power means AI could overtake human intelligence overnight.)

But as fun as it is to watch two extremely rich people, who probably wield more influence over your life than most politicians, trade barbs online, it's hard to shake the feeling that both Musk and Zuckerberg are missing the point.

While AI researchers dismiss Musk's comments on AI as alarmist, that's only in reference to the imagined threat of some Skynet-style doomsday machine. The same experts frequently point out that artificial intelligence poses many genuine threats that already affect us today. These include how the technology can amplify racist and sexist prejudices; how it could upend society by putting millions out of jobs; how it is set to increase inequality; and how it will be used as a tool of control by authoritarian governments.

These are real dangers that need real solutions, not just sci-fi speculation.

And while Zuckerberg's comments on the potential benefits of AI in healthcare and road safety are heartening, focusing only on the good that artificial intelligence can deliver is in its own way as limited as focusing only on the threat. Really, we need to combine both Musk's and Zuckerberg's approaches, and probably listen less to tech billionaires in the process.


I Asked AI to Write This Post for Me. Here Are the Results. – The Advocate



In June 2020 an Artificial Intelligence system called GPT-3 went live. This AI model is focused on Natural Language Processing and was trained by reading trillions of words and sentences online. The net result is that it can generate impressive text that humans can barely tell was created by a computer.

A growing number of developers are being given access to GPT-3 to create real-world applications. In the coming months, you are going to start to see a plethora of AI applications that create content such as blogs, articles, reports, emails, advertising copy, and sales scripts. It's likely that huge amounts of the text written in 2021 and beyond will be written by computers and edited by humans.


I was given access to a beta version of an application called Copy.ai which is specifically designed to create copy that could be suitable for marketing purposes. I asked the system to write me a short blog post titled "Can an AI write business blogs?" In under 3 seconds, it produced the following blog post:

Here are 10 things you need to know about using AI to write your business blogs:

1. AI is fast. Very fast.

2. AI produces unique content. Every time.

3. AI allows you to scale your content production. Using AI to produce your content means you don't have to spend your time writing.

4. AI is customized for your industry. AI can track the key topics in your industry and make sure that the content it produces is relevant and on-trend.

5. AI is a creative writing tool.

6. AI is not a replacement for copywriters.

7. AI will give you consistent quality content.

8. AI will allow you to produce better quality content than before.

9. AI can help you to produce content for your website, blog, social media and emails.

10. AI is not a one-size-fits-all solution.

Our AI software is completely customizable to your needs and is designed to produce content that is relevant to your business. If you're looking for a business blogging tool that can help you to produce better content at a faster rate then please contact us today.

As you can see, this isn't a perfect or complete blog post. I'm not even sure if all of the 10 points are true. On the whole, though, it's not a bad starting point for an interesting article and it would be a lot easier for a busy entrepreneur to edit or expand this blog post rather than starting from scratch. In practical terms, you could run the AI system a dozen times in under a minute and pick a reasonable starting point.

For now, I believe AI applications have the power to generate directionally correct content that an experienced professional could edit and improve. Within a few years, the next version of this system will be ready (GPT-4) and it will likely be ten times more powerful. It's possible that this more advanced system will write most things better than most humans by 2025.


I can imagine a group of farmers in 1890 looking at a tractor for the first time and wondering what will happen to the millions of farmhands being paid to plow fields. They couldn't have imagined the types of jobs or lives we live today that are made possible because a machine does a job humans had to do for thousands of years. As we move into the 2020s, we are crossing a similar frontier for humanity which may displace millions of people's jobs but may also give rise to completely new ways of living and working.



Baidu debuts updated AI framework and R&D initiative – VentureBeat


At Wave Summit, Baidu's biannual deep learning conference, the company announced version 2.1 of PaddlePaddle, its framework for AI and machine learning model development. Among the highlights are a large-scale graph query engine; four pretrained models; and PaddleFlow, a cloud-based suite of machine learning developer tools that includes APIs and a software development kit (SDK). Baidu also unveiled what it's calling the Age of Discovery, a 1.5 billion RMB (~$235 million) grant program that will invest over the next three years in AI education, research, and entrepreneurship.

At Wave Summit, Baidu CTO Haifeng Wang outlined the top AI trends from the company's perspective. Deep learning with knowledge graphs has significantly improved the performance and interpretability of models, he said, while multimodal semantic understanding across language, speech, and vision has become achievable through graphs and language semantics. Moreover, Wang noted, deep learning platforms are coordinating closely with hardware and software to meet various development needs, including computing power, power consumption, and latency.

To this end, PaddlePaddle 2.1 introduces optimization of automatic mixed precision, which can speed up the training of models including Googles BERT by up to 3 times. New APIs reduce memory usage and further improve training speeds, as well as adding support for data preprocessing, GPU-based computation, mixed-precision training, and model sharing.
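For readers curious what that looks like in code, here is a minimal sketch of the automatic mixed-precision training pattern used in PaddlePaddle 2.x. The model, data and hyperparameters are invented, and exact API details may differ between releases.

```python
# Minimal sketch of PaddlePaddle-style automatic mixed precision (AMP) training.
# Model, data and hyperparameters are invented; API details may vary by version.
import paddle

model = paddle.nn.Linear(128, 10)
optimizer = paddle.optimizer.Adam(parameters=model.parameters())
scaler = paddle.amp.GradScaler(init_loss_scaling=1024)   # guards against fp16 underflow

x = paddle.randn([32, 128])
y = paddle.randn([32, 10])

for step in range(10):
    with paddle.amp.auto_cast():            # run the forward pass in mixed precision
        loss = paddle.nn.functional.mse_loss(model(x), y)
    scaled = scaler.scale(loss)             # scale the loss before backpropagation
    scaled.backward()
    scaler.minimize(optimizer, scaled)      # unscale gradients and update weights
    optimizer.clear_grad()
```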

Also in tow with PaddlePaddle 2.1 are four new language models built from Baidu's ERNIE. ERNIE, which Baidu developed and open-sourced in 2019, learns pretrained natural language tasks through multitask learning, where multiple learning tasks are solved at the same time by exploiting commonalities and differences between them. Beyond this, PaddlePaddle 2.1 brings an optimized pruning compression technology called PaddleSlim, as well as LiteKit, a toolkit for mobile developers that aims to reduce the development costs of edge AI.

PaddlePaddle Enterprise, Baidu's business-oriented set of machine learning tools, gained a new service this month in PaddleFlow. PaddleFlow is a cloud platform that provides capabilities for developers to build AI systems, including resource management and scheduling, task execution, and service deployment via developer APIs, a command-line client, and an SDK.

In related news, Baidu says that as a part of its new Age of Discovery initiative, the company will invest RMB 500 million ($78 million) in capital and resources to support 500 academic institutions and train 5,000 AI tutors and 500,000 students with AI expertise by 2024. Baidu also plans to pour RMB 1 billion ($156 million) into 100,000 businesses for intelligent transformation and AI talent training.

Laments over the AI talent shortage have also become a familiar enterprise refrain. O'Reilly's 2021 AI Adoption in the Enterprise paper found that a lack of skilled people and difficulty hiring topped the list of challenges in AI, with 19% of respondents citing this as a significant barrier. In 2018, Element AI estimated that of the 22,000 Ph.D.-educated researchers working on AI development and research globally, only 25% are well-versed enough in the technology to work with teams to take it from research to application.

PaddlePaddle researchers and developers will collaborate with the open source community to build a deep learning open source ecosystem and break the boundaries of AI technology, Baidu said in a press release. With the permeation of AI across various industries, it is critical for platforms to keep lowering their threshold to accelerate intelligent transformation.


Will this creepy AI platform put artists out of a job? – Digital Arts

How do you use Artbreeder, and should you use it?

When discussions of AI and its effect on the labour force come up, the creative industry usually shrugs its shoulders. After all, how can artificial intelligence accurately replicate an artist's singular vision?

It's an understandable stance, but recent weeks may have sent a little shiver down creatives' spines. Only yesterday did Fast Company reveal that a designer commissioned by clients turned out to be an AI system employed by one very adventurous design firm.

AI art generator Artbreeder meanwhile has grabbed attention through Bas Uterwijk's photo portraits of historical figures, all of which were generated from classic painted portraits. The photos, as seen here on Designboom, are remarkably polished and authentic. But would it work the other way around? Can equally excellent art stem from machine manipulated photos?

Artbreeder isn't the only AI art platform out there, but it's certainly grabbed the most attention. The website combines and manipulates any kind of image to produce countless variations using the magic of machine-learning, giving you the option to make landscapes, anime figures, portraits and more (and don't worry, all these images remain automatically private unless you choose otherwise.)

You can upload an image and let the website do the heavy lifting, or toggle features using its Edit-Genes tab. On portraits this allows you to change colours, race and accessories (adding facial hair or glasses), allowing the same for cartoonish anime and 'furry' creations, minus the last two options. Artbreeder also animates everything except anime characters, which might sadden the weaboos out there.

First up, note that you can only upload photos for the portrait and landscape sections. Everything else gives you a series of random images to play about with using slider controls; refresh the set if you'd prefer different options.

You can play with a given selection for the portraits and landscapes workspaces, but these are the fun ones which let you upload any image, as many have been doing in recent months.

With these photos you can generate new faces and landscapes as sourced from the original, either in photorealistic or impressionist form. Just use the slider controls with the 'Children' tab for these; if you get bored, you can 'Crossbreed' your upload with a public image from the database or another of your own.

Just be careful as you slide: whatever catches your eye won't be there when you toggle back as each 'push' creates a whole new series of variants. Save what you see as soon as you see it to avoid any disappointment.

Note that each image you upload is instantly converted, meaning it won't look like the one on your phone or hard drive. This variant of the original is what all the Children are based off, but you can tinker with this master copy using the Edit-Genes tab mentioned earlier.

As a free account user, you can upload up to five images in total. In order to do anything with your personal imagery, you have to click on the relevant page (Portrait, Landscape) and upload from there; images aren't stored in the cloud for you to share across sub-platforms. In other words, if you upload four pics in one of the Portrait tabs, you'll only be able to upload one Landscape pic, and that'll be the only one you can manipulate aside from the ones given to you by Artbreeder.

There are two Portrait pages. The other is marked as 'Old', presumably meaning it uses older networks instead of generating more classical-style portraits as you'd think; I preferred the results of the more current option.

No matter what section you use, you can create and download as many animations as you like. There is a limit on how many high-res versions you can download of your creations; paying allows you to save and upload more.

But would we recommend you to pay? And should artists be worried about websites like this? Judging by the portraits created, the answer right now would be no. Paintings generated are very rarely lifelike (if that's what you're looking for) and I noticed a limited output of painting styles; also, most faces look creepy and 'off'; there's barely anything cosy here. It can also take a while to upload a photo due to the popularity of the site; depending on time of day, processing can last up to an hour when queued.

The more impressive results though came from landscapes. While again not entirely faithful recreations, turning my industrial city-scape into a fantasy mountain scene, Artbreeder's BigGAN models resulted in these very impressive worlds; here's the original version I uploaded for you to compare with its 'remixes' below it.

Game environment artists may be intrigued by these results, and no doubt the software at work is improving as we speak. With an improved website, Artbreeder could be a very creepy 'rival' for digital artists indeed.



NHS is leading the way in AI adoption – ITProPortal

NHS Trusts are already waist deep into their AI deployment, changing the way they care for patients, diagnose illnesses and manage paperwork.

A new report by NetApp, based on a Freedom of Information request, found that out of the 61 NHS Trusts that responded to the request, more than half (52 per cent) have already deployed AI technologies and are using them for clinical care (20 per cent) and patient diagnosis (16 per cent). Sixteen per cent said they'd roll out AI within the next two years, and three quarters have already brought in an AI leader for their trust.

Among the different things AI can provide, NHS Trusts are mostly focused on speech recognition (28 per cent), robotic process automation (25 per cent) and machine learning (13 per cent), as these tools alleviate the pressure healthcare workers are faced with on a daily basis. They can also improve patient care, as well as speed up the delivery of personalised medicine.

The Trusts are also worried about ethics and patient data security, which is why more than half (59 per cent) have already reviewed, or plan to review, the governance policies for patient data.

Artificial intelligence has limitless potential in healthcare services and it's encouraging to see the technology being used in half of NHS Trusts, said George Kurian, NetApp chief executive officer and president.

As healthcare moves towards preventative treatment and personalized medicines, artificial intelligence leaders in the NHS have a complex challenge to break through cultural and organizational barriers when it comes to providing healthcare professionals the access to data they require.

Progress is being made and the further deployment of AI-powered technologies such as speech recognition and machine learning will alleviate pressure on staff, accelerate innovation and reduce costs, Kurian continued. The world of artificial intelligence starts with data, and we are helping healthcare organizations simplify data services and build their data fabrics.


Management AI: Matching AI Models To Business Needs, Unsupervised Learning, Customer Segmentation, And Association – Forbes


This is part two of my series based on Lomit Patel's Lean AI (O'Reilly, ISBN: 978-1-492-05931-8). The first discussed how business applications can benefit from supervised learning. This article will discuss unsupervised learning. Again, refer to the book's Figure 5-1, included below, for an overview of the four key types of artificial intelligence (AI) leveraged in machine learning (ML).

Different types of machine learning and business applications

Most managers, both line and even IT, do not need to understand the intricacies of machine learning. However, a high-level knowledge will help their organizations understand that AI is a tool and must be linked to real business problems. Having an idea of how the high-level classifications of ML link to real-world issues can help focus both the technical and business staff to provide effective solutions.

As a quick reminder, supervised learning is used when we understand the results we want identified. The features (parameters, variables, whatever) we need can then be chosen and the data labeled appropriately. That allows analysis that examines data to see where they fit within known patterns of results.

That is not always possible, nor preferable. Sometimes there are new relationships, things that might not be expected. In many business arenas, but especially in the case of consumer markets, there is so much data to wade through that identifying a link before competitors recognize the same relationship provides a critical competitive advantage. "Unsupervised learning is ideal for exploring data with little or no knowledge about what it could represent. It can be very helpful in finding patterns in raw data when you may not know exactly what you are looking for," says Lomit Patel.

Let us look at a couple of examples.

Customer segmentation is a core marketing tool. The goal is to understand the different types of buyers, see what links groups of individuals as per traits, and then build marketing campaigns that accurately address the needs of each group, or cluster, of customers.

At first blush, that might seem to be something that could use supervised learning. After all, we know there are traits based on gender, age, income, and other segments that we can define, and into which customers can be classified. That type of segmentation is clearly amenable to supervised learning, and we shouldn't ignore any tool we have.

What's changed is the exponential increase in data we have about individuals, groups, and even companies. So, for instance, it might end up that people who shop at store A are more likely to buy product X, regardless of their age. Analysis continues to find new ways of clustering people based on data, ways we would never have thought of in advance and for which classification doesn't work.

That is the difference between classification and clustering, things that, at a high level, sound the same. Supervised learning is for when we know the classifications (cancer vs. no cancer), while unsupervised learning can cluster data points based on variables where no previous link might have been expected. Customer segmentation is becoming far more advanced with unsupervised learning.
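As a minimal sketch of the clustering idea (the behavioural features and their values are invented), k-means can group customers without any predefined labels:

```python
# Toy unsupervised segmentation: no labels, the algorithm finds the groups.
# Feature values (visits to store A, spend on product X, basket size) are invented.
import numpy as np
from sklearn.cluster import KMeans

customers = np.array([
    [8, 120.0, 3.1],   # visits_store_A, spend_product_X, avg_basket_size
    [9, 110.0, 2.8],
    [1,  15.0, 5.5],
    [0,  10.0, 6.0],
    [7, 130.0, 3.4],
    [2,  20.0, 5.9],
])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
print(segments)  # e.g. [0 0 1 1 0 1]: store-A/product-X shoppers vs. the rest,
                 # a grouping no predefined age or income classification specified
```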

Association is used every day in ecommerce. Everyone has seen shopping, movie, and other sites that suggest people who like X also like Z. That is association. Supervised learning does not work here, as we have no idea what people like until that like is expressed. By building a neural network that can analyze those likes, unsupervised training can lead to a system that learns from the data to make suggestions. That is much better than training a machine based on current preferences because, as every marketer knows, preferences are not constant.

That last phrase is critical. Cancer is cancer. We might find new cancers, or find out a specific new way to detect an existing one. At that point, algorithms can be updated, but we're still specifying exactly what the machine should identify, using a fixed feature set.

Associations, relationships between products, preferences, and more, are often part of culture, and that culture is constantly undergoing change. A strong ML system is trained to look at all the data and notice relationships that are previously unknown, and even the loosening of previously strong relationships. It is unsupervised learning that allows the systems to not be limited by what we already think we know.
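Below is a toy sketch of the "people who like X also like Z" association idea using simple co-occurrence counts. The baskets are invented, and production systems learn such associations at far larger scale, often with neural networks as described above.

```python
# Toy association mining: which items co-occur with a given item?
# Baskets are invented; real systems learn such associations at much larger scale.
from collections import Counter
from itertools import combinations

baskets = [
    {"film_x", "film_z"},
    {"film_x", "film_z", "film_y"},
    {"film_x", "film_w"},
    {"film_z", "film_y"},
    {"film_x", "film_z"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

item_counts = Counter(item for basket in baskets for item in basket)

# "Confidence" that someone who likes film_x also likes film_z
confidence = pair_counts[("film_x", "film_z")] / item_counts["film_x"]
print(f"like X -> like Z with confidence {confidence:.2f}")  # 3 of 4 baskets: 0.75
```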

When you know the results you need to get, supervised learning is the way to go. However, with the modern volumes of data, organizations can gain new and unexpected insight from seemingly unrelated data points. Unsupervised learning is the tool that helps find those new relationships, the new patterns and links that add insight in many areas of business.

You might have noticed that not everything in the world is black and white. Well, supervised and unsupervised learning aren't completely independent. While some of the discussion above hints at that, the next entry in this Management AI series will discuss just that: why hybrid systems are useful.


Artificial intelligence that mimics the brain needs sleep just like humans, study reveals – The Independent

Artificial intelligence designed to function like a human could require periods of rest similar to those needed by biological brains.

Researchers at Los Alamos National Laboratory in the US discovered that neural networks experienced benefits that were "the equivalent of a good night's rest" when exposed to an artificial analogue of sleep.

"We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development," said Yijing Watkins, a computer scientist at Los Alamos.


The discovery was made by the team of researchers while working on a form of artificial intelligence designed to mimic how humans learn to see.

The AI became unstable during long periods of unsupervised learning, as it attempted to classify objects using their dictionary definitions without having any prior examples to compare them to.

When exposed to a state that is similar to what a human brain experiences during sleep, the neural network's stability was restored.
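The study's own model is not reproduced here; purely as an illustrative toy (all dynamics and constants are invented), the sketch below conveys the general flavour of the idea, interleaving noise-driven "sleep" phases that rein in an otherwise runaway unsupervised update:

```python
# Purely illustrative toy, not the Los Alamos model: a Hebbian-style update that
# drifts toward instability during "waking" learning, with noise-driven "sleep"
# phases that renormalise the weights. All dynamics and constants are invented.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16) * 0.1

def wake(w, steps=200, lr=0.05):
    for _ in range(steps):
        x = rng.normal(size=16)
        y = w @ x
        w = w + lr * y * x               # plain Hebbian growth: norm keeps increasing
    return w

def sleep(w, steps=200, lr=0.05):
    for _ in range(steps):
        x = rng.normal(size=16)          # noise input, loosely analogous to sleep
        w = w / np.linalg.norm(w)        # homeostatic renormalisation during "sleep"
        y = w @ x
        w = w + lr * y * (x - y * w)     # Oja-style decay term keeps the norm bounded
    return w

for epoch in range(3):
    w = wake(w)
    print(f"after waking {epoch}: |w| = {np.linalg.norm(w):.2f}")   # blows up
    w = sleep(w)
    print(f"after sleep  {epoch}: |w| = {np.linalg.norm(w):.2f}")   # restored
```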


AI's future lies in its truly human past – Asia Times

This is the third installment in a series. Read part 1 here and part 2 here.

After a period of euphoria, criticisms of the deep learning-centered approach to artificial intelligence have been growing. We seem to be entering a new episode of the manic-depressive cycles that have afflicted AI from the very beginning, correlated to ebbs and flows of funding from government agencies and investors.

Many today see the future of artificial intelligence in a revival of so-called symbolic AI, the early approach to AI which aimed at reaching a truly human level of intelligence using methods of symbolic (mathematical) logic.

The initial idea is to program the system with a set of axioms (rules), an array of predicates (symbols representing objects, relations, attributes, fields, properties, functions and concepts) and rules of inference, so that the system can carry out logical reasoning of the sort humans do.
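In miniature, that style of system amounts to a knowledge base plus an inference loop that applies rules until nothing new can be derived. The sketch below is an invented toy, not drawn from any real system, and uses single-premise rules for brevity:

```python
# Tiny forward-chaining sketch in the symbolic-AI style described above.
# Facts and rules are invented toy examples; real systems like Cyc are vastly
# larger and use full predicate calculus rather than single-premise rules.
facts = {("human", "socrates"), ("parent", "socrates", "lamprocles")}

# Single-premise rules: (premise pattern, conclusion pattern), with "?x"-style variables.
rules = [
    (("human", "?x"), ("mortal", "?x")),
    (("parent", "?x", "?y"), ("ancestor", "?x", "?y")),
]

def unify(pattern, fact):
    """Return variable bindings if the fact matches the pattern, else None."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    bindings = {}
    for p, f in zip(pattern[1:], fact[1:]):
        if p.startswith("?"):
            if bindings.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return bindings

changed = True
while changed:                              # apply rules until no new facts appear
    changed = False
    for premise, conclusion in rules:
        for fact in list(facts):
            bindings = unify(premise, fact)
            if bindings is not None:
                derived = tuple(bindings.get(t, t) for t in conclusion)
                if derived not in facts:
                    facts.add(derived)
                    changed = True

print(sorted(facts))  # now includes ('mortal', 'socrates') and ('ancestor', ...)
```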

Early attempts to develop AI in this direction were laborious, and achieved useful results mainly in the area of so-called expert systems. In 1983 the AI pioneer John McCarthy noted that expert systems' performance in their specialized domains is often very impressive.

Nevertheless, hardly any of them have certain common-sense knowledge and ability possessed by any non-feeble-minded human. This lack makes them brittle. Common-sense facts and methods are only very partially understood today, and extending this understanding is the key problem facing AI.

By far the most ambitious attempt to solve this problem is the Cyc project launched by the AI specialist Douglas Bruce Lenat around 1984. Gradually Lenat and his team built a gigantic AI system, which by 2017 had 1,500,000 terms (416,000 categories of objects, more than 1 million individual objects), 42,500 predicates (relations, attributes, fields, properties, functions), 2,093,000 facts, and 24 million common-sense rules and assertions.

Many of the rules were written individually by members of Lenat's group, taking over 1,000 man-years of work.

The heart of Cyc is an inference engine which derives conclusions from statements built up from the terms and predicates in accordance with the rules. For this purpose, Cyc employs tools of mathematical logic such as second-order predicate calculus, modal logic and context logic.

This is all very impressive. Leaving aside the issue of Cyc's performance in practice, which I am not in a position to judge, some important questions suggest themselves:

Does the structure of Cyc correspond to how common sense is acquired and used by human beings? Or is it more like a very sophisticated form of curve-fitting, like trying to square the circle?

In the effort to approximate a circle by polygons, we are obliged to keep adding more and more sides. But the sides of the polygon are still straight line segments; we never get anything curved. The stupid polygons have more and more corners, whereas the circle has none. In this respect they become more and more unlike the circle.
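The analogy can be made quantitative: the perimeter of a regular n-gon inscribed in a circle of radius r approaches the circumference as n grows, yet every approximant remains a sum of straight segments.

```latex
P_n = 2nr\sin\!\left(\frac{\pi}{n}\right) \longrightarrow 2\pi r
\quad \text{as } n \to \infty,
\qquad \text{yet each } P_n \text{ is still a sum of } n \text{ straight segments.}
```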

By analogy, the complexity and database volume of AI systems could grow indefinitely into the future, without ever getting to the Promised Land of human-like intelligence. This, however, would not prevent AI from becoming an ever more valuable instrument for human beings, assuming they remain intelligent enough to use it properly.

Stupidity of the AI pioneers

As this series continues I intend to address, in some detail, the second dimension of the AI stupidity problem, which goes back to pioneers of AI and pervades the field still today.

Here are some hints to whet the reader's appetite and round out some points made earlier in this series.

As emphasized at the outset, I do not mean to imply that the pioneers of AI were stupid people. That would be silly. Von Neumann and Alan Turing, for example, were exceptionally brilliant individuals, and that goes for many others in the field up to today.

Rather, what I have in mind is the stupidity of asserting or believing that human cognition is essentially algorithmic in nature, and/or is based on elementary neural processes of a digital sort, whose outcomes could be exactly reproduced by a large enough computer. More concisely, that the brain is a biological version of a digital computer.

Why was it stupid to make this sort of assertion? Why is it stupid to keep doing that today?

In installments to come I shall focus on two main reasons.

Neurobiology:

Real living neurons behave completely differently from the switching elements that make up a digital computer. Among many, many other things, living neurons, like all other living cells, have their own spontaneous activity. Like birds singing in the trees, neurons often emit pulses and rhythmic bursts of pulses in the absence of any signals from the neurons connected with them.

Real networks of neurons in the brains of humans and animals display nothing of the rigidly algorithmic behavior implied by the early mathematical models of neural networks. Nor do they behave anything like the artificial neural nets upon which today's deep learning AI systems are based.

As we continue this series I shall present fascinating discoveries in neurobiology in recent decades, discoveries that demolish the remains of the biological computer concept of the brain.

The modern data were of course not available to AI pioneers like John von Neumann and Alan Turing, nor to McCulloch and Pitts, authors of the first mathematical neural net models. The conceptual basis for AI was laid in the 1940s and 1950s.

But there was plenty of evidence of the spontaneous behavior of neurons, their pulsatile bursts, so-called frequency coding in neuromotor control, the existence of chemical forms of communication within the nervous system, the presence of membrane oscillations, etc.

However interesting and fruitful it was for the early development of AI, from a biological point of view the neural net model was nonsense from the very start. Nevertheless, the AI pioneers kept on keeping on with the stupid notion (S 1; see the first installment for the list of four) that the brain is essentially a digital computing system.

In believing they were about to solve the mysteries of the brain and mind, they grossly overestimated the power of their own mathematical methods and ways of thinking (S 3), methods that, it is true, had been successful in building the first atomic bomb, electronic computing and code-breaking machines, radar, automatic guidance systems and so forth during World War II and the immediate postwar period.

The nature of meaning

The meanings of essential concepts, as they actually occur in human cognitive activity, cannot be adequately defined or represented in formal, combinatorial terms. They cannot be stored in a computer database or incorporated into a software architecture.

The pioneers of artificial intelligence should have recognized this fact, even without the 1930s results of Kurt Gödel in mathematical logic, with which von Neumann, Turing and others were thoroughly familiar. But Gödel's arguments leave no reasonable doubt concerning the inexhaustibility of meaning, even for such supposedly simple concepts of mathematics as that of a finite set, or truth as it applies to propositions of mathematics.

Judging from their writings, one gets the impression that the pioneers of AI did not understand (S 4) the significance of Gödel's work.

It is not necessary, however, to study mathematical logic in order to recognize that meaning lies outside the universe of combinatorial relationships. No one has to be a frog at the bottom of the well.

Next: Neurobiological discoveries demolish the remains of the biological computer concept of the brain.

Jonathan Tennenbaum received his PhD in mathematics from the University of California in 1973 at age 22. Also a physicist, linguist and pianist, he is a former editor of FUSION magazine. He lives in Berlin and travels frequently to Asia and elsewhere, consulting on economics, science and technology.


See the original post here:

AIs future lies in its truly human past - Asia Times

AI and Privacy Line: AI as a Helper and as a Danger – ReadWrite

As AI becomes increasingly adopted in more industries, its users attempt to achieve the delicate balance of making efficient use of its utility while striving to protect the privacy of its customers. A common best practice of AI is to be transparent about its use and how it reaches certain outcomes. However, there is a good and bad side to this transparency. Here is what you should know about the pros and cons of AI transparency, and possible solutions to achieve this difficult balance.

AI increases efficiency, leverages innovation, and streamlines processes. Being transparent about how it works and how it calculates results can lead to several societal and business advantages, including the following:

The number of uses of AI has continued to expand over the last several years. AI has even extended into the justice system, doing everything from fighting traffic tickets to being weighed as a fairer alternative to a jury.

When companies are transparent about their use of AI, they can increase users' access to justice. People can see how AI gathers key information and reaches certain outcomes. They can have access to greater technology and more information than they would typically have access to without the use of AI.

One of the original drawbacks of AI was the possibility of discriminatory outcomes when the AI was used to detect patterns and make assumptions about users based on the data it gathers.

However, AI has become much more sophisticated today and has even been used to detect discrimination. AI can help ensure that all users' information is included and that their voices are heard. In this regard, AI can be a great equalizer.

When AI users are upfront about their use of AI and explain this use to their client base, they are more likely to instill trust. People need to know how companies reach certain results, and being transparent can help bridge the gap between businesses and their customers.

Customers are willing to embrace AI: 62% of consumers surveyed in Salesforce's State of the Connected Consumer report said they were open to AI that improved their experiences, and businesses are willing to meet this demand.

72% of executives say that they try to gain customer trust and confidence in their product or service by being transparent about their use of AI, according to a recent Accenture survey. Companies that are able to be transparent about their use of AI and the security measures they have put in place to protect users data may be able to benefit from this increased transparency.

When people know that they are interacting with an AI system instead of being tricked into believing it is a human, they can often adapt their own behavior to get the information they need.

For example, people may type keywords into a chat box instead of complete sentences. Users may have a better understanding of the benefits and limitations of these systems and make a conscious decision to interact with the AI system.

While transparency can bring about some of the positive outcomes discussed above, it also has several drawbacks, including the following:

A significant argument against AI and its transparency is the potential lack of privacy. AI often gathers big data and uses a unique algorithm to assign a value to this data.

However, to obtain results, an AI system often tracks a user's every online activity: keystrokes, searches, and use of the business's website. Some of this information may also be sold to third parties.

Additionally, AI is often used to track people's online behavior, from which critical information about a person may be discerned.

Even when people choose not to share this sensitive information online, they may still lose control of it because of AI's capabilities.

Additionally, AI may track publicly available information. However, when no human checks the accuracy of this information, one person's data may be confused with another's.

When companies publish their explanations of AI, hackers may use this information to manipulate the system. For example, hackers may be able to make slight changes to the code or input to achieve an inaccurate outcome.

When hackers understand the reasoning behind an AI system, they may be able to influence its algorithm, and this kind of tampering is hard to detect as fraud. The system may therefore be easier to manipulate when stakeholders do not put additional safeguards in place.

Another potential problem that may arise when a company is transparent about its use of AI is the possibility that its proprietary trade secrets or intellectual property will be stolen by these hackers. These individuals may be able to study a company's explanations and recreate its proprietary algorithm, to the detriment of the business.

With so much information readily available online, 78 million Americans say they are concerned about cybersecurity. When companies spell out how they use AI, this may make it easier for hackers to access consumers' information or create a data breach, which can lead to identity theft, as in the notorious Equifax data breach that compromised the private records of 148 million Americans.

Disclosures about AI may bring about additional risks, such as more stringent regulation. When AI is confusing and inaccessible, regulators may not understand it or be able to regulate it. However, when businesses are transparent about the role of AI, this may bring about a more significant regulatory framework about AI and how it can be used. In this manner, innovators may be punished for their innovation.

When businesses are clear about how they are protecting consumers' data in the interest of being transparent, they may unwittingly make themselves more vulnerable to legal claims by consumers who allege that their information was not used properly. Clever lawyers can carefully review AI transparency information and then develop creative legal theories about the business's use of AI.

They may focus on what the business did not do to protect a consumers privacy, for example. They may then use this information to allege the business was negligent in its actions or omissions.

Additionally, many AI systems operate from a simpler model. Companies that are transparent about their algorithms may use less sophisticated algorithms that may omit certain information or cause errors in certain situations.

Experienced lawyers may be able to identify additional problems that the AI causes to substantiate their legal claims against the business.

Anyone who has seen a Terminator movie or basically any apocalyptic movie knows that even technology that was developed only for the noblest of reasons can potentially be weaponized or used as something that ultimately damages society.

Due to the potential for harm, many laws have already been passed that require certain companies to be transparent about their use of AI. For example, financial service companies are required to disclose major factors they use in determining a persons creditworthiness and why they make an adverse action in a lending decision.

If passed, proposed laws of this kind may establish new obligations that businesses must adhere to regarding how they collect information, how they use AI, and whether they must first obtain express consent from a consumer.

In 2019, an executive order was signed that directs federal agencies to devote resources to the development and maintenance of AI and calls for guidelines and standards that would allow federal agencies to regulate AI technology in a way that protects privacy and national security.

Even if a business is not yet required to be transparent about its use of AI, the time may soon come when it does not have a choice in the matter. In response to this likely outcome, some businesses are being proactive and establishing internal review boards that test the AI and identify ethical issues surrounding it.

They may also collaborate with their legal department and developers to create solutions to problems they identify. By carefully assessing their potential risk and establishing solutions to problems before disclosure becomes mandatory, businesses may be better situated to avoid the risks associated with AI transparency.

Image Credit: cottonbro; Pexels

Ben is a Web Operations Director at InfoTracer who takes a systems-wide view. He writes guides on overall security posture, both physical and cyber, and enjoys sharing best practices.

Excerpt from:

AI and Privacy Line: AI as a Helper and as a Danger - ReadWrite

Verint’s June Events Feature the Latest AI and Automation Advances in Customer and Workforce Engagement and New Banking Customer Experience Insights -…

MELVILLE, N.Y.--(BUSINESS WIRE)--Verint Systems Inc. (Nasdaq: VRNT), The Customer Engagement Company, today announced this month's event lineup showcasing the latest advancements in Interactive Voice Response (IVR) and Robotic Process Automation (RPA), as well as the latest results from Verint's Experience Index for the banking market.

Destination CRM Webinar Event: Smart IVRs: For Better Customer Experiences, June 17, 2 p.m. ET; Online Webinar

In this webinar, attendees will learn about the advances in IVR, a list that now includes chatbots, callback integration, omnichannel support, visual IVR, natural language processing and artificial intelligence. Verint's Michael Southworth, General Manager, Intelligent Self-Service, is among the presenters.

Forrester CX North America, June 18, 1 p.m. ET; Online Webinar

Verint's Eric Head, VP, Experience Management, and Anna Marie Redmond, VP, Client Experience Director, Sterling National Bank, will present New Banking CX Data: Journeys and Changing Values. Attendees will hear how customer experience in banking has changed based on insights from the latest Verint Experience Index and learn how to connect data and collaborate to meet expectations.

RPA LIVE 2020, June 23, noon ET; Online Webinar

Verint's Craig Seebach, VP, Strategy, Workforce Engagement Solutions, will present Measure and Manage Your RPA Workforce Seamlessly with Your Employees. Session attendees will learn how to manage a hybrid virtual workforce seamlessly, bots along with staff, to ensure they are capturing the speed and capacity gains from RPA and creating the right balance between resources, costs and service.

Banking CX: Agile Strategies for Strange Times, June 24, 1 p.m. ET; Online Webinar

Verint's Karly Szczepkowski, Research Analyst and Verint Experience Index (VXI) author, and Eric Head, VP, Experience Management, will discuss solutions banks can implement now to stay agile throughout the rest of 2020, and to keep listening for a successful long-term VoC strategy, based on insights from the latest Verint Experience Index for the banking industry.

To learn more about the solutions featured in the events, click the following links: Verint Experience Cloud, Verint Voice Self-Service, Verint Robotic Process Automation.

About Verint Systems Inc.

Verint (Nasdaq: VRNT) is a global leader in Actionable Intelligence solutions with a focus on customer engagement optimization and cyber intelligence. Today, over 10,000 organizations in more than 180 countries, including over 85 percent of the Fortune 100, count on intelligence from Verint solutions to make more informed, effective and timely decisions. Learn more about how we're creating A Smarter World with Actionable Intelligence at http://www.verint.com.

This press release contains forward-looking statements, including statements regarding expectations, predictions, views, opportunities, plans, strategies, beliefs, and statements of similar effect relating to Verint Systems Inc. These forward-looking statements are not guarantees of future performance and they are based on management's expectations that involve a number of risks, uncertainties and assumptions, any of which could cause actual results to differ materially from those expressed in or implied by the forward-looking statements. For a detailed discussion of these risk factors, see our Annual Report on Form 10-K for the fiscal year ended January 31, 2020, our Quarterly Report on Form 10-Q for the quarter ended April 30, 2020, and other filings we make with the SEC. The forward-looking statements contained in this press release are made as of the date of this press release and, except as required by law, Verint assumes no obligation to update or revise them or to provide reasons why actual results may differ.

VERINT, ACTIONABLE INTELLIGENCE, THE CUSTOMER ENGAGEMENT COMPANY, CUSTOMER ENGAGEMENT SOLUTIONS, CYBER INTELLIGENCE SOLUTIONS, GI2, FIRSTMILE, OMNIX, WEBINT, LUMINAR, RELIANT, VANTAGE, STAR-GATE, TERROGENCE, SENSECY, and VIGIA are trademarks or registered trademarks of Verint Systems Inc. or its subsidiaries. Verint and other parties may also have trademark rights in other terms used herein.

Visit link:

Verint's June Events Feature the Latest AI and Automation Advances in Customer and Workforce Engagement and New Banking Customer Experience Insights -...

Neuromodulation Is the Secret Sauce for This Adaptive, Fast-Learning AI – Singularity Hub

As obstinate and frustrating as we are sometimes, humans in general are pretty flexible when it comes to learningespecially compared to AI.

Our ability to adapt is deeply rooted in our brain's chemical base code. Although modern AI and neurocomputation have largely focused on loosely recreating the brain's electrical signals, chemicals are actually the prima donna of brain-wide neural transmission.

Chemical neurotransmitters not only allow most signals to jump from one neuron to the next, they also feed back and fine-tune a neuron's electrical signals to ensure they're functioning properly in the right contexts. This process, traditionally dubbed neuromodulation, has been front and center in neuroscience research for many decades. More recently, the idea has expanded to also include the process of directly changing electrical activity through electrode stimulation rather than chemicals.

Neural chemicals are the targets for most of our current medicinal drugs that re-jigger brain functions and states, such as antidepressants or anxiolytics. Neuromodulation is also an immensely powerful way for the brain to adapt flexibly, which is why it's perhaps surprising that the mechanism has rarely been explicitly incorporated into AI methods that mimic the brain.

This week, a team from the University of Liege in Belgium went old school. Using neuromodulation as inspiration, they designed a new deep learning model that explicitly adopts the mechanism to better learn adaptive behaviors. When challenged on a difficult navigational task, the team found that neuromodulation allowed the artificial neural net to better adjust to unexpected changes.

For the first time, cognitive mechanisms identified in neuroscience are finding algorithmic applications in a multi-tasking context. This research opens perspectives in the exploitation in AI of neuromodulation, a key mechanism in the functioning of the human brain, said study author Dr. Damien Ernst.

Neuromodulation often appears in the same breath as another jargon-y word, neuroplasticity. Simply put, they just mean that the brain has mechanisms to adapt; that is, neural networks are flexible or plastic.

Cellular neuromodulation is perhaps the grandfather of all learning theories in the brain. The famed Canadian psychologist and father of neural networks Dr. Donald Hebb popularized the theory in the mid-20th century, and it is now often described as neurons that fire together, wire together. On a high level, Hebbian learning summarizes how individual neurons flexibly change their activity levels so that they better hook up into neural circuits, which underlie most of the brain's computations.

However, neuromodulation goes a step further. Here, neurochemicals such as dopamine don't necessarily help wire up neural connections directly. Rather, they fine-tune how likely a neuron is to activate and link up with its neighbor. These so-called neuromodulators act like a temperature dial: depending on context, they either tell a neuron to calm down, so that it activates only when receiving a larger input, or hype it up, so that it jumps into action after a smaller stimulus.

Cellular neuromodulation provides the ability to continuously tune neuron input/output behaviors to shape their response to external stimuli in different contexts, the authors wrote. This level of adaptability especially comes into play when we try things that need continuous adjustments, such as how our feet strike uneven ground when running, or complex multitasking navigational tasks.

To be very clear, neuromodulation isn't directly changing synaptic weights. (Ugh, what?)

Stay with me. You might know that a neural network, either biological or artificial, is a bunch of neurons connected to each other with different strengths. How readily one neuron changes a neighboring neuron's activity, or how strongly they're linked, is often called the synaptic weight.

Deep learning algorithms are made up of multiple layers of neurons linked to each other through adjustable weights. Traditionally, tweaking the strengths of these connections, or synaptic weights, is how a deep neural net learns (for those interested, the biological equivalent is dubbed synaptic plasticity).
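For readers who want that standard picture spelled out, here is a miniature version: a toy model that learns purely by nudging its weights down a gradient. It is a generic illustration of weight-based learning, not the network from the Liège study, and all the numbers are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(100, 3))                  # inputs
    true_w = np.array([1.5, -2.0, 0.5])
    y = x @ true_w                                 # targets produced by a known rule

    w = np.zeros(3)                                # the "synaptic weights" to be learned
    learning_rate = 0.1
    for _ in range(200):
        error = x @ w - y
        gradient = x.T @ error / len(y)            # gradient of the mean squared error
        w -= learning_rate * gradient              # learning = changing the weights
    print(w)                                       # roughly recovers [1.5, -2.0, 0.5]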

However, neuromodulation doesn't act directly on weights. Rather, it alters how likely a neuron or network is to be capable of changing its connections; that is, its flexibility.

Neuromodulation is a meta-level of control, so it's perhaps not surprising that the new algorithm is actually composed of two separate neural networks.

The first is a traditional deep neural net, dubbed the main network. It processes input patterns and uses a custom method of activation: how likely a neuron in this network is to spark to life depends on the second network, the neuromodulatory network. Here, the neurons don't process input from the environment. Rather, they deal with feedback and context to dynamically control the properties of the main network.

Especially important, the authors said, is that the modulatory network scales in size with the number of neurons in the main one, rather than with the number of their connections. It's what makes the NMN different, they said, because this setup allows us to extend more easily to very large networks.
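The paper's exact equations aren't reproduced here, but the general arrangement, a small modulatory network that reads context and emits one gain and one bias per neuron of the main network, can be sketched roughly as follows. All sizes, names, and the context encoding are invented for illustration.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(1)

    # Main network: maps a 4-dimensional observation to 2 output values.
    W_main = rng.normal(size=(8, 4))   # 4 inputs -> 8 hidden neurons
    W_out = rng.normal(size=(2, 8))    # 8 hidden neurons -> 2 outputs

    # Neuromodulatory network: reads 3 context features and emits a gain and a
    # bias for each hidden neuron, so it scales with neuron count, not weight count.
    W_mod = rng.normal(size=(16, 3))

    def forward(observation, context):
        modulation = np.tanh(W_mod @ context)
        gain = 1.0 + modulation[:8]
        bias = modulation[8:]
        hidden = sigmoid(gain * (W_main @ observation) + bias)  # modulated activation
        return W_out @ hidden

    observation = rng.normal(size=4)
    print(forward(observation, np.array([1.0, 0.0, 0.0])))  # same observation...
    print(forward(observation, np.array([0.0, 1.0, 0.0])))  # ...different response per context

The point of the sketch is the division of labor: the main network's weights stay put while the context signal reshapes how readily each neuron fires.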

To gauge the adaptability of their new AI, the team pitted the NMN against traditional deep learning algorithms in a scenario using reinforcement learning; that is, learning through wins or mistakes.

In two navigational tasks, the AI had to learn to move towards several targets through trial and error alone. It's somewhat analogous to playing hide-and-seek while blindfolded in a completely new venue. The first task is relatively simple: you're only moving towards a single goal, and you can take off your blindfold to check where you are after every step. The second is more difficult in that you have to reach one of two marks. The closer you get to the actual goal, the higher the reward: candy in real life, and a digital analogy for the AI. If you stumble on the other mark, you get punished, the AI equivalent of a slap on the hand.
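As a rough illustration of that reward structure (the numbers, distances, and function below are invented, not taken from the paper): reward rises as the agent nears the true target, and landing on the decoy is penalized.

    import numpy as np

    def reward(agent_pos, true_goal, decoy_goal, hit_radius=0.1):
        # Toy reward for the two-target task: closer to the real goal is better,
        # reaching the decoy is punished. All values are invented for illustration.
        if np.linalg.norm(agent_pos - decoy_goal) < hit_radius:
            return -1.0                                # the "slap on the hand"
        distance = np.linalg.norm(agent_pos - true_goal)
        return 1.0 / (1.0 + distance)                  # nearer means more "candy"

    print(reward(np.array([0.2, 0.1]), np.array([0.0, 0.0]), np.array([1.0, 1.0])))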

Remarkably, NMNs learned both faster and better than traditional reinforcement learning deep neural nets. Regardless of how they started, NMNs were more likely to figure out the optimal route towards their target in much less time.

Over the course of learning, NMNs not only used their neuromodulatory network to change their main one, they also adapted the modulatory network itself (talk about meta). It means that as the AI learned, it didn't just flexibly adapt its learning; it also changed how it influences its own behavior.

In this way, the neuromodulatory network is a bit like a library of self-help books: you don't just solve a particular problem, you also learn how to solve the problem. The more information the AI got, the faster and better it fine-tuned its own strategy to optimize learning, even when feedback wasn't perfect. The NMN also didn't like to give up: even when already performing well, the AI kept adapting to further improve itself.

Results show that neuromodulation is capable of adapting an agent to different tasks and that neuromodulation-based approaches provide a promising way of improving adaptation of artificial systems, the authors said.

The study is just the latest in a push to incorporate more biological learning mechanisms into deep learning. We're at the beginning: neuroscientists, for example, are increasingly recognizing the role of non-neuron brain cells in modulating learning, memory, and forgetting. Although computational neuroscientists have begun incorporating these findings into models of biological brains, so far AI researchers have largely brushed them aside.

It's difficult to know which brain mechanisms are necessary substrates for intelligence and which are evolutionary leftovers, but one thing is clear: neuroscience is increasingly providing AI with ideas outside its usual box.

Image Credit: Image by Gerd Altmann from Pixabay

Excerpt from:

Neuromodulation Is the Secret Sauce for This Adaptive, Fast-Learning AI - Singularity Hub

Jack Ma Calls for Wisdom and Innovation at World AI… – Alizila

Machine intelligence must go hand-in-hand with human wisdom, Alibaba Group founder Jack Ma said Thursday.

Speaking at the World Artificial Intelligence Conference, Ma said that humans should strive to better understand themselves and the Earth, especially in the face of a global crisis like the Covid-19 outbreak.

This pandemic has shown us how little we know about ourselves and how little we know about the Earth. Because we don't know ourselves, don't know the world we are living in, don't understand the Earth and don't know how to cherish and preserve the Earth, we have created many troubles and disasters, Ma said via video message. He added that, despite the resources, wealth, knowledge and technological prowess enjoyed by society today, human wisdom was still the key to addressing the world's challenges and was needed to enhance communication and cooperation to find impactful and long-lasting solutions.

With the theme of Intelligent World, Indivisible Community, the three-day conference features presentations, keynote speeches and panel discussions with prominent global figures from the realm of science and technology, including Turing Award winners Yoshua Bengio and Andrew Yao, Director General of the United Nations Industrial Development Organization Li Yong and Tesla CEO Elon Musk.

In his speech during the opening ceremony, Ma said that while the technologies of the past improved our way of living, the technologies of today and tomorrow should help humankind survive better.

During the pandemic, people have used internet technology to survive, not just for themselves, but also for others, he said. There are many cases: going to school, having a meeting, shopping, visiting a doctor. All of these activities rely on digital technology. To innovate in order to survive is the strongest and most irresistible force.

He also pointed to an AI algorithm that Alibaba's research and innovation institute, DAMO Academy, developed to help diagnose Covid-19. Informed by data from thousands of computed-tomography scans and trained by deep learning, the algorithm can accurately detect the virus in 20 seconds, vastly shortening the time it takes for doctors to review CT scans, confirm cases and move on to treatment and supportive measures.

Ma said that such innovations were indicative of the quickening pace of digitalization in the world.

Technological transformation will come earlier and its speed will accelerate. We need to be ready, he said.


View original post here:

Jack Ma Calls for Wisdom and Innovation at World AI... - Alizila

An invisible hand: Patients aren’t being told about the AI systems advising their care – STAT

Since February of last year, tens of thousands of patients hospitalized at one of Minnesota's largest health systems have had their discharge planning decisions informed with help from an artificial intelligence model. But few if any of those patients have any idea about the AI involved in their care.

That's because frontline clinicians at M Health Fairview generally don't mention the AI whirring behind the scenes in their conversations with patients.

At a growing number of prominent hospitals and clinics around the country, clinicians are turning to AI-powered decision support tools, many of them unproven, to help predict whether hospitalized patients are likely to develop complications or deteriorate, whether they're at risk of readmission, and whether they're likely to die soon. But these patients and their family members are often not informed about or asked to consent to the use of these tools in their care, a STAT examination has found.

The result: Machines that are completely invisible to patients are increasingly guiding decision-making in the clinic.

Hospitals and clinicians are operating under the assumption that you do not disclose, and that's not really something that has been defended or really thought about, Harvard Law School professor Glenn Cohen said. Cohen is the author of one of only a few articles examining the issue, which has received surprisingly scant attention in the medical literature even as research about AI and machine learning proliferates.

In some cases, there's little room for harm: patients may not need to know about an AI system that's nudging their doctor to move up an MRI scan by a day, like the one deployed by M Health Fairview, or to be more thoughtful, such as with algorithms meant to encourage clinicians to broach end-of-life conversations. But in other cases, lack of disclosure means that patients may never know what happened if an AI model makes a faulty recommendation that is part of the reason they are denied needed care or undergo an unnecessary, costly, or even harmful intervention.

That's a real risk, because some of these AI models are fraught with bias, and even those that have been demonstrated to be accurate largely haven't yet been shown to improve patient outcomes. Some hospitals don't share data on how well the systems work, justifying the decision on the grounds that they are not conducting research. But that means that patients are not only being denied information about whether the tools are being used in their care, but also about whether the tools are actually helping them.

The decision not to mention these systems to patients is the product of an emerging consensus among doctors, hospital executives, developers, and system architects, who see little value but plenty of downside in raising the subject.

They worry that bringing up AI will derail clinicians' conversations with patients, diverting time and attention away from actionable steps that patients can take to improve their health and quality of life. Doctors also emphasize that they, not the AI, make the decisions about care. An AI system's recommendation, after all, is just one of many factors that clinicians take into account before making a decision about a patient's care, and it would be absurd to detail every single guideline, protocol, and data source that gets considered, they say.

Internist Karyn Baum, who's leading M Health Fairview's rollout of the tool, said she doesn't bring up the AI to her patients in the same way that I wouldn't say that the X-ray has decided that you're ready to go home. She said she would never tell a fellow clinician not to mention the model to a patient, but in practice, her colleagues generally don't bring it up either.

Four of the health system's 13 hospitals have now rolled out the hospital discharge planning tool, which was developed by the Silicon Valley AI company Qventus. The model is designed to identify hospitalized patients who are likely to be clinically ready to go home soon and flag steps that might be needed to make that happen, such as scheduling a necessary physical therapy appointment.

Clinicians consult the tool during their daily morning huddle, gathering around a computer to peer at a dashboard of hospitalized patients, estimated discharge dates, and barriers that could prevent that from occurring on schedule. A screenshot of the tool provided by Qventus lists a hypothetical 76-year-old patient, N. Griffin, who is scheduled to leave the hospital on a Tuesday, but the tool prompts clinicians to consider that he might be ready to go home Monday, if he can be squeezed in for an MRI scan by Saturday.

Baum said she sees the system as a tool to help me make a better decision, just like a screening tool for sepsis, or a CT scan, or a lab value, but it's not going to take the place of that decision, she said. To her, it doesn't make sense to mention it to patients. If she did, Baum said, she could end up in a lengthy discussion with patients curious about how the algorithm was created.

That could take valuable time away from the medical and logistical specifics that Baum prefers to spend time talking about with patients flagged by the Qventus tool. Among the questions she brings up with them: How are the patient's vital signs and lab test results looking? Does the patient have a ride home? How about a flight of stairs to climb when they get there, or a plan for getting help if they fall?

Some doctors worry that while well-intentioned, the decision to withhold mention of these AI systems could backfire.

I think that patients will find out that we are using these approaches, in part because people are writing news stories like this one about the fact that people are using them, said Justin Sanders, a palliative care physician at Dana-Farber Cancer Institute and Brigham and Women's Hospital in Boston. It has the potential to become an unnecessary distraction and undermine trust in what we're trying to do in ways that are probably avoidable.

Patients themselves are typically excluded from the decision-making process about disclosure. STAT asked four patients who have been hospitalized with serious medical conditions, kidney disease, metastatic cancer, and sepsis, whether they'd want to be told if an AI-powered decision support tool were used in their care. They expressed a range of views: three said they wouldn't want to know if their doctor was being advised by such a tool, but a fourth patient spoke out forcefully in favor of disclosure.

This issue of transparency and upfront communication must be insisted upon by patients, said Paul Conway, a 55-year-old policy professional who has been on dialysis and received a kidney transplant, both consequences of managing kidney disease since he was a teenager.

The AI-powered decision support tools being introduced in clinical care are often novel and unproven. But does their rollout constitute research?

Many hospitals believe the answer is no, and they're using that distinction as justification for the decision not to inform patients about the use of these tools in their care. As some health systems see it, these algorithms are tools being deployed as part of routine clinical care to make hospitals more efficient. In their view, patients consent to the use of the algorithms by virtue of being admitted to the hospital.

At UCLA Health, for example, clinicians use a neural network to pinpoint primary care patients at risk of being hospitalized or frequently visiting the emergency room in the next year. Patients are not made aware of the tool because it is considered a part of the health system's quality improvement efforts, according to Mohammed Mahbouba, who spoke to STAT in February when he was UCLA Health's chief data officer. (He has since left the health system.)

This is in the context of clinical operations, Mahbouba said. It's not a research project.

Oregon Health and Science University uses a regression-powered algorithm to monitor the majority of its adult hospital patients for signs of sepsis. The tool is not disclosed to patients because it is considered part of hospital operations.

This is meant for operational care; it is not meant for research. So, similar to how you'd have a patient aware of the fact that we're collecting their vital sign information, it's a part of clinical care. That's why it's considered appropriate, said Abhijit Pandit, OHSU's chief technology and data officer.

But there is no clear line that neatly separates medical research from hospital operations or quality control, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison. And researchers and bioethicists often disagree on what constitutes one or the other.

This has been a huge issue: where is that line between quality control, operational control, and research? There's no widespread agreement, Ossorio said.

To be sure, there are plenty of contexts in which hospitals deploying AI-powered decision support tools are getting patients' explicit consent to use them. Some do so in the context of clinical trials, while others ask permission as part of routine clinical operations.

At Parkland Hospital in Dallas, where the orthopedics department has a tool designed to predict whether a patient will die in the next 48 hours, clinicians inform patients about the tool and ask them to sign onto its use.

Based on the agreement we have, we have to have patient consent explaining why we're using this, how we're using it, how we'll use it to connect them to the right services, etc., said Vikas Chowdhry, the chief analytics and information officer for a nonprofit innovation center incubated out of Parkland Health System in Dallas.

Hospitals often navigate those decisions internally, since manufacturers of AI systems sold to hospitals and clinics generally don't make recommendations to their customers about what, if anything, frontline clinicians should say to patients.

Jvion, a Georgia-based health care AI company that markets a tool that assesses readmission risk in hospitalized patients and suggests interventions to prevent another hospital stay, encourages the handful of hospitals deploying its model to exercise their own discretion about whether and how to discuss it with patients. But in practice, the AI system usually doesn't get brought up in these conversations, according to John Frownfelter, a physician who serves as Jvion's chief medical information officer.

Since the judgment is left in the hands of the clinicians, it's almost irrelevant, Frownfelter said.

When patients are given an unproven drug, the protocol is straightforward: They must explicitly consent to enroll in a clinical study authorized by the Food and Drug Administration and monitored by an institutional review board. And a researcher must inform them about the potential risks and benefits of taking the medication.

That's not how it works with AI systems being used for decision support in the clinic. These tools aren't treatments or fully automated diagnostic tools, and they don't directly determine what kind of therapy a patient may receive, all of which would otherwise make them subject to more stringent regulatory oversight.

Developers of AI-powered decision support tools generally don't seek approval from the FDA, in part because the 21st Century Cures Act, which was signed into law in 2016, was interpreted as taking most medical advisory tools out of the FDA's jurisdiction. (That could change: in guidelines released last fall, the agency said it intends to focus its oversight powers on AI decision-support products meant to guide treatment of serious or critical conditions but whose rationale cannot be independently evaluated by doctors, a definition that lines up with many of the AI models that patients aren't being informed about.)

The result, for now, is that disclosure around AI-powered decision support tools falls into a regulatory gray zone, and that means the hospitals rolling them out often lack incentive to seek informed consent from patients.

A lot of people justifiably think there are many quality-control activities that health care systems should be doing that involve gathering data, Wisconsin's Ossorio said. And they say it would be burdensome and confusing to patients to get consent for every one of those activities that touch on their data.

In contrast to the AI-powered decision support tools, there are a few commonly used algorithms subject to the regulation laid out by the Cures Act, such as the type behind the genetic tests that clinicians use to chart a course of treatment for a cancer patient. But in those cases, the genetic test is extremely influential in determining what kind of therapy or drug a patient may receive. Conversely, there's no similarly clear link between an algorithm designed to predict whether a patient may be readmitted to the hospital and the way they'll be treated if and when that occurs.

Still, Ossorio would support an ultra-cautious approach: I do think people throw a lot of things into the operations bucket, and if it were me, I'd say just file for institutional review board approval and either get consent or justify why you could waive it.

Further complicating matters is the lack of publicly disclosed data showing whether and how well some of the algorithms work, as well as their overall impact on patients. The public doesn't know whether OHSU's sepsis-prediction algorithm actually predicts sepsis, nor whether UCLA's admissions tool actually predicts admissions.

Some AI-powered decision support tools are supported by early data presented at conferences and published in journals, and several developers say they're in the process of sharing results: Jvion, for example, has submitted for publication a study that showed a 26% reduction in readmissions when its readmissions risk tool was deployed; that paper is currently in review, according to Jvion's Frownfelter.

But asked by STAT for data on their tools' impact on patient care, several hospital executives declined or said they hadn't completed their evaluations.

A spokesperson from UCLA said it had yet to complete an assessment of the performance of its admissions algorithm.

A spokesperson from OHSU said that according to its latest report, run before the Covid-19 pandemic began in March, its sepsis algorithm had been used on 18,000 patients, of whom it had flagged 1,659 as at risk, with nurses indicating concern for 210 of them. He added that the tool's impact on patients, as measured by hospital death rates and length of time spent in the facility, was inconclusive.

It's disturbing that they're deploying these tools without having the kind of information that they should have, said Wisconsin's Ossorio. Before you use a tool to do medical decision-making, you should do the research.

Ossorio said it may be the case that these tools are merely being used as an additional data point and not to make decisions. But if health systems don't disclose data showing how the tools are being used, there's no way to know how heavily clinicians may be leaning on them.

They always say these tools are meant to be used in combination with clinical data and it's up to the clinician to make the final decision. But what happens if we learn the algorithm is relied upon over and above all other kinds of information? she said.

There are countless advocacy groups representing a wide range of patients, but no organization exists to speak for those who've unknowingly had AI systems involved in their care. They have no way, after all, of even identifying themselves as part of a common community.

STAT was unable to identify any patients who learned after the fact that their care had been guided by an undisclosed AI model, but it asked several patients how they'd feel, hypothetically, about an AI system being used in their care without their knowledge.

Conway, the patient with kidney disease, maintained that he would want to know. He also dismissed the concern raised by some physicians that mentioning AI would derail a conversation. Woe to the professional that as you introduce a topic, a patient might actually ask questions and you have to answer them, he said.

Other patients, however, said that while they welcomed the use of AI and other innovations in their care, they wouldn't expect or even want their doctor to mention it. They likened it to not wanting to be privy to numbers around their prognosis, such as how much time they might expect to have left, or how many patients with their disease are still alive after five years.

Any of those statistics or algorithms are not going to change how you confront your disease, so why burden yourself with them, is my philosophy, said Stacy Hurt, a patient advocate from Pittsburgh who received a diagnosis of metastatic colorectal cancer in 2014, on her 44th birthday, when she was working as an executive at a pharmaceutical company. (She is now doing well and is approaching five years with no evidence of disease.)

Katy Grainger, who lost the lower half of both legs and seven fingertips to sepsis, said she would have supported her care team using an algorithm like OHSU's sepsis model, so long as her clinicians didn't rely on it too heavily. She said she also would not have wanted to be informed that the tool was being used.

I don't monitor how doctors do their jobs. I just trust that they're doing it well, she said. I have to believe that I'm not a doctor and I can't control what they do.

Still, Grainger expressed some reservations about the tool, including the idea that it may have failed to identify her. At 52, Grainger was healthy and fairly young when she developed sepsis. She had been sick for days and visited an urgent care clinic, which gave her antibiotics for what they thought was a basic bacterial infection, but which quickly progressed to a serious case of sepsis.

I would be worried that [the algorithm] could have missed me. I was young, well, 52, healthy, in some of the best shape of my life, eating really well, and then boom, Grainger said.

Dana Deighton, a marketing professional from Virginia, suspects that if an algorithm had scanned her data back in 2012, it would have made a dire prediction about her life expectancy: she had just been diagnosed with metastatic esophageal cancer at age 43, after all. But she probably wouldn't have wanted to know that at such a tender and sensitive time.

If a physician brought up AI when you are looking for a warmer, more personal touch, it might actually have the opposite and worse effect, Deighton said. (She's doing well now; her scans have turned up no evidence of disease since 2015.)

Harvard's Cohen said he wants to see hospital systems, clinicians, and AI manufacturers come together for a thoughtful discussion about whether they should be disclosing the use of these tools to patients, and if we're not doing that, then the question is why aren't we telling them about this when we tell them about a lot of other things, he said.

Cohen said he worries that uptake of and trust in AI and machine learning could plummet if patients were to find out, after the fact, that there's a rash of this being used without anyone ever telling them.

That's a scary thing, he said, if you think this is the way the future is going to go.

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.

More:

An invisible hand: Patients aren't being told about the AI systems advising their care - STAT

AI can automatically rewrite outdated text in Wikipedia articles – Engadget

The machine learning-based system is trained to recognize the differences between a Wikipedia article sentence and a claim sentence with updated facts. If it sees any contradictions between the two sentences, it uses a "neutrality masker" to pinpoint both the contradictory words that need deleting and the ones it absolutely has to keep. After that, an encoder-decoder framework determines how to rewrite the Wikipedia sentence using simplified representations of both that sentence and the new claim.
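The neutrality masker itself is a learned neural component, but the step it performs, deciding which words of the outdated sentence clash with the new claim and must be rewritten while the rest is kept, can be mimicked very crudely with token sets. The snippet below is purely a toy stand-in, not the MIT system.

    # Toy stand-in for the masking step: words of the outdated sentence that do not
    # appear in the updated claim are marked for rewriting; the rest are kept.
    outdated = "the tower is 300 metres tall".split()
    claim = "the tower is 330 metres tall".split()

    keep = set(claim)
    masked = [word if word in keep else "[MASK]" for word in outdated]
    print(" ".join(masked))   # the tower is [MASK] metres tall
    # An encoder-decoder model would then fill the masked slot using the claim.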

The system can also be used to supplement datasets meant to train fake news detectors, potentially reducing bias and improving accuracy.

As-is, the technology isn't quite ready for prime time. Humans rating the AI's accuracy gave it average scores of 4 out of 5 for factual updates and 3.85 out of 5 for grammar. That's better than other systems for generating text, but that still suggests you might notice the difference. If researchers can refine the AI, though, this might be useful for making minor edits to Wikipedia, news articles (hello!) or other documents in those moments when a human editor isn't practical.

Continued here:

AI can automatically rewrite outdated text in Wikipedia articles - Engadget

Beyond the AI hype cycle: Trust and the future of AI – MIT Technology Review

There's no shortage of promises when it comes to AI. Some say it will solve all problems, while others warn it will bring about the end of the world as we know it. Both positions regularly play out in Hollywood plotlines like Westworld, Carbon Black, Minority Report, Her, and Ex Machina. Those stories are compelling because they require us as creators and consumers of AI technology to decide whether we trust an AI system or, more precisely, trust what the system is doing with the information it has been given.

This content was produced by Nuance. It was not written by MIT Technology Review's editorial staff.

Joe Petro is CTO at Nuance.

Those stories also provide an important lesson for those of us who spend our days designing and building AI applications: trust is a critical factor for determining the success of an AI application. Who wants to interact with a system they don't trust?

Even as a nascent technology, AI is incredibly complex and powerful, delivering benefits by performing computations and detecting patterns in huge data sets with speed and efficiency. But that power, combined with black box perceptions of AI and its appetite for user data, introduces a lot of variables, unknowns, and possible unintended consequences. Hidden within practical applications of AI is the fact that trust can have a profound effect on the user's perception of the system, as well as of the associated companies, vendors, and brands that bring these applications to market.

Advancements such as ubiquitous cloud and edge computational power make AI more capable and effective while making it easier and faster to build and deploy applications. Historically, the focus has been on software development and user-experience design. But it's no longer a case of simply designing a system that solves for x. It is our responsibility to create an engaging, personalized, frictionless, and trustworthy experience for each user.

The ability to do this successfully is largely dependent on user data. System performance, reliability, and user confidence in AI model output are affected as much by the quality of the model design as by the data going into it. Data is the fuel that powers the AI engine, virtually converting the potential energy of user data into kinetic energy in the form of actionable insights and intelligent output. Just as filling a Formula 1 race car with poor or tainted fuel would diminish performance, and the driver's ability to compete, an AI system trained with incorrect or inadequate data can produce inaccurate or unpredictable results that break user trust. Once broken, trust is hard to regain. That is why rigorous data stewardship practices by AI developers and vendors are critical for building effective AI models as well as creating customer acceptance, satisfaction, and retention.

Responsible data stewardship establishes a chain of trust that extends from consumers to the companies collecting user data and those of us building AI-powered systems. It's our responsibility to know and understand privacy laws and policies and to consider security and compliance during the primary design phase. We must have a deep understanding of how the data is used and who has access to it. We also need to detect and eliminate hidden biases in the data through comprehensive testing.

Treat user data as sensitive intellectual property (IP). It is the proprietary source code used to build AI models that solve specific problems, create bespoke experiences, and achieve targeted desired outcomes. This data is derived from personal user interactions, such as conversations between consumers and call agents, doctors and patients, and banks and customers. It is sensitive because it creates intimate, highly detailed digital user profiles based on private financial, health, biometric, and other information.

User data needs to be protected and used as carefully as any other IP, especially for AI systems in highly regulated industries such as health care and financial services. Doctors use AI speech, natural-language understanding, and conversational virtual agents created with patient health data to document care and access diagnostic guidance in real time. In banking and financial services, AI systems process millions of customer transactions and use biometric voiceprint, eye movement, and behavioral data (for example, how fast you type, the words you use, which hand you swipe with) to detect possible fraud or authenticate user identities.

Health-care providers and businesses alike are creating their own branded digital front door that provides efficient, personalized user experiences through SMS, web, phone, video, apps, and other channels. Consumers also are opting for time-saving real-time digital interactions. Health-care and commercial organizations rightfully want to control and safeguard their patient and customer relationships and data in each method of digital engagement to build brand awareness, personalized interactions, and loyalty.

Every AI vendor and developer not only needs to be aware of the inherently sensitive nature of user data but also of the need to operate with high ethical standards to build and maintain the required chain of trust.

Here are key questions to consider:

Who has access to the data? Have a clear and transparent policy that includes strict protections such as limiting access to certain types of data, and prohibiting resale or third-party sharing. The same policies should apply to cloud providers or other development partners.

Where is the data stored, and for how long? Ask where the data lives (cloud, edge, device) and how long it will be kept. The implementation of the European Union's General Data Protection Regulation, the California Consumer Privacy Act, and the prospect of additional state and federal privacy protections should make data storage and retention practices top of mind during AI development.

How are benefits defined and shared? AI applications must also be tested with diverse data sets to reflect the intended real-world applications, eliminate unintentional bias, and ensure reliable results.

How does the data manifest within the system? Understand how data will flow through the system. Is sensitive data accessed and essentially processed by a neural net as a series of 0s and 1s, or is it stored in its original form with medical or personally identifying information? Establish and follow appropriate data retention and deletion policies for each type of sensitive data.

Who can realize commercial value from user data? Consider the potential consequences of data-sharing for purposes outside the original scope or source of the data. Account for possible mergers and acquisitions, possible follow-on products, and other factors.

Is the system secure and compliant? Design and build for privacy and security first. Consider how transparency, user consent, and system performance could be affected throughout the product or service lifecycle.
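
To make the retention and deletion question concrete, here is a minimal sketch of how a per-category retention policy might be encoded and enforced. The data categories and retention windows below are hypothetical assumptions for illustration, not recommendations from the article or any regulation.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical retention windows per data category (illustrative, not a legal standard).
RETENTION = {
    "call_transcript": timedelta(days=90),
    "voiceprint": timedelta(days=365),
    "clinical_note_draft": timedelta(days=30),
}

def is_expired(category: str, created_at: datetime, now: Optional[datetime] = None) -> bool:
    """Return True when a record of this category has outlived its retention window."""
    now = now or datetime.now(timezone.utc)
    return now - created_at > RETENTION[category]

# A 100-day-old call transcript would be flagged for deletion under this policy.
created = datetime.now(timezone.utc) - timedelta(days=100)
print(is_expired("call_transcript", created))  # True
```

In practice a scheduled job would sweep stored records by category and delete or anonymize anything the policy flags, with the sweep itself logged for compliance.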

Biometric applications help prevent fraud and simplify authentication. HSBC's VoiceID voice-biometrics system has prevented the theft of nearly £400 million (about $493 million) by phone scammers in the UK. It compares a person's voiceprint against thousands of individual speech characteristics in an established voice record to confirm a user's identity. Other companies use voice biometrics to validate the identities of remote call-center employees before they can access proprietary systems and data. The need for such measures is growing as consumers conduct more digital and phone-based interactions.
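
As a rough illustration of the matching step (not HSBC's actual method), voiceprint verification can be sketched as comparing an embedding of the new utterance against the enrolled voiceprint and accepting the caller above a similarity threshold. The random embeddings and threshold below are invented for the example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(enrolled: np.ndarray, utterance: np.ndarray, threshold: float = 0.8) -> bool:
    """Accept the caller only if the new utterance embedding is close enough to the enrolled voiceprint."""
    return cosine_similarity(enrolled, utterance) >= threshold

# Toy 256-dimensional embeddings stand in for real voiceprint features.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)
same_caller = enrolled + rng.normal(scale=0.1, size=256)  # small perturbation of the same voice
print(verify_speaker(enrolled, same_caller))  # very likely True
```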

Intelligent applications deliver secure, personalized, digital-first customer service. A global telecommunications company is using conversational AI to create consistent, secure, and personalized customer experiences across its large and diverse brand portfolio. With customers increasingly engaging across digital channels, the company looked to technology partners to expand its own in-house expertise while ensuring it would retain control of its data in deploying a virtual assistant for customer service.

A top-three retailer uses voice-powered virtual-assistant technology to let shoppers upload photos of items they've seen offline, then presents items for them to consider buying based on those images.

Ambient AI-powered clinical applications improve health-care experiences while alleviating physician burnout. EmergeOrtho in North Carolina is using the Nuance Dragon Ambient eXperience (DAX) application to transform how its orthopedic practices across the state engage with patients and document care. The ambient clinical intelligence telehealth application accurately captures each doctor-patient interaction in the exam room or on a telehealth call, then automatically updates the patient's health record. Patients get the doctor's full attention, while the application streamlines the burnout-causing electronic paperwork physicians must complete to get paid for delivering care.

AI-driven diagnostic imaging systems ensure that patients receive necessary follow-up care. Radiologists at multiple hospitals use AI and natural language processing to automatically identify and extract recommendations for follow-up exams for suspected cancers and other diseases seen in X-rays and other images. The same technology can help manage a surge of backlogged and follow-up imaging as covid-19 restrictions ease, allowing providers to schedule procedures, begin revenue recovery, and maintain patient care.
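
A hedged sketch of that extraction task: the production systems described above use trained NLP models, but even simple pattern matching over report sentences shows the shape of the problem. The trigger phrases and sample report below are illustrative assumptions.

```python
import re

# Hypothetical trigger phrases; a real system would rely on a trained NLP model.
FOLLOW_UP_PATTERNS = [
    r"recommend(?:ed|s)?\s+follow[- ]?up",
    r"repeat (?:ct|mri|x-ray|imaging)",
    r"further evaluation (?:is )?(?:recommended|advised)",
]

def extract_follow_ups(report: str) -> list:
    """Return sentences from a radiology report that appear to recommend follow-up care."""
    sentences = re.split(r"(?<=[.!?])\s+", report)
    return [s for s in sentences
            if any(re.search(p, s, flags=re.IGNORECASE) for p in FOLLOW_UP_PATTERNS)]

report = ("Small nodule in the right upper lobe. "
          "Recommend follow-up CT in 6 months to assess for growth.")
print(extract_follow_ups(report))  # ['Recommend follow-up CT in 6 months to assess for growth.']
```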

As digital transformation accelerates, we must solve the challenges we face today while preparing for an abundance of future opportunities. At the heart of that effort is the commitment to building trust and data stewardship into our AI development projects and organizations.

See more here:

Beyond the AI hype cycle: Trust and the future of AI - MIT Technology Review

Scaling AI While Navigating the Current Uncertainty – Dice Insights

The amount of uncertainty and complexity that recent economic difficulties have introduced into the business landscape has left many businesses reeling. While trying to adjust to the new normal, businesses are under pressure to find new efficiencies and discover previously untapped sources of economic opportunity, making A.I. and machine learning models more important than ever for making critical and often time-sensitive business decisions.

The time for A.I. experimentation is over. We have arrived at the point where A.I. has to produce results and drive real revenue, while safeguarding the business from all of the potential risks that can jeopardize the bottom line. This expectation only becomes more challenging at a time when data is changing by the hour and previous historical patterns are no longer reliable. Furthermore, the complexities compound as businesses decide to rely more on A.I. in these trying times as a way to stay ahead of the competition.

Newly emerging best practices, commonly referred to as MLOps (ML Operations), underpinned by a new layer of technologies with the same name, are the missing piece of the puzzle for many organizations looking to fast-track and scale their A.I. capabilities without putting their businesses at risk during this time of economic uncertainty. With MLOps technology and practices in place, businesses can bridge the inherent gap between data and operations teams, and get a scalable and governed means to deploy and manage A.I. applications in real-world production environments.

MLOps can be broken down into four key areas of the process required to derive value from machine learning: model deployment, model monitoring, model lifecycle management, and model governance. Each must be well resourced and well understood to work in your business.

With MLOps, the goal is to make model deployment easy, regardless of which platform or language models were created in, or where they eventually need to run. MLOps essentially serves as an automation and abstraction layer: data teams point their models at it, the models are then managed by MLOps or Ops teams, and everyone gets role-based visibility and actions based on the needs of your organization.

Removing ownership of production environments from the data teams, while still giving them the visibility they need, takes a lot of work off their plates and frees them up to do their jobs: solving complex business problems using data.
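
A minimal sketch of what that abstraction layer might look like, assuming a simple in-process registry rather than any particular MLOps product: data teams register a predict callable for whatever framework they used, and operations serve every model through one interface.

```python
from typing import Any, Callable, Dict

class ModelRegistry:
    """Sketch of a deployment abstraction layer: data teams register a predict callable
    for any framework, and the Ops side serves every model through one interface."""

    def __init__(self) -> None:
        self._models: Dict[str, Callable[[Any], Any]] = {}

    def register(self, name: str, version: str, predict_fn: Callable[[Any], Any]) -> None:
        self._models[f"{name}:{version}"] = predict_fn

    def predict(self, name: str, version: str, payload: Any) -> Any:
        return self._models[f"{name}:{version}"](payload)

registry = ModelRegistry()
# A stand-in "model": in practice this could wrap scikit-learn, XGBoost, TensorFlow, etc.
registry.register("churn", "1.0", lambda features: sum(features) > 1.5)
print(registry.predict("churn", "1.0", [0.9, 0.8]))  # True
```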

To ensure that visibility, and to remove the unnecessary risk of models going haywire, MLOps solutions need to provide monitoring designed from the ground up for ML models. Such monitoring covers data drift, concept drift, feature importance, and model accuracy, as well as overall service health, coupled with proactive alerts sent to stakeholders over channels such as email, Slack, and PagerDuty, based on severity and role. With MLOps monitoring in place, teams can deploy and manage thousands of models, and businesses will be ready to scale production A.I.
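
One way such drift monitoring could be sketched, assuming a two-sample Kolmogorov-Smirnov test as the drift signal (other statistics, such as population stability index, are equally common); the threshold, simulated data, and alert message below are illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(training_values, live_values, p_threshold: float = 0.01) -> bool:
    """Flag drift when a two-sample KS test says the live feature distribution no longer
    resembles the training distribution (the threshold here is purely illustrative)."""
    _, p_value = ks_2samp(training_values, live_values)
    return p_value < p_threshold

rng = np.random.default_rng(42)
training = rng.normal(loc=0.0, size=5000)
live = rng.normal(loc=0.6, size=5000)  # simulated shift in the incoming data
if check_feature_drift(training, live):
    print("ALERT: data drift detected; notify the model owner")  # stand-in for Slack/PagerDuty routing
```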

MLOps recognizes that models need to be updated frequently and seamlessly. Model lifecycle management supports the testing and warm-up of replacement models, A/B testing of new models against older versions, seamless rollout of updates, failover procedures, and full version control for simple rollback to prior model versions, all wrapped in configurable approval workflows.
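
A toy sketch of that champion/challenger pattern with a rollback switch; the traffic split and the stand-in lambda models are assumptions for illustration, not a specific product's API.

```python
import random

class ModelRollout:
    """Toy champion/challenger rollout: a fraction of traffic goes to the replacement
    model version, and a rollback sends everything back to the current champion."""

    def __init__(self, champion, challenger, challenger_share: float = 0.1) -> None:
        self.champion = champion
        self.challenger = challenger
        self.challenger_share = challenger_share

    def predict(self, x):
        model = self.challenger if random.random() < self.challenger_share else self.champion
        return model(x)

    def rollback(self) -> None:
        """Route all traffic back to the champion version."""
        self.challenger_share = 0.0

rollout = ModelRollout(champion=lambda x: x * 2, challenger=lambda x: x * 2.1)
print(rollout.predict(10))  # 20 or 21.0, depending on which version handled the request
rollout.rollback()
```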

MLOps provides the integrations and capabilities you need to ensure consistent, repeatable, and reportable processes for your models in production. Key capabilities include access control for production models and systems, such as integration with LDAP and role-based access control (RBAC) systems, as well as approval flows, logging, storage of each version of each model, and traceability of results for legal and regulatory compliance.
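
As a hedged illustration of approval flows and traceability (again, not any specific product's API), a deployment request might require sign-off from defined roles and keep a timestamped audit trail; the role names here are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

REQUIRED_APPROVERS = {"ml_lead", "compliance_officer"}  # hypothetical roles

@dataclass
class DeploymentRequest:
    model: str
    version: str
    approvals: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def approve(self, role: str) -> None:
        if role not in REQUIRED_APPROVERS:
            raise PermissionError(f"{role} may not approve deployments")
        self.approvals.add(role)
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), role, "approved"))

    def can_deploy(self) -> bool:
        """Deployment proceeds only once every required role has signed off."""
        return self.approvals == REQUIRED_APPROVERS

request = DeploymentRequest("fraud_detector", "2.3")
request.approve("ml_lead")
request.approve("compliance_officer")
print(request.can_deploy())  # True, with a timestamped audit trail in request.audit_log
```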

With the right processes, tools, and training in place, businesses will be able to reap many benefits from MLOps. It'll provide insight into areas where the data might be skewed. One of the many frustrating parts of running A.I. models, especially right now, is that the data is constantly shifting. With MLOps, businesses can quickly identify and act on that information to retrain production models on newer data, using the same data pipeline, algorithms, and code leveraged to create the original.

Users can also scale production while minimizing risk. Scaling A.I. across the enterprise is easier said than done: there can be numerous roadblocks in the way, such as a lack of communication between the IT and data science teams, or a lack of visibility into A.I. outcomes. With MLOps, you can support multiple types of machine learning models created by different tools, as well as the software dependencies those models need.

Sivan Metzger is Managing Director, MLOps and Governance at DataRobot.


Visit link:

Scaling AI While Navigating the Current Uncertainty - Dice Insights

Samsung wants to spend $1bn on AI – TrustedReviews

Samsung is ready to make an even bigger splash in the artificial intelligence space in 2017, beyond introducing its upcoming "Bixby" AI assistant.

The South Korean company has earmarked around $1 billion for AI acquisitions, according to a Samsung US employee quoted in recent reports.


AI is an area on which Samsung is placing increasing emphasis. It was only last year that the company acquired AI specialist Viv Labs, and its soon-to-be-released virtual assistant is expected to be based on that company's technology.

Bixby, which is likely to launch on the Galaxy S8, will support eight languages at launch, putting it ahead of Google Assistant, which currently supports five.


One tipped feature that has a lot of potential to cause a stir is the ability for S8 users to scan objects using the onboard camera and receive useful information as feedback.

Smartphones won't be the only devices to benefit from Bixby, with the company explaining last November that the plan is to bring voice assistant services to connected home appliances and wearables.

All the rhetoric from Samsung points to a concerted push into everything AI, and it would hardly be a surprise to see an Amazon Echo/Google Home rival in the near future.



Go here to see the original:

Samsung wants to spend $1bn on AI - TrustedReviews