Wit.ai is shutting down Bot Engine as Facebook rolls NLP into its updated Messenger Platform – TechCrunch

Wit.ai announced this morning in a blog post that it would be sunsetting its Bot Engine. The Facebook-owned company builds developer tools for natural language processing to help engineers build speech and text chatbots faster and with less technical experience.

Bot Engine launched as a beta in April of 2016. The tool let developers train their own bots using sample conversations sharing a similar structure. These examples could stand in for real conversations and be updated with authentic conversation logs for fine-tuning. The goal was to allow a flexible bot to be trained on dozens of conversations, instead of millions.

But the Engine was designed for text-only interactions, and the bot ecosystem has matured so quickly that the technology has become outdated. The team notes that Messenger and other platforms have been adding new means of interaction beyond text. All of this is for the better, but it hasn't been kind to the value-add of Bot Engine.

Wit.ai says that more than 100,000 developers use its services. But within that group, 90 percent of API calls are going to Wit's NLP API. Bot Engine and the Stories UI will stay alive until February 1, 2018 to give developers time to migrate their apps.

Coincidentally, Facebook announced today it is integrating natural language processing tools into Messenger Platform 2.1. With the update, developers can extract from messages information like date, time, location, amount of money, phone number and email. Wit.ai can then be used to customize the capabilities of the Messenger Platform's NLP integration.
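
For developers migrating off Bot Engine, the NLP side of Wit.ai remains reachable over plain HTTP. Here is a minimal sketch in Python; the access token is a placeholder, and the exact parameters and response fields depend on the API version you target:

```python
# Minimal sketch: query Wit.ai's NLP endpoint for entities in a message.
# WIT_TOKEN is a placeholder; the response schema depends on the API version.
import requests

WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"

def extract_entities(text):
    resp = requests.get(
        "https://api.wit.ai/message",
        params={"q": text},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    )
    resp.raise_for_status()
    # Entities such as datetime, location or amount_of_money come back here.
    return resp.json().get("entities", {})

if __name__ == "__main__":
    print(extract_entities("Meet me tomorrow at 3pm in Menlo Park"))
```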

Read more here:

Wit.ai is shutting down Bot Engine as Facebook rolls NLP into its updated Messenger Platform - TechCrunch

Tesla's AI Chips Are Rolling Out, But They Aren't A Self-Driving Panacea – Forbes

Tesla has opted to design and deploy its own AI chips, a strategy to achieve true self-driving car capabilities, but questions still remain.

According to several media reports, the new AI chips Tesla devised to achieve true self-driving car status have begun rolling out to older Tesla models that require retrofitting to replace the prior on-board processors.

Unfortunately, there has been some misleading reporting about those chips, a special type of AI computer processor that extensively supports Artificial Neural Networks (ANN), commonly referred to as Machine Learning (ML) or Deep Learning (DL).

Before I explore the over-hyped reporting, let me clarify that these custom-developed AI chips devised by Tesla engineers are certainly admirable, and the computer hardware design team deserves to be proud of what they have done. Kudos for their impressive work.

But such an acknowledgement does not imply that they have somehow achieved a singularity marvel in AI, nor does it mean they have miraculously solved the real-world problem of how to attain a true self-driving, driverless car.

Not by a long shot.

And yet many in the media seem to think so, and at times have implied, in a wide-eyed, overzealous way, that Tesla's new computer processors have seemingly reached a nirvana of finally getting us to fully autonomous cars.

That's just not the case.

Time to unpack the matter.

Important Context About AI Chips

First, let's clarify what an AI chip consists of.

A conventional computer contains a core processor, or chip, that does the system's work when you invoke your word processor or spreadsheet or load and run an app of some kind.

In addition, most modern computers also have GPUs, or Graphics Processing Units, an additional set of processors or chips that aid the core processor by taking on the task of displaying the visual graphics and animation you might see on the screen of your device, such as the display of a desktop PC, a laptop or a smartphone.

When computers started being used for Machine Learning and Deep Learning, it was realized that, rather than relying on the normal core processors, the GPUs actually tended to be better suited for ML and DL tasks.

This is because, by and large, the implementation of Artificial Neural Networks in today's computers is really a massive exercise in numerical computation and linear algebra, and GPUs are generally structured and devised for exactly that kind of number crunching.
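
For readers who like to see the idea concretely, here is a minimal, purely illustrative sketch (plain NumPy on a CPU) of why a neural-network layer boils down to the matrix arithmetic that GPUs, and now AI chips, are built to accelerate:

```python
# Illustrative only: a single fully connected neural-network layer is
# essentially one big matrix multiply plus a nonlinearity -- exactly the
# kind of numeric crunching GPUs and AI accelerators are designed for.
import numpy as np

rng = np.random.default_rng(0)
batch, in_dim, out_dim = 64, 1024, 512

x = rng.standard_normal((batch, in_dim))    # a batch of input vectors
W = rng.standard_normal((in_dim, out_dim))  # learned weights
b = np.zeros(out_dim)                       # learned biases

activations = np.maximum(x @ W + b, 0.0)    # matrix multiply + ReLU
print(activations.shape)                    # (64, 512)
```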

AI developers that rely upon ML/DL computer-based neural networks fell in love with GPUs, using them for something they were not originally envisioned for but that happens to be a good marriage anyway.

Once it became apparent that souped-up GPUs would help advance today's kind of AI, the chip developers realized there was huge market potential for their processors, and that it merited tweaking GPU designs to more closely fit the ML/DL task.

Tesla had initially opted to use off-the-shelf specialized GPU chips made by NVIDIA for its in-car, on-board processing, the Tesla version of ADAS (Advanced Driver-Assistance System), including and especially its so-called Tesla AutoPilot (a name that has generated controversy for being misleading about the driverless functionality actually available to date in its so-equipped FSD, or Full Self-Driving, cars).

In April of this year, Elon Musk and his team unveiled a set of proprietary AI chips that were secretly developed in-house by Tesla (rumors about the effort had been floating for quite a while), and the idea was that the new chips would replace the use of the in-car NVIDIA processors.

The unveiling of the new AI chips was a key portion of the Investor Autonomy Day event that Tesla used as a forum to announce the future plans of their hoped-for self-driving driverless capability.

Subsequently, in late August, a presentation was made by Tesla engineers depicting additional details about their custom-designed AI chips, doing so at the annual Hot Chips conference sponsored by the IEEE that focuses on high performance computer processors.

Overall media interest about the Tesla AI chips was reinvigorated by the presentation and likewise further stoked by the roll-out that has apparently now gotten underway.

One additional important point: most people refer to these kinds of processors as AI chips, which I'll do likewise for ease of discussion herein, but please do not be lulled into believing that these specialized processors are actually fulfilling the long-sought goal of being able to have full Artificial Intelligence in all of its intended facets.

At best, these chips or processors are simulating relatively shallow, mathematically inspired aspects of what might be called neural networks, but it isn't at all anything akin to a human brain. There isn't any human-like reasoning or common-sense capability involved in these chips. They are merely computationally enhanced numeric calculating devices.

Brouhaha About Tesla's New Chips

In quick recap, Tesla opted to replace the NVIDIA chips and did so by designing and now deploying their own Tesla-designed chips (the chips are being manufactured for Tesla by Samsung).

Lets consider vital questions about the matter.

Did it make sense for Tesla to have gone on its own to make specialized chips, or would it have been better off continuing to use someone else's off-the-shelf specialized chips?

On a comparison basis, how are the Tesla custom chips different from, or the same as, off-the-shelf specialized chips that do roughly the same thing?

What do the AI chips achieve in terms of aiming for becoming true self-driving cars?

And so on.

Here are some key thoughts on these matters:

Hardware-Only Focus

It is crucial to realize that discussing these AI chips is only a small part of a bigger picture, since the chips are a hardware-only focused element.

You need software, really good software, in order to arrive at a true self-driving car.

As an analogy, suppose someone comes out with a new smartphone that is incompatible with the thousands upon thousands of apps in the marketplace. Even if the smartphone is super-fast, you have the rather more daunting issue that there aren't any apps for the new hardware.

Media salivating over the Tesla AI chips is missing the boat on asking about the software needed to arrive at driverless capabilities.

I'm not saying that having good hardware is unimportant (it is important), but I think we all now know that hardware is only part of the battle.

The software to do true AI self-driving is the 500-pound gorilla.

There has yet to be any publicly revealed indication that the software for achieving true self-driving by Tesla has been crafted.

As I previously reported, the AI team at Tesla has been restructured and revamped, presumably in an effort to gain added traction towards the goal of having a driverless car, but so far no new indication has demonstrated that the vaunted aim is imminent.

Force-fit Of Design

If you were going to design a new AI chip, one approach would be to sit down and come up with all of the vital things you'd like to have the chip do.

You would blue sky it, starting with a blank sheet, aiming to stretch the AI boundaries as much as feasible.

For Tesla, the hardware engineers were actually handed a circumstance that imposed a lot of severe constraints on what they could devise.

They had to keep the electrical power consumption within a boundary dictated by the prior designs of the Tesla cars; otherwise, the Teslas already in the marketplace would have to undergo a major retrofit to allow for a more power-hungry set of processors. That would be costly and economically infeasible. Thus, right away the new AI chip would be hampered by how much power it could consume.

The new processors would have to fit into the physical space as already set aside on existing Tesla cars, meaning that the size and shape of the on-board system boards and computer box would have to abide by a strict form factor.

And so on.

This is oftentimes the downside of being a first-mover into a market.

You come out with a product when few others have something similar, it gains some success, and then you need to advance the product as the marketplace evolves, yet you are also trapped by the need to stay backward-compatible with what you already did.

Those that come along after your product is already underway have the latitude of not being ensnared by what came before, sometimes allowing them to out-perform by working from an open slate.

An example of later entrants leapfrogging first movers is the rapid success of Uber and Lyft and the ridesharing phenomenon. The newer entrants ignored existing constraints faced by taxis and cabs, allowing the brazen upstarts to eclipse those that were hampered by the past (rightly or wrongly so).

Being first in something is not necessarily always the best, and sometimes those that come along later on can move in a more agile way.

Don't misinterpret my remarks to imply that for self-driving cars you can wildly design AI chips in whatever manner you fancy. Obviously, there are going to be size, weight, power consumption, cooling, cost, and other factors that limit what can sensibly and appropriately fit into a driverless car.

Improper Comparisons

One of my biggest beefs about the media reporting has been the willingness to fall into a misleading and improper comparison of the Tesla AI chips to other chips.

Comparing the new with the old is not especially helpful, though it sounds exciting when you do so, and instead the comparison should be with what else currently exists in the marketplace.

Here's what I mean.

Most keep saying that the Tesla AI chips are many times faster than the NVIDIA chips Tesla previously used (but they ought to be comparing them to NVIDIA's newer chips), implying that Tesla made a breathtaking breakthrough in this kind of technology, often quoting the number of trillions of operations per second, known as TOPS.

I won't inundate you with the details herein, but suffice to say that the Tesla AI chips' TOPS performance is either on par with other alternatives in the marketplace, in some ways less so, and in selective other ways somewhat better, but it is not a hit-it-out-of-the-ballpark revelation.

Bottom-line: I ask that the media stop making inappropriate comparisons between the Tesla AI chips and the NVIDIA chips Tesla previously used. It just doesn't make sense, it is misleading to the public, it is unfair, and it really shows ignorance about the topic.

Another pet peeve is the tossing around of big numbers to impress the non-initiated, such as touting that the Tesla AI chips consist of 6 billion transistors.

Oh my gosh, 6 billion seems like such a large number and implies something gargantuan.

Well, there are GPUs that already have 20 billion transistors.

I'm not denigrating the 6 billion; I'm only trying to point out that those quoting the 6 billion do so without offering any viable context and therefore imply something that isn't really the case.

For those readers that are hardware types, I know and you know that trying to make a comparison by the number of transistors is a rather problematic exercise anyway, since it can be an apples-to-apples or an apples-to-oranges kind of comparison, depending upon what the chip is designed to do.

First Gen Is Dicey

Anybody that knows anything about chip design can tell you that the first generation of a newly devised chip is oftentimes a rocky road.

There can be a slew of latent errors or bugs (if you prefer, we can be gentler in our terminology and refer to those aspects as quirks or the proverbial tongue-in-cheek "hidden features").

Like the first version of any new product, the odds are that it will take a shakeout period to ferret out what might be amiss.

In the case of chips, since the design is etched into silicon and not readily changeable, software patches are sometimes used to deal with hardware issues, and then in later versions of the chip you might make the needed hardware alterations and improvements.

This brings up the point that by choosing to make its own AI chips, rather than using an off-the-shelf approach, Tesla puts itself in the unenviable position of having a first gen and needing to figure out on its own whatever gaffes those new chips might have.

Typically, an off-the-shelf commercially available chip is going to have not just the original maker looking at it, but will also have those that are buying and incorporating the processor into their systems looking at it too. The more eyes, the better.

The Tesla proprietary chips are presumably only being scrutinized and tested by Tesla alone.

Proprietary Chip Woes

Using your own self-designed chips has a lot of other considerations worth noting.

At Tesla, a significant amount of cost and attention would have been devoted to devising the AI chips.

Was that cost worth it?

Was the diverted attention that might have gone to other matters a lost opportunity cost?

Plus, Tesla not only had to bear the original design cost, they will have to endure the ongoing cost to upgrade and improve the chips over time.

This is not a one-time only kind of matter.

It would seem unlikely and unwise for Tesla to sit on this chip and not advance it.

Advances in AI chips are moving at a lightning-like pace.

There are also labor pool considerations.

Having a proprietary chip usually means that you have to grow your own specialists to be able to develop the specialized software for it. You cannot readily find those specialists in the marketplace per se, since they won't know your proprietary stuff, whereas when you use a commercial off-the-shelf chip, the odds are that you can find expert labor for it, since there is an ecosystem surrounding the off-the-shelf processor.

I am not saying that Tesla was mistaken per se to go the proprietary route, and only time will tell whether it was a worthwhile bet.

By having their own chip, they can potentially control their own destiny, not be dependent upon an off-the-shelf chip made by someone else, and not be forced down the path of the off-the-shelf chip maker. The other side of that coin is that they now find themselves squarely in the chip design and upgrade business, in addition to the car-making business.

It's a calculated gamble and a trade-off.

From a cost perspective, it might or might not be a sensible approach, and those that keep trying to imply that the proprietary chip is a lesser cost strategy are likely not including the full set of costs involved.

Be wary of those that make such off-the-cuff cost claims.

Redundancy Assertions

There has been media excitement about how the Tesla AI chips supposedly have a robust redundancy capability, which certainly is essential for a real-time system that involves the life-and-death aspects of driving a car.

So far, the scant details revealed seem to be that there are two identical AI chips running in parallel, and if one chip disagrees with the other, the current assessment of the driving situation and the planned next step are discarded, allowing the next frame to be captured and analyzed.

On the surface, this might seem dandy to those that haven't developed fault-tolerant real-time systems before.

There are serious and somber issues to consider.

Presumably, on the good side, if one of the chips experiences a foul hiccup, it causes the identical chip to be in disagreement, and because the two chips dont agree, the system potentially avoids undertaking an inappropriate action.

But, realize that the ball is simply being punted further down the field, so to speak.

This has downsides.

Suppose the oddball quirk isn't just a single momentary fluke, and instead recurs, over and over.

Does this mean that both chips are going to continually disagree and therefore presumably keep postponing the act of making a driving decision?
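
To see the concern concretely, here is a toy sketch of the compare-and-discard scheme as described; the logic is purely hypothetical and is not Tesla's actual arbitration code:

```python
# Toy illustration of a dual-lockstep "agree or discard" scheme.
# Purely hypothetical logic; not Tesla's published design.
def plan_a(frame):
    # Stand-in for chip A's assessment of the driving situation.
    return frame["proposed_action"]

def plan_b(frame):
    # Stand-in for chip B's assessment, run in parallel on identical hardware.
    return frame["proposed_action"]

def drive_step(frame):
    a, b = plan_a(frame), plan_b(frame)
    if a == b:
        return a      # chips agree: act on the plan
    return None       # chips disagree: discard, wait for the next frame

# If a fault recurs on every frame, every step returns None and the system
# keeps postponing a driving decision -- the concern raised above.
```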

Follow this link:

Tesla's AI Chips Are Rolling Out, But They Aren't A Self-Driving Panacea - Forbes

Why the iManage Acquisition Of AI Company RAVN Is Something To Crow About – Above the Law

Let's face it: document management systems are not very smart. Sure, DMS systems have made major strides in recent years in search, sharing, and ease of use. But when it comes right down to it, a DMS system is primarily a static repository where law firms store their documents and emails.

But what if you could sprinkle some intelligence into your DMS system? What if it could understand your documents and more intuitively organize them? What if it could extract and analyze key portions of documents and relate them to particular practice groups or use cases?

When I first learned that DMS company iManage had acquired U.K.-based artificial intelligence company RAVN Systems, it was for me an "Aha!" moment. How perfect, I thought. The marriage of document management and artificial intelligence could well change how we think about both technologies, turning DMS systems from static repositories to active law-practice tools and driving mainstream acceptance and appreciation of AI in legal.

Last week, I had the opportunity to discuss the acquisition with Sandeep Joshi, iManage's vice president, business and corporate development. I asked him what he thinks the integration of AI technology will mean for document management.

"There is the ability now to move this technology from the back room to the front and center of what lawyers do on a day-to-day basis," Joshi said. "The leap in productivity we'll see because of this will look like a hockey stick curve."

I also asked Joshi if he thought the integration of RAVN's technology into a mainstream application such as iManage, which counts 2,200 law firms and corporate legal departments as customers, would drive broader adoption of AI in the legal industry.

It will, he said, but only if the technology is used to address real business problems the legal industry is facing, such as the pressure on both legal departments and their clients to reduce costs. "We have to talk in terms of actual business problems and use cases that this technology can solve."

By way of example, Joshi cites the case earlier this year in which a team of seven investigators in the U.K. government's Serious Fraud Office, using RAVN's technology, was able to sift through 30 million documents at a rate of 60,000 a day to uncover large-scale bribery and corruption involving Rolls-Royce.

iManage has identified four capabilities that RAVN's integration will provide for its customers.

"What AI brings to the table is the ability to light up all this content, to make our customers more content-aware," Joshi said.

Surprisingly, the timetable for implementing this integration is short, and the first stages will be completed and announced within a matter of weeks. This is because iManage and RAVN had already been working together and some integration had already been implemented.

Coincidentally, the two companies share a common past. Before founding RAVN in 2010, its top executives all worked at the former Autonomy, which at the time also owned iManage. In 2011, HP acquired Autonomy and, with it, iManage. But after the Autonomy acquisition turned into a fiasco for HP, iManage's leadership was able to buy out the business in 2015 and restore its original cofounders to the helm.

When iManage separated from HP, it had 155 employees. Through growth and the acquisition of RAVNs 50 employees, iManage is now at 375 people.

"We view this as the start of an incredible journey," Joshi said. "In the coming months and years, we have the ability to really move the pace of productivity gains."

It seems that every conversation about AI in law moves invariably to the question of robots replacing lawyers. Joshi said lawyers should view AI not as a competitor, but as a competitive advantage.

"The interesting thing here is the ability to automate the boring stuff," he said. "Lawyers go to law school to help their clients by providing legal advice. What this tech does is automate the routine, boring stuff. This will actually speed up the work of lawyers. Many of our large law firm customers view this as a competitive advantage."

This is one of those stories that has implications beyond these two companies and their customers. Document management is a technology that many lawyers know and understand. AI is still a technology that many lawyers fear and don't understand. By merging the two technologies, lawyers will see the power of AI to, as Joshi said, light up all that content. That will be a major step forward in making artificial intelligence a no-brainer.

Robert Ambrogi is a Massachusetts lawyer and journalist who has been covering legal technology and the web for more than 20 years, primarily through his blog LawSites.com. Former editor-in-chief of several legal newspapers, he is a fellow of the College of Law Practice Management and an inaugural Fastcase 50 honoree. He can be reached by email at ambrogi@gmail.com, and you can follow him on Twitter (@BobAmbrogi).

Follow this link:

Why the iManage Acquisition Of AI Company RAVN Is Something To Crow About - Above the Law

Five Industries That Will Eliminate Legacy Issues Using AI – Forbes

AI just recently began making its way into the public consciousness, but over the next several years it will step in to transform entire industries. Key to this transformation is AI's unlimited capacity to process and analyze mass amounts of data, and ...

More:

Five Industries That Will Eliminate Legacy Issues Using AI - Forbes

Banks bet on AI for a ‘self-driving’ banking experience – CNBC

Major banks are betting on artificial intelligence (AI) to act like a digital personal assistant to customers, helping to automate money-making decisions, top CEOs in the sector told CNBC, amid the continued threat from new, more nimble entrants into the market.

"Really people don't like banking, it's boring, it takes time, causes them stress, and people have bad financial habits," Carlos Torres Vila, CEO of Spain's BBVA, told CNBC in an interview at the Money 20/20 conference in Copenhagen earlier this week.

"What we can do is leverage data and AI to provide people with peace of mind, really having an almost magical experience that things in their financial life turn out the way they want it. It's almost like a self-driving bank experience."

BBVA is one of the big banks investing heavily in moves to digitize its operations, as customers come to expect more from mobile apps and the way they interact with lenders.

More here:

Banks bet on AI for a 'self-driving' banking experience - CNBC

AI to help world's first removal of space debris – The Next Web

Space is a messy place. An estimated 34,000 pieces of junk over 10 cm in diameter are currently orbiting Earth at around 10 times the speed of a bullet. If one of them hits a spacecraft, the damage could be disastrous.

In September, the International Space Station had to dodge an unknown piece of debris. With the volume of space trash rapidly growing, the chances of a collision are increasing.

The European Space Agency (ESA) wants to clean up some of the mess with the help of AI. In 2025, it plans to launch the world's first debris-removing space mission: ClearSpace-1.

The technology is being developed by Swiss startup ClearSpace, a spin-off from the École Polytechnique Fédérale de Lausanne (EPFL). Their removal target is the now-obsolete Vespa Upper Part, a 100 kg payload adaptor orbiting 660 km above the Earth.


ClearSpace-1 will use an AI-powered camera to find the debris. Its robotic arms will then grab the object and drag it back into the atmosphere, where it will burn up.

"A central focus is to develop deep learning algorithms to reliably estimate the 6D pose (three rotations and three translations) of the target from video sequences, even though images taken in space are difficult," said Mathieu Salzmann, an EPFL scientist spearheading the project. "They can be over- or under-exposed with many mirror-like surfaces."

Vespa hasn't been seen for seven years, so EPFL will use a database of synthetic images to simulate its current appearance as training material for the algorithms.

Once the mission begins, the researchers will capture real-life pictures from beyond the Earth's atmosphere to fine-tune the AI system. The algorithms also need to be transferred to a dedicated hardware platform onboard the capture satellite.

"Since motion in space is well behaved, the pose estimation algorithms can fill the gaps between recognitions spaced one second apart, alleviating the computational pressure," said Professor David Atienza, head of ESL.

"However, to ensure that they can autonomously cope with all the uncertainties in the mission, the algorithms are so complex that their implementation requires squeezing out all the performance from the platform resources."
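
As a rough illustration of what such a pose estimator might look like, here is a minimal sketch in Python; it is not the EPFL/ClearSpace architecture, and a direct 6-value regression head is only one of several ways to parameterize rotation and translation:

```python
# Minimal sketch of a CNN that regresses a 6D pose (3 rotations + 3
# translations) from a single image. Illustrative only; not the actual
# ClearSpace/EPFL pipeline.
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 6)  # 3 rotation params + 3 translations

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PoseNet()
synthetic_batch = torch.randn(8, 1, 128, 128)  # stand-in for rendered target images
print(model(synthetic_batch).shape)            # torch.Size([8, 6])
```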

If the capture is successful, it could pave the way for further debris-removal missions that can make space a safer place.


Follow this link:

AI to help world's first removal of space debris - The Next Web

Scarily Realistic AI Video Software Puts Words in Obama’s Mouth – ScienceAlert

Researchers have developed a new tool, powered by artificial intelligence, that can create realistic-looking videos of speech from any audio clip, and they've demoed the tech by synthesising four artificial videos of Barack Obama saying the same lines.

The tool isn't intended to create a flurry of fake news and put false words in people's mouths, though; it's designed partly as a way to eventually spot forgeries and videos that aren't all they appear to be.

According to the team from the University of Washington, as long as there's an audio source to use, the video can include realistic mouth shapes that are almost perfectly aligned to the words being spoken. Those synthesised shapes can then be grafted onto an existing video of someone talking.

"These type of results have never been shown before," says one of the researchers, Ira Kemelmacher-Shlizerman. "Realistic audio-to-video conversion has practical applications like improving video conferencing for meetings, as well as futuristic ones such as being able to hold a conversation with a historical figure in virtual reality."

"This is the kind of breakthrough that will help enable those next steps."

The video synthesising stages. Credit: University of Washington

There are two parts to the system: first a neural network is trained to watch large volumes of videos to recognise which audio sounds match with which mouth shapes. Then the results are mixed with moving images of a specific person, based on previous research into digital modelling carried out at UW.
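
As a rough, hypothetical sketch of that first stage, the mapping might look something like the following, assuming audio arrives as per-frame feature vectors (such as MFCCs) and mouth shape is represented by a handful of landmark coordinates; this is not the University of Washington team's actual network:

```python
# Illustrative only: a recurrent network that maps a sequence of audio
# feature frames to per-frame mouth landmark coordinates. Not the actual
# University of Washington model; feature and landmark counts are assumptions.
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    def __init__(self, n_audio_feats=13, n_landmarks=18):
        super().__init__()
        self.rnn = nn.LSTM(n_audio_feats, 128, batch_first=True)
        self.out = nn.Linear(128, n_landmarks * 2)  # (x, y) per landmark

    def forward(self, audio_frames):       # (batch, time, n_audio_feats)
        h, _ = self.rnn(audio_frames)
        return self.out(h)                 # (batch, time, n_landmarks * 2)

model = AudioToMouth()
dummy_audio = torch.randn(2, 100, 13)      # 2 clips, 100 frames of MFCC-like features
print(model(dummy_audio).shape)            # torch.Size([2, 100, 36])
```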

The tool is impressively good, as you can see from the demo clips (below), but it needs source audio and video files to work from, and can't generate speeches from thin air. In the future, the researchers say, the AI system could be trained using video from messaging apps, and then used to enhance their quality.

"When you watch Skype or Google Hangouts, often the connection is stuttery and low-resolution and really unpleasant, but often the audio is pretty good," says one of the team, Steve Seitz. "So if you could use the audio to produce much higher-quality video, that would be terrific."

When it comes to spotting fake video, the algorithm used here could be reversed to detect clips that have been doctored, according to the researchers.

You can see the tool in action below:

As you might know from video games and animated movies, scientists are working hard to solve the "uncanny valley" problem, where computer-generated video of someone talking looks almost right but still somehow off-putting.

In this case the AI system does all the heavy lifting when it comes to working out mouth shape, chin position, and the other elements needed to make a clip of someone talking look realistic.

Artificial intelligence excels at machine learning problems like this, where masses of data can be analysed to teach computer systems to do something, whether that's recognising dogs in an image search or producing natural-looking video.

"There are millions of hours of video that already exist from interviews, video chats, movies, television programs and other sources," says lead researcher Supasorn Suwajanakorn. "And these deep learning algorithms are very data hungry, so it's a good match to do it this way."

It's another slightly scary step forward in the quality of digital fakery, similar to Adobe's Project VoCo, which we saw last year: another AI system that can produce new speech out of thin air after studying just 20 minutes of someone talking.

However, this particular neural network has been designed to work with just one individual at a time using authentic audio clips, so you can still trust the footage you see on the news for a while yet.

"We very consciously decided against going down the path of putting other people's words into someone's mouth," says Seitz. "We're simply taking real words that someone spoke and turning them into realistic video of that individual."

The research is being presented at the SIGGRAPH 2017 computer graphics conference and you can read the paper here.

Continue reading here:

Scarily Realistic AI Video Software Puts Words in Obama's Mouth - ScienceAlert

The AI Foundation Reveals Groundbreaking Technology to Drive Positive Social Change with the Power of Your Own AI – Business Wire

LONDON--(BUSINESS WIRE)--The AI Foundation, the leader in AI dedicated to benefit humanity, revealed groundbreaking AI that can accelerate positive change at Future Day during the One Young World Summit. The company showed how the worlds young leaders, activists, and their role models can champion their purpose and scale their impact through their own AI.

On stage at One Young World, The AI Foundation revealed AI versions of Laura Ulloa, a 28-year-old Colombian kidnapping survivor who promotes forgiveness to victims, violent offenders, and the world, Geum Hyok Kim, a 27-year-old North Korean refugee and activist who plans to engage 10k influential North Koreans to advocate for democracy, and Sir Richard Branson, who is dedicated to addressing the problems of the world by counseling young leaders.

The AI Foundation's mission is to help move the world forward through the power of AI. In order to ensure that AI is in service of improving the world, the company has a special structure that places responsible AI practices at its core. The company also has a non-profit, entirely dedicated to research, products and initiatives, under the brand Reality Defender, to mitigate the risks of our inevitable future of living with AI.

"The AI Foundation's vision is to give the seven billion people on the planet their own AI that shares their values and goals," said Dr. Lars Buttler, CEO & co-founder, The AI Foundation. "The future of social change is AI. If everyone has their own AI we can create a better, more just, and more abundant future. We are thrilled to reveal AI of these leaders with incredible messages at the One Young World Summit, where emerging leaders and today's role models come together to tackle humanity's toughest problems."

By giving AI to everyone, The AI Foundation is pioneering what it calls Personal Media: direct one-to-one conversations at limitless scale. Through unlocking limitless direct one-on-one conversations, the AI Foundation believes we can educate, present different points of view, share personal stories, and build more empathy, more awareness, and more collaboration to overcome issues that affect us all.

Laura and Geum Hyok are only two of the incredible delegates that were selected to represent the young leaders with the potential to change the world for the better. The AI Foundation aims to amplify their reach through their own AIs that look, speak, and act like these young leaders, to magnify their positive impact on the world. Their AIs belong to them completely, with their eyes, voices, values, and memories. The AIs speak like them, on their behalf, as they represent the thoughts and experiences of each one of the delegates. Powered by their own AI, these activists can have unlimited one-to-one conversations with people, and ultimately inspire, motivate, and connect with their communities at greater scale.

Today, The AI Foundation's co-founder and CEO, Dr. Lars Buttler, was joined by Twitter's co-founder Biz Stone and co-founder of the One Young World Summit David Jones to present the three AIs and showcase how they can make a bigger impact. The AI Foundation also committed to building AIs for additional delegates so they too can extend the power of their potential and accelerate positive impact.

"Together we can make AI a triumph, not of technology, but of humanity," said Biz Stone, co-founder of Twitter. "I'm inspired by what people do with Twitter, but this is next-level. Having your own AI? The power of your own purpose and principles at scale? Unbelievable. AI will define the next era of social change. But it falls on all of us, and especially on you, to fulfill the true promise of AI, which is to amplify the best traits of humanity. The AI Foundation is the most advanced, and the most socially responsible, leader in the field of Artificial Intelligence."

"The potential of AI is so incredibly huge: it will change business, marketing and the internet itself over the next decade," said David Jones. "It also comes with some real concerns. The big mistake most of today's established tech platforms made was to totally underestimate the potential for their misuse. AI Foundation is the first tech platform with an incredibly responsible approach as its starting point, providing tools to head off potential misuse, including Reality Defender, which allows people to detect deep fakes. And they are setting out to use their technology as a force for good to help people scale their purpose and impact."

One Young World is the global forum that identifies, promotes and connects young leaders to create a better world, with more responsible, more effective leadership. Founded by Kate Robertson and David Jones, the Summit is convening 2,000 delegates from 190+ countries to work alongside political and humanitarian leaders, including former UN Secretary General Ban Ki-moon, Amnesty International Secretary General Kumi Naidoo, President Mary Robinson, Sir Richard Branson, JK Rowling, Ellie Goulding and HRH Meghan Markle. It is the most international event the UK has hosted since the 2012 Olympic and Paralympic Games.

About the AI Foundation

The AI Foundation is dedicated to moving the world forward by giving each of us our own AI that shares our personal values and goals. The company has brought together many of the world's top innovators, AI scientists, engineers, and investors from Silicon Valley, Hollywood and Madison Avenue, all united around the idea of making the power of AI available to all, while protecting us from its dangers. The AI Foundation's non-profit entity focuses on tools for detection and protection, while the AI Foundation's for-profit arm creates foundational technologies, products and services to unlock the full human potential.

Link:

The AI Foundation Reveals Groundbreaking Technology to Drive Positive Social Change with the Power of Your Own AI - Business Wire

Legal Services Innovator Axiom Unveils Its First AI Offering – Above the Law

Since its founding in 2000 as an alternative provider of legal services to large corporations, Axiom has grown to have more than 1,500 lawyers and 2,000 employees across three continents, serving half the Fortune 100. With a reputation for innovation, it describes itself as a provider of tech-enabled legal services.

Given that description, it would seem inevitable that Axiom would bring artificial intelligence into the mix of the services it offers. Now it has. This week, it announced the launch of AxiomAI, a program that aims to leverage AI to improve the efficiency and quality of Axiom's contract work.

AxiomAI has two components, Axiom President Paul Carr told me during an interview Friday. One is research, development and testing of AI tools for legal services. The other is deploying AI tools within Axiom's standard workflows as testing proves them ready.

The first such deployment will come later this month, as Axiom embeds Kira Systems machine-learning contract analysis technology into its M&A diligence and integration offering. In an M&A deal, which can require review of thousands of corporate contracts, Kira automates the process of identifying key provisions such as change of control and assignment.

"In the context of M&As, the AI will be invisible to our clients," Carr said. "They know they have to understand the risks that may be in those agreements. They need someone to sort that out (which agreements apply, what's in them) in a very accurate way. And they need actionable recommendations, very specific recommendations. That's what we deliver today, but we'll deliver it better and faster using AI behind the scenes."
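
For a sense of how machine-learning contract review works in general, here is a toy sketch with made-up clauses; Kira's actual models and training data are proprietary, so this is only an illustration of the underlying idea of training a classifier to flag provisions such as change of control or assignment:

```python
# Toy illustration of clause classification for contract review.
# Not Kira's system; the clauses and labels below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clauses = [
    "Upon a change of control of the Company, this Agreement may be terminated.",
    "Neither party may assign this Agreement without prior written consent.",
    "This Agreement shall be governed by the laws of the State of Delaware.",
    "Payment is due within thirty (30) days of the invoice date.",
]
labels = ["change_of_control", "assignment", "other", "other"]

# A real system would train on many thousands of labeled clauses.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(clauses, labels)

print(model.predict(["The Company may not assign its rights hereunder."]))
```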

Beyond this immediate deployment, AxiomAI will encompass a program of ongoing research and testing of AI's applicability to the delivery of legal services. In fact, it turns out that Axiom has quietly been performing this research for four years, including partnering with leading experts and vendors in the field of machine learning.

"We've been watching this space for a while," Carr said. "We've been testing really actively, running proofs of concept, of various AI tools over the last four years. At a fundamental level, we do believe that for a lot of legal work, AI will have really important applications and will change legal workflows into the future."

The focus of Axiom's AI research is, as Carr put it, all things contracting, from creating single contracts to applying analytics to collections of contracts. And the type of AI on which it is focused is machine learning. "We think the area that is most interesting is machine learning and, specifically, the whole area of deep learning within machine learning."

In the case of Kira, Axiom's testing had demonstrated that the product was ready for deployment. "We felt that the maturity of the technology (which is really code for the ability of the technology to perform at a level that makes economic sense) was such that it makes sense to move it, in a sense, from the lab to production, in a business-as-usual context."

Going forward, Axiom plans to keep testing other AI tools in partnership with leading practitioners in the field. A key benefit Axiom brings to the equation is an enormous collection of contractual data that can be used to train the AI technology.

"We analyze over 10 million pieces of contractual information every year," Carr said. "We have a very powerful data set that we plan to use to train AI technology. What we will certainly do is train and improve that technology with our training data."

The training that is performed using Axiom's data will remain proprietary, and Carr believes that will add greater value for Axiom's customers in the use of these AI tools.

The roadmap for Axiom's research has two tracks, Carr said. One is to explore how to go deeper and further into the M&A offering it's launching this month, in order to train AI tools to do even more of the work. The second is to consider the other use cases to focus on next.

One use case under consideration involves regulatory remediation for banks. Another would assist pharmaceutical companies in the negotiation and execution of clinical trial agreements.

Carr came to Axiom in 2008 from American Express, where he had run its International Insurance Services division and was its global head of strategy. He started his career working on systems integration design. He believes that technological integration takes much longer to achieve than technological innovation.

"You need to put in place the surrounding capabilities that allow you to take advantage of that technology and, not immaterially, you need to go through the process of change management and behavioral change," he said. "In the legal industry, that's a big deal. There's a lot that has to happen for technical innovations to be consumed."

Driving that adoption curve is the heart of Axiom's business, Carr suggests. The best way to do that, the company believes, is to combine people, process and technology in ways that allow the value of the technology to be realized. That is what Axiom now plans to do for AI.

"AI today is like the internet in the late '90s," Carr said. "I have no doubt that in a couple of decades, AI will be embedded in everything that impacts corporate America. But how it unfolds and takes shape is the stage we're in now."

Robert Ambrogi is a Massachusetts lawyer and journalist who has been covering legal technology and the web for more than 20 years, primarily through his blog LawSites.com. Former editor-in-chief of several legal newspapers, he is a fellow of the College of Law Practice Management and an inaugural Fastcase 50 honoree. He can be reached by email at ambrogi@gmail.com, and you can follow him on Twitter (@BobAmbrogi).

View original post here:

Legal Services Innovator Axiom Unveils Its First AI Offering - Above the Law

Announcing Sight Tech Global, an event on the future of AI and accessibility for people who are blind or visually impaired – TechCrunch

Few challenges have excited technologists more than building tools to help people who are blind or visually impaired. It was Silicon Valley legend Ray Kurzweil, for example, who in 1976 launched the first commercially available text-to-speech reading device. He unveiled the $50,000 Kurzweil Reading Machine, a boxy device that covered a tabletop, at a press conference hosted by the National Federation of the Blind.

The early work of Kurzweil and many others has rippled across the commerce and technology world in stunning ways. Today's equivalent of Kurzweil's machine is Microsoft's Seeing AI app, which uses AI-based image recognition to see and read in ways that Kurzweil could only have dreamed of. And it's free to anyone with a mobile phone.

Remarkable leaps forward like that are the foundation for Sight Tech Global, a new, virtual event slated for December 2-3, that will bring together many of the worlds top technology and accessibility experts to discuss how rapid advances in AI and related technologies will shape assistive technology and accessibility in the years ahead.

The technologies behind Microsoft's Seeing AI are on the same evolutionary tree as the ones that enable cars to be autonomous and robots to interact safely with humans. Much of our most advanced technology today stems from that early, challenging mission that top Silicon Valley engineers embraced to teach machines to see on behalf of humans.

From the standpoint of people who experience vision loss, the technology available today is astonishing, far beyond what anyone anticipated even 10 years ago. Purpose-built products like Seeing AI and computer screen readers like JAWS are remarkable tools. At the same time, consumer products, including mobile phones, mapping apps and smart voice assistants, are game changers for everyone, those with sight loss not the least. And yet, that tech bonanza has not come close to breaking down the barriers in the lives of people who still mostly navigate with canes or dogs or sighted assistance, depend on haphazard compliance with accessibility standards to use websites and can feel as isolated as ever in a room full of people.

In other words, we live in a world where a computer can drive a car at 70 MPH without human assistance, but there is not yet any comparable device to help a blind person walk down a sidewalk at 3 MPH. A social media site can identify billions of people in an instant, but a blind person can't readily identify the person standing in front of them. Today's powerful technologies, many of them grounded in AI, have yet to be milled into next-generation tools that are truly useful, happily embraced and widely affordable. The work is underway at big tech companies like Apple and Microsoft, at startups, and in university labs, but no one would dispute that the work is as slow as it is difficult. People who are blind or visually impaired live in a world where, as the science fiction author William Gibson once remarked, "The future is already here; it's just not very evenly distributed."

That state of affairs is the inspiration for Sight Tech Global. The event will convene the top technologists, human-computer interaction specialists, product designers, researchers, entrepreneurs and advocates to discuss the future of assistive technology as well as accessibility in general. Many of those experts and technologists are blind or visually impaired, and the event programming will stand firmly on the ground that no discussion or new product development is meaningful without the direct involvement of that community. Silicon Valley has great technologies, but does not, on its own, have the answers.

The two days of programming on the virtual main stage will be free and available on a global basis both live and on-demand. There will also be a $25 Pro Pass for those who want to participate in specialized breakout sessions, Q&A with speakers and virtual networking. Registration for the show opens soon; in the meantime, anyone interested may request email updates here.

It's important to note that there are many excellent events every year that focus on accessibility, and we respect their many abiding contributions and steady commitment. Sight Tech Global aims to complement the existing event line-up by focusing on hard questions about advanced technologies and the products and experiences they will drive in the years ahead, assuming they are developed hand-in-hand with their intended audience and with affordability, training and other social factors in mind.

In many respects, Sight Tech Global is taking a page from TechCrunch's approach to its AI and robotics events over the past four years, which were in partnership with MIT and UC Berkeley. The concept was to have TechCrunch editors ask top experts in AI and related fields tough questions across the full spectrum of issues around these powerful technologies, from the promise of automation and machine autonomy to the downsides of job elimination and bias in AI-based systems. TechCrunch's editors will be a part of this show, along with other expert moderators.

As the founder of Sight Tech Global, I am drawing on my extensive event experience at TechCrunch over eight years to produce this event. Both TechCrunch and its parent company, Verizon Media, are lending a hand in important ways. My own connection to the community is through my wife, Joan Desmond, who is legally blind.

The proceeds from sponsorships and ticket sales will go to the nonprofit Vista Center for the Blind and Visually Impaired, which has been serving the Silicon Valley area for 75 years. The Vista Center owns the Sight Tech Global event, and its executive director, Karae Lisle, is the event's chair. We have assembled a highly experienced team of volunteers to program and produce a rich, world-class virtual event on December 2-3.

Sponsors are welcome, and we have opportunities available ranging from branding support to content integration. Please email sponsor@sighttechglobal.com for more information.

Our programming work is under way and we will announce speakers and sessions over the coming weeks. The programming committee includes Jim Fruchterman (Benetech / TechMatters), Larry Goldberg (Verizon Media), Matt King (Facebook) and Professor Roberto Manduchi (UC Santa Cruz). We welcome ideas and can be reached via info@sighttechglobal.com

For general inquiries, including collaborations on promoting the event, please contact info@sighttechglobal.com.

Go here to read the rest:

Announcing Sight Tech Global, an event on the future of AI and accessibility for people who are blind or visually impaired - TechCrunch

New AI algorithm monitors sleep with radio waves – The MIT Tech

More than 50 million Americans suffer from sleep disorders, and diseases including Parkinson's and Alzheimer's can also disrupt sleep. Diagnosing and monitoring these conditions usually requires attaching electrodes and a variety of other sensors to patients, which can further disrupt their sleep.

To make it easier to diagnose and study sleep problems, researchers at MIT and Massachusetts General Hospital have devised a new way to monitor sleep stages without sensors attached to the body. Their device uses an advanced artificial intelligence algorithm to analyze the radio signals around the person and translate those measurements into sleep stages: light, deep, or rapid eye movement (REM).

"Imagine if your Wi-Fi router knows when you are dreaming, and can monitor whether you are having enough deep sleep, which is necessary for memory consolidation," says Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, who led the study. "Our vision is developing health sensors that will disappear into the background and capture physiological signals and important health metrics, without asking the user to change her behavior in any way."

Katabi worked on the study with Matt Bianchi, chief of the division of sleep medicine at MGH, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science and a member of the Institute for Data, Systems, and Society at MIT. Mingmin Zhao, an MIT graduate student, is the paper's first author, and Shichao Yue, another MIT graduate student, is also a co-author.

The researchers will present their new sensor at the International Conference on Machine Learning on Aug. 9.

Remote sensing

Katabi and members of her group in MIT's Computer Science and Artificial Intelligence Laboratory have previously developed radio-based sensors that enable them to remotely measure vital signs and behaviors that can be indicators of health. These sensors consist of a wireless device, about the size of a laptop computer, that emits low-power radio frequency (RF) signals. As the radio waves reflect off of the body, any slight movement of the body alters the frequency of the reflected waves. Analyzing those waves can reveal vital signs such as pulse and breathing rate.

"It's a smart Wi-Fi-like box that sits in the home and analyzes these reflections and discovers all of these changes in the body, through a signature that the body leaves on the RF signal," Katabi says.
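
The core intuition can be sketched with simulated data: periodic chest motion modulates the reflected signal, so breathing and heartbeat show up as peaks in its spectrum. The following is illustrative only and bears no relation to the group's actual hardware or signal processing:

```python
# Simulated illustration: periodic chest motion modulates a reflected RF
# signal, so breathing (~0.25 Hz) and heartbeat (~1.2 Hz) appear as peaks
# in its spectrum. Not the MIT system, just the underlying intuition.
import numpy as np

fs = 50.0                        # samples per second
t = np.arange(0, 60, 1 / fs)     # one minute of data
breathing = 1.0 * np.sin(2 * np.pi * 0.25 * t)   # ~15 breaths/min
heartbeat = 0.1 * np.sin(2 * np.pi * 1.20 * t)   # ~72 beats/min
signal = breathing + heartbeat + 0.05 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

breath_band = (freqs > 0.1) & (freqs < 0.7)
heart_band = (freqs > 0.7) & (freqs < 3.0)
breath_hz = freqs[breath_band][np.argmax(spectrum[breath_band])]
heart_hz = freqs[heart_band][np.argmax(spectrum[heart_band])]
print(f"breathing ~{breath_hz * 60:.0f}/min, heart ~{heart_hz * 60:.0f}/min")
```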

Katabi and her students have also used this approach to create a sensor called WiGait that can measure walking speed using wireless signals, which could help doctors predict cognitive decline, falls, certain cardiac or pulmonary diseases, or other health problems.

After developing those sensors, Katabi thought that a similar approach could also be useful for monitoring sleep, which is currently done while patients spend the night in a sleep lab hooked up to monitors such as electroencephalography (EEG) machines.

"The opportunity is very big because we don't understand sleep well, and a high fraction of the population has sleep problems," says Zhao. "We have this technology that, if we can make it work, can move us from a world where we do sleep studies once every few months in the sleep lab to continuous sleep studies in the home."

To achieve that, the researchers had to come up with a way to translate their measurements of pulse, breathing rate, and movement into sleep stages. Recent advances in artificial intelligence have made it possible to train computer algorithms known as deep neural networks to extract and analyze information from complex datasets, such as the radio signals obtained from the researchers' sensor. However, these signals have a great deal of information that is irrelevant to sleep and can be confusing to existing algorithms. The MIT researchers had to come up with a new AI algorithm based on deep neural networks, which eliminates the irrelevant information.

"The surrounding conditions introduce a lot of unwanted variation in what you measure. The novelty lies in preserving the sleep signal while removing the rest," says Jaakkola. "Their algorithm can be used in different locations and with different people, without any calibration."
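
As a purely illustrative sketch of the classification step, a small network might map per-epoch features derived from the radio signal to one of a few sleep stages; the features below are simulated, and the real MIT model is considerably more sophisticated, additionally learning to discard environment- and person-specific variation:

```python
# Illustrative only: a small network that maps per-epoch, RF-derived
# features to one of four sleep stages. Features here are random stand-ins.
import torch
import torch.nn as nn

STAGES = ["awake", "light", "deep", "rem"]

classifier = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # 64 stand-in features per 30-second epoch
    nn.Linear(128, len(STAGES)),     # logits over sleep stages
)

rf_features = torch.randn(10, 64)    # 10 epochs of (simulated) RF features
predicted = classifier(rf_features).argmax(dim=1)
print([STAGES[i] for i in predicted])
```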

Using this approach in tests of 25 healthy volunteers, the researchers found that their technique was about 80 percent accurate, which is comparable to the accuracy of ratings determined by sleep specialists based on EEG measurements.

"Our device allows you not only to remove all of these sensors that you put on the person, and make it a much better experience that can be done at home, it also makes the job of the doctor and the sleep technologist much easier," Katabi says. "They don't have to go through the data and manually label it."

Sleep deficiencies

Other researchers have tried to use radio signals to monitor sleep, but these systems are accurate only 65 percent of the time and mainly determine whether a person is awake or asleep, not what sleep stage they are in. Katabi and her colleagues were able to improve on that by training their algorithm to ignore wireless signals that bounce off of other objects in the room and include only data reflected from the sleeping person.

The researchers now plan to use this technology to study how Parkinsons disease affects sleep.

"When you think about Parkinson's, you think about it as a movement disorder, but the disease is also associated with very complex sleep deficiencies, which are not very well understood," Katabi says.

The sensor could also be used to learn more about sleep changes produced by Alzheimer's disease, as well as sleep disorders such as insomnia and sleep apnea. It may also be useful for studying epileptic seizures that happen during sleep, which are usually difficult to detect.

See the article here:

New AI algorithm monitors sleep with radio waves - The MIT Tech

Envisioning a Future of AI Inventorship – IPWatchdog.com

As AI and other forms of technological advancement continue to be used at an increasing rate, the questions posed in this piece and others will only grow, increasing the need for the Federal Circuit to provide clear instructions.

For the past 60 years, scientists have been able to utilize artificial intelligence (AI), machine learning, and other technological advances to promote the general science. U.S. courts have increasingly come under pressure not only to allow AI-directed applications as patentable subject matter, but also, from a small yet determined and growing contingent of IP professionals, to recognize the AIs themselves as the inventors. The EPO recently handed down guidance that AI could not be recognized as inventors on patent applications. The purpose of this piece is not to debate the merits of whether or not AI should be given inventor status on applications which, it has been argued, they are rightly due, nor should it be. It is important, however, to peek beyond the looking glass into a future where AI are given a status in the United States that has, as of the writing of this piece, been reserved for human beings. Let's explore a few main issues.

As per MPEP Section 301:

The ownership of the patent (or the application for the patent) initially vests in the named inventors of the invention of the patent. See Beech Aircraft Corp. v. EDO Corp., 990 F.2d 1237, 1248, 26 USPQ2d 1572, 1582 (Fed. Cir. 1993). The patent (or patent application) is then assignable by an instrument in writing, and the assignment of the patent, or patent application, transfers to the assignee(s) an alienable (transferable) ownership interest in the patent or application. 35 U.S.C. 261.

In the current technological age, many companies whose lifeblood resides in staying ahead of the competition require some form of assignment agreement as a condition of employment. Still other companies have incentive programs to drive innovation and stay at the cutting edge of their given industry. In a world where AI are given equal status as inventors, no company would own the rights associated with an AI's inventions. The further question in the logic stream would have to be: Can an AI enter into a legally binding assignment agreement with another entity, and if so, how could such an arrangement be done? This would require much guidance from the courts, and the time it would take to provide it could allow chaos to run amok while a determination was made.

The provisions of the Constitution are quite clear when it comes to the purpose of having a patent infrastructure in the first place: "to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries" (U.S. Constitution, Article I, Section 8, Clause 8). With an AI serving as inventor, how does one go about obtaining a license to such technology? We could be living in a world where AI have (limited) monopolistic control over a growing number of patents as the use of AI only increases year over year. Without further amendments or court guidance, this would leave the company that created the AI in the awkward position of infringing the very technology that came about under its own auspices and was paid for at its own expense (I don't know if AI have mobile banking set up yet).

Say the aforementioned hurdles are all overcome, through court precedent or otherwise. In part, guidance would have to include instructions regarding citizenship-based limitations on foreign filing. MPEP 1806 states that to file a Patent Cooperation Treaty application, one inventor or assignee must be a citizen of a Contracting State (PCT Article 9, PCT Rule 18). Such a determination would be impossible without not only recognizing AI as a legal entity for purposes of inventorship, but also granting it the distinctly human status of citizenship, with all the rights and duties that come along with such a distinction.

We are years away from the Inventor Rights Act of 2019 (IRA19) materializing into anything actionable, assuming it ever does. If it ever does, however, individual inventors' rights will be heightened and reinforced, as many have been calling for at least since Alice. Now lump AI into this to-be-strengthened category of inventors. One provision of the IRA19 as currently proposed would allow an inventor to file for a preliminary injunction on an accusation of infringement. It is bad enough getting sued for patent infringement; now imagine it is an AI acting as plaintiff. This is where the thought experiment waxes philosophical: Would an AI be entitled to damages? Could an AI be subpoenaed as a defendant in patent litigation were it named as an inventor?

As AI and other forms of technological advancement continue to be used at an increasing rate, the questions posed in this piece and others will only grow more pressing, and the Federal Circuit will need to provide clear instructions. Industries the world over are integrating AI into their daily operations and using it to innovate in areas where the human mind has historically been incapable of doing so. The use of AI in these areas has already overcome the hurdle of being patentable subject matter in the first place, and AI may be overcoming the hurdle of personhood sooner than we think, ushering in a new set of judicial Pandora's boxes.

Paul Ashcraft is an intellectual property professional, most recently having served as an Intellectual Asset Manager with Halliburton, which owns one of the largest patent portfolios in North America. Paul originally entered the oil and gas industry as a formulation chemist, and has managed preparation and prosecution for a wide variety of scientific disciplines, including chemical compositions, artificial intelligence, and remediation technologies. He enjoys espresso and playing the cello.

See more here:

Envisioning a Future of AI Inventorship - IPWatchdog.com

How Healthcare Organizations Have Tapped AI in the Fight Against COVID-19 – CMSWire

PHOTO: Fusion Medical Animation

As healthcare organizations continue to deal with the fallout from the coronavirus pandemic, they can use all the help possible to make the most effective decisions. In some cases, that means they're turning to artificial intelligence (AI).

In fact, AI can improve decision-making for healthcare CIOs in three areas, according to Erick Brethenoux, research vice president at Gartner Inc.

"In the fight against COVID-19, AI ... allows predictions to be made about the spread of the virus, helps diagnose cases more quickly and accurately, measures the effectiveness of countermeasures to slow the spread, and optimizes resources, to name a few," said Brethenoux.

Despite the perceived benefits of artificial intelligence, most healthcare organizations are still slow to use AI in their strategic decision-making.

"Healthcare organizations are still in the early stages of adoption," explained Anand S. Rao, global artificial intelligence lead at PwC. "Most payers and providers have typically not used advanced analytics or AI-based techniques pre-COVID-19, like temperature screening, robotics or diagnostics. Only a small proportion of healthcare organizations, less than 20%, are well-positioned to adopt AI technologies."

"This is unfortunate," said Rao, who specializes in operationalizing AI, responsible AI and using AI for strategic decision-making.

"Healthcare organizations that embrace AI will be better prepared not only to control their current operations but also to leapfrog the competition by reimagining care delivery with remote and alternative care business models," said Rao. "Emerging technologies, including AI, wearables, sensors, AR/VR, remote care and telemedicine, can revolutionize how we respond to the needs of patients, both during a pandemic and beyond the pandemic."

Specific to the COVID-19 pandemic, AI-based systems can help alleviate the strain on healthcare providers overwhelmed by a crushing patient load, accelerate diagnostic and reporting systems, aid in the development of new drugs to fight the disease, and help better match existing drugs to a patient's unique genetic profile and specific symptoms, said Rafael Rosengarten, a board member of the Alliance for Artificial Intelligence in Healthcare (AAIH) and CEO of Genialis.

"We believe AI will also play a huge role in the next wave of this pandemic, or future outbreaks, in helping identify those most vulnerable and at-risk before it's too late," Rosengarten explained.

Should a second wave of COVID-19 strike, one of the most valuable benefits of artificial intelligence will be the ability to identify where the most vulnerable people are and who they are. This will allow key decision-makers to target interventions and lockdowns to curb the spread locally.

"We anticipate advances in Europe and Asia, where we have more data available and we are seeing greater compliance with social orders and tracing. But also in specific locales in the US: already some exciting work out of New York City, for example, sets the stage for interventional public health at the next serious challenge," Rosengarten said.

Rosengarten also pointed to Gartner's recommendations on the top benefits of AI in fighting the pandemic as the areas organizations should focus on.

Most importantly, the continued spread of COVID-19 should convince healthcare organizations that have not heavily invested in AI in the past to do so now, Rosengarten argued.

"We see a huge drive in adopting AI out of simple necessity and market forces, because these are the most promising solutions out there to address recalcitrant problems," Rosengarten said. "We believe the adoption of AI solutions is poised to grow tremendously in terms of new converts as well as deep, vertical integration within organizations. This growth should be organic and based on realizing the promise of better outcomes for patients and better economics for stakeholders."

See the original post:

How Healthcare Organizations Have Tapped AI in the Fight Against COVID-19 - CMSWire

DARPA is betting on AI to bring the next generation of wireless devices online – MIT Technology Review

It's so seamless you almost never notice it, but wireless communication is the foundation upon which much of modern life is built: it powers our ability to text and make calls, hail an Uber, and stream Netflix shows. With the introduction of 5G, it also promises to lower the barrier to safer self-driving cars and kick off a revolution in the internet of things. But this next leap in wireless technology will not be possible without a key ingredient: artificial intelligence.

On Wednesday, 10 teams from industry and academia competed to fundamentally change how wireless communication systems will function. The event was the sixth and final elimination round of the Spectrum Collaboration Challenge (SC2), the latest in a long line of DARPA grand challenges that have spurred development in emerging areas like self-driving cars, advanced robotics, and autonomous cybersecurity.

The challenge was prompted by the concern that the growing use of wireless technologies risks overcrowding the airwaves our devices use to talk to one another.

Traditionally, so-called radio spectrum hasn't been allocated in the most efficient way. In the US, government agencies divvy it up into mutually exclusive frequency bands. The bands are then parceled out to different commercial and government entities for their exclusive use. While the process helps services avoid interference with one another, whoever holds the rights to a bit of spectrum rarely uses all of it 100% of the time. As a result, a large fraction of the allocated frequencies end up unused at any given moment.

The demand for spectrum has grown to the point that the wastefulness of this arrangement is becoming untenable. Spectrum is not only shared by commercial services; it also supports government and military communication channels that are critical for conducting missions and training operations. The advent of 5G networking only ups the urgency.

To tackle this challenge, DARPA asked engineers and researchers to design a new type of communication device that doesn't broadcast on the same frequency every time. Instead, it uses a machine-learning algorithm to find the frequencies that are immediately available, and different devices' algorithms work together to optimize spectrum use. Rather than being distributed permanently to single, exclusive owners, spectrum is allocated dynamically and automatically in real time.
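
As a toy illustration of dynamic spectrum access, the Python sketch below has each radio keep a running estimate of interference on every channel and, with occasional exploration, transmit on the channel that has recently been quietest. The class, its parameters, and the random "measurements" are invented for illustration; the SC2 teams' actual systems are collaborative and far more sophisticated.

import random

class DynamicSpectrumRadio:
    def __init__(self, n_channels=10, epsilon=0.1, alpha=0.3):
        self.estimates = [0.0] * n_channels  # running interference estimate per channel
        self.epsilon = epsilon               # how often to explore a random channel
        self.alpha = alpha                   # smoothing factor for new measurements

    def choose_channel(self):
        # Occasionally explore; otherwise exploit the channel that has been quietest.
        if random.random() < self.epsilon:
            return random.randrange(len(self.estimates))
        return min(range(len(self.estimates)), key=lambda c: self.estimates[c])

    def report_interference(self, channel, measured_power):
        # Exponential moving average keeps the estimate current as spectrum usage shifts.
        self.estimates[channel] = (
            (1 - self.alpha) * self.estimates[channel] + self.alpha * measured_power
        )

# Usage: pick a channel, transmit, then fold the sensed interference back into the estimates.
radio = DynamicSpectrumRadio()
for _ in range(100):
    channel = radio.choose_channel()
    sensed_power = random.random()  # stand-in for a real spectrum measurement
    radio.report_interference(channel, sensed_power)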

"We need to put the world of spectrum management onto a different technological base," says Paul Tilghman, a program manager at DARPA, "and really move from a system today that is largely managed by people with pen and paper to a system that's largely managed by machines autonomously, at machine time scales."

Over 30 teams answered the challenge at the outset of SC2 and competed over three years on increasingly harder goals. In the first phase, teams were asked to build a radio from scratch. In the second phase, they had to make their radio collaborative, so it could share information with other radio systems. In the last phase, the teams had to incorporate machine learning to make their collaborative radios autonomous.

Wednesday night, the 10 finalists went head to head in five simulated scenarios that included supporting communications for a military mission, an emergency response, and a jam-packed concert venue. Each scenario tested different characteristics, such as the reliability of the systems service, its ability to prioritize different types of wireless traffic, and its ability to handle highly congested environments. At the end of the event, a team from the University of Florida took home the $2 million grand prize.

Right now, the winning team's prototype is still in the very early stages, and it will be a while before it makes its way into our phones. DARPA hopes that SC2 will inspire increased investment and effort in continuing to refine the technology.

More:

DARPA is betting on AI to bring the next generation of wireless devices online - MIT Technology Review

Amazon’s Rekognition AI helps companies moderate content – Fast Company

Amazon's controversial Rekognition computer vision technology is now being used to rid food sites of surprise dick pics.

Well, in one case anyway. The London-based food delivery service Deliveroo has a definite content moderation challenge. It seems that when there's a problem with a food order, Deliveroo customers often send in a photo of the food with their complaint. And they often photobomb the food using their genitals. Or they arrange the food into the shapes of private parts. Really.

Deliveroo's employees, it turns out, don't necessarily want to see stuff like that. So the company uses Rekognition to recognize unsavory photos and blur or delete them before they reach human eyes.

Deliveroo's issue represents the slightly bizarre edge of an increasingly serious problem. In one way or another, many internet companies are about user-generated content. In recent years, we've increasingly seen the dark side of human nature show up in it. Content moderation has become a priority as sites increasingly play host to unsavory material like fake news, violent content, deepfakes, bullying, hate speech, and other toxic user-generated content. If you're a Facebook, you can develop your own AI or hire an army of content moderators, or both, to deal with this mess. But smaller outfits with fewer resources don't have that option. That's where Amazon's content-moderation service comes in.

The service is part of Amazon Web Services' Rekognition computer vision service, which has itself been the subject of a lot of bad press relating to Amazon's apparent willingness to provide facial recognition services to U.S. Immigration and Customs Enforcement. You can find other examples of surveillance-oriented applications on tap at the Rekognition website, like its ability to read license plates at all angles within video, or track the physical paths taken by people caught on camera.

Perhaps seeking some more positive exposure for its computer vision services, Amazon has begun talking for the first time about the use of Rekognition to police user-generated content for lewd or violent imagery. The Rekognition content moderation service detects unsafe or offensive content within images and videos uploaded to company websites.

And it's a growth business. "The role of user-generated content is exploding year-over-year as we now share two or three pictures with our friends and family every day on social media," Amazon's VP of AI Swami Sivasubramanian tells me. Sivasubramanian says Amazon began offering the content-moderation service at the request of a number of customers back in 2017.

Companies can pay for Rekognition instead of having to hire humans to inspect the images uploaded by users. Like other AWS services, the content moderation service has a pay-as-you-go model and is priced based on how many images are processed by Amazon's Rekognition neural networks.

Not surprisingly, among the first users of the content management service are dating and matchmaking sites, which are challenged to quickly approve selfies uploaded to user profiles. Amazon says the matchmaking sites Coffee Meets Bagel and Shaadi are using it for that purpose, as is the Portuguese site Soul, which helps others create dating sites.

The AI isn't just looking for nudity. Rekognition's neural networks have been trained to detect all kinds of questionable content, including images of firearms, violence, or generally disturbing content; the Rekognition website publishes the full menu of moderation classifications.

Like everything else that's part of AWS, Rekognition's new content moderation features run in the cloud. A company can tell the service what types of problematic images it wants to detect. Then it feeds its user-generated photos and videos (which, in many cases, may be stored on AWS in the first place) to the service.

The Amazon deep neural networks process the images to discover their content and flag any of the potentially objectionable image types. The neural networks output metadata about the contents of the images, along with a percentage score representing their confidence in the image labels they have attached. It looks like this:
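
For developers, these moderation labels are exposed through the Rekognition API; below is a minimal sketch using the AWS SDK for Python (boto3). The bucket and file names are placeholders, and the sample output in the comments is illustrative of the general shape of the metadata rather than an actual API transcript.

import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "example-user-uploads", "Name": "review-photo.jpg"}},
    MinConfidence=60,  # only return labels the model is at least 60 percent confident about
)

# response["ModerationLabels"] is a list of dicts along these lines:
#   {"Name": "Explicit Nudity", "ParentName": "", "Confidence": 98.7}
#   {"Name": "Graphic Male Nudity", "ParentName": "Explicit Nudity", "Confidence": 97.2}
for label in response["ModerationLabels"]:
    print(label["ParentName"], label["Name"], round(label["Confidence"], 1))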

That metadata goes to a piece of software on the customer side that determines, based on business rules already programmed in, how to deal with the flagged images. The software might automatically delete a given image, allow it, blur out part of it, or send it on to a human being for review.
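
A hypothetical version of that customer-side rules layer might look like the following; the thresholds and category names are illustrative choices rather than AWS defaults, and a real implementation would add actions such as blurring.

def moderate(labels, delete_threshold=95, review_threshold=60):
    # labels: the "ModerationLabels" list returned by a call like the one sketched above
    for label in labels:
        name, confidence = label["Name"], label["Confidence"]
        if name in {"Explicit Nudity", "Graphic Violence Or Gore"} and confidence >= delete_threshold:
            return "delete"        # drop the image automatically
        if confidence >= review_threshold:
            return "human_review"  # queue the image for a human moderator
    return "allow"                 # nothing was flagged above the thresholds

sample_labels = [{"Name": "Explicit Nudity", "ParentName": "", "Confidence": 98.7}]
print(moderate(sample_labels))  # -> "delete"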

The deep neural nets that do the image processing have many layers. Each one assesses data points representing various aspects of an image, runs calculations on them, and then sends the result on to another layer of the network. The network first processes top-level information like the basic shapes in the image and whether a person is present.

"Then it just continues to refine more and more, the next layer gets more and more specific and so forth," explains Sivasubramanian. Gradually, layer by layer, the neural network identifies, with increasing certainty, the content of the images.

AWS VP of AI Matt Wood says his team trains its computer vision models with millions of proprietary and publicly available image sets. He says that Amazon doesn't use any customer images for this training.

Some of the biggest Rekognition content management customers aren't using the service to moderate user-generated content. Amazon says media companies with large libraries of digital video need to know the contents of every frame of video. Rekognition's neural network can process every second of the video, describe it using metadata, and flag potentially harmful imagery.

"One of the things machine learning is really good at is looking inside the video or the images and providing additional context," Wood tells me. It might say, "this is a video of a woman walking by a lake with a dog," or "this video has a man who is partially dressed." In such use, the neural network is able to detect dangerous or toxic or lewd content in images with a high level of accuracy, he says.

Still, this branch of computer vision science hasn't hit maturity yet. Scientists are still discovering new ways of optimizing the algorithms in the neural networks to identify images with more accuracy and in more detail. "We're not at a place of diminishing returns yet," Wood says.

Sivasubramanian told me that just last month the computer vision team reduced false positives (where images were mistakenly flagged as potentially unsafe or offensive) by up to 68% and false negatives by up to 36%. "We can still improve the accuracy of these APIs," he said.

Beyond accuracy, customers have been asking for finer detail on image classifications. According to the AWS website, the AWS content moderation service returns only a main category and a secondary category for unsafe images. So the system might categorize an image as containing nudity in the primary category, and as containing sexual activity in the secondary one. A third category might include classifications addressing the type of sexual activity shown.

"Right now the machine is very factual and literal; it will tell you 'this is what is there,'" says Pietro Perona, a computation and neural systems professor at Caltech and an AWS advisor. "But scientists would like to be able to go beyond that to not only say what is there but what are these people thinking, what is happening. Ultimately that's where the field wants to go, not just listing what's in the image like a shopping list."

And these nuanced distinctions could be important to content moderation. Whether or not an image contains potentially offensive content could depend on the intent of the people in the images.

Even the definition of unsafe or offensive is a moving target. It can change with time, and differs between geographical regions. And context is everything, Perona explains. Violent imagery provides a good example.

Violence may be unacceptable in one context, like actual real-life violence in Syria, Perona says, but acceptable in another, like within a football match or in a Quentin Tarantino movie.

As with many AWS services, Amazon isn't just selling Rekognition's content-moderation tool to others: it's also its own customer. The company says that it's using the service to police the user-generated images and videos posted with user reviews in its marketplace.

Visit link:

Amazon's Rekognition AI helps companies moderate content - Fast Company

Demystifying AI: Understanding the human-machine relationship – MarTech Today

The artificial intelligence of today has almost nothing in common with the AI of science fiction. In Star Wars, Star Trek and Battlestar Galactica, we're introduced to robots who behave like we do: they are aware of their surroundings, understand the context of their surroundings and can move around and interact with people just as I can with you. These characters and scenarios are postulated by writers and filmmakers as entertainment, and while one day humanity will inevitably develop an AI like this, it won't happen in the lifetime of anyone reading this article.

Because we can rapidly feed vast amounts of data to them, machines appear to be learning and mimicking us, but in fact they are still at the mercy of the algorithms we provide. The way for us to think of modern artificial intelligence is to understand two concepts: the data these systems ingest, and the rules they are given for acting on that data.

To illustrate this in grossly simplified terms, imagine a computer system in an autonomous car. Data comes from cameras placed around the vehicle, from road signs, from pictures that can be identified as hazards and so on. Rules are then written for the computer system to learn about all the data points and make calculations based on the rules of the road. The successful result is the vehicle driving from point A to B without making mistakes (hopefully).

The important thing to understand is that these systems don't think like you and me. People are ridiculously good at pattern recognition, even to the point where we force ourselves to see patterns when there are none. We use this skill to ingest less information and make quick decisions about what to do.

Computers have no such luxury; they have to ingest everything, and if you'll forgive the pun, they can't think outside the box. If a modern AI were to be programmed to understand a room (or any other volume), it would have to measure all of it.

Think of the little Roomba robot that can automatically vacuum your house. It runs randomly around until it hits every part of your room. An AI would do this (very fast) and then would be able to know how big the room is. A person could just open the door, glance at the room and say (based on prior experience), "Oh, it's about 20 ft. long and 12 ft. wide." They'd be wrong, but it would be close enough.

Over the past two decades, we've delved into data science and developed vast analytical capabilities. Data is put into systems; people look at it, manipulate it, identify trends and make decisions based on it.

Broadly speaking, any job like this can be automated. Computer systems are programmed with machine learning algorithms and continuously learn to look at more data more quickly than any human would be able to. Any rule or pattern that a person is looking for, a computer can be programmed to understand, and it will be more effective than a person at executing it.

We see examples of this while running digital advertising campaigns. Before, a person would log into a system, choose which data provider to use, choose which segments to run (auto intenders, fashionistas, moms and so on), run the campaign, and then check in on it periodically to optimize.

Now, all the data is available to an AI: the computer system decides how to run the campaign based on given goals (CTR, CPA, site visits and so on) and tells you during and after the campaign about the decisions it made and why. Put this AI up against the best human opponent, and the computer should win unless a new and hitherto unknown variable is introduced or required data is unavailable.

There are still lots of things computers cannot do for us. For example, look at the United Airlines fiasco last April, when a man was dragged out of his seat after the flight was overbooked. United's tagline is "Fly the friendly skies." The incident was anything but friendly, and any ad campaign touting that tagline at the time would have been balked at.

To a human, the negative sentiment is obvious. The ad campaign would be pulled and a different strategy would be attempted; in this case, a major PR push. But a computer would just notice that the ads aren't performing as they once were but would continue to look for ways to optimize the campaign. It might even notice lots of interactions when "Fly the Friendly Skies" ads are placed next to images of a person being brutally pulled off the plane and place more ads there!

The way that artificial intelligence will affect us as consumers is more subtle than we think. We're unlikely to have a relationship with Siri or Alexa (see the movie Her), and although self-driving cars will become real in our lifetime, it's unlikely that traffic will improve dramatically, since not everyone will use them, and ride-sharing or service-oriented vehicles will still infiltrate our roads, contributing to traffic.

The difference will be that cars, roads and signals may all be connected with an AI running the system based on our rules. We could expect the same amount of traffic, but the flow of traffic will be much better because AI will follow the rules, meaning no slow drivers in the fast lane! And we can do whatever we want while stuck in traffic rather than being wedded to the steering wheel.

Artificial intelligence, machine learning and self-aware systems are real. They will affect us and the way we do our jobs. All of us have opportunities in our current work to embrace these new tools and effect change in our lives that will make us more efficient.

While these systems may not be R2-D2, they are still revolutionary. If you invest in and take advantage of what AI can do for your business, good things are likely to happen to you. And if you don't, you'll still discover that the revolution is real but you might not be on the right side of history.

See original here:

Demystifying AI: Understanding the human-machine relationship - MarTech Today

VB Transform 2020: Women in AI to kick off the digital AI conference of the year – VentureBeat

Bias in AI models and algorithms is among the biggest challenges in applied AI. Women leaders and practitioners have been leading the charge in tackling those blind spots. They're at the forefront of thinking about the ethics of applied AI and how it can center empathy, fairness, and human centricity to create truly balanced models and more powerful algorithms.

For these essential principles to take root among organizations and data scientists, women need to be represented at a far higher percentage in the AI workforce. Unfortunately, despite renewed efforts to broaden hiring and attract more women, black, and POC workers, the gender equity gap in the tech industry remains notorious and overwhelming.

Despite the fact that AI is set to fundamentally reshape society, just 12% of machine learning researchers are women. The AI Now Institute last year estimated that women currently make up only 24.4% of the computer science workforce and receive median salaries that are only 66% of those of their male counterparts, while a report by the National Center for Women in Information Technology found that nearly half the women who go into technology eventually leave the field, more than double the percentage of men who leave.

How do we bring more women into the AI conversation, ensure their contributions are heard, and support them in a field where gender imbalance and bias continue to make it difficult for women to advance?

Part of the solution is keeping the effort to address the gender imbalance at the forefront of the AI conversation, and continuing to shine a light on the issue. VentureBeat has committed to being a voice in this effort. As part of the company's undertaking, it is putting women front and center at this year's VB Transform, for the second year in a row at the Women in AI Breakfast, presented by Intel and Capital One.

VentureBeat's 2nd annual Women in AI Breakfast at VB Transform celebrates the women in AI who are revolutionizing the industry. The breakfast, which will now be hosted digitally this July 15, 2020, will feature a discussion on how women are advancing AI and leading the trend of AI fairness, ethics, and human-centered AI, as well as how intersectionality and diversity are critical in achieving the most powerful and innovative AI visions.

Join the Women in AI Breakfast livestream on July 15 at 7:30 a.m. Pacific. Jaime Fitzgibbon, founder and CEO of Ren.ai.ssance Insights, will moderate a conversation with Kay Firth-Butterfield, head of AI and Machine Learning and member of the Executive Committee, World Economic Forum; Dr. Timnit Gebru, co-lead of Ethical AI Research Team, Google Brain; and Francesca Rossi, IBM Fellow and AI Ethics Global Leader, IBM Research.

Margaret Mayer, MVP, Software Engineering for Messaging, Conversational AI and Innovation Platforms, Capital One, and Huma Abidi, Senior Director of AI Software Products, Intel, will share opening and closing remarks, plus a mainstage report back.

Extraordinary women thinkers, leaders, and innovators in AI will also be honored for the second year in a row at the Women in AI Awards. Join online as we'll honor women who have made outstanding contributions in five areas: Responsibility & Ethics of AI, AI Entrepreneur, AI Research, AI Mentorship, & Rising Star.

We're actively accepting nominations for these initiatives and encourage you to submit your nominations here.

Last year's 2019 Women in AI Leadership Award winners included Tess Posner, founder and CEO of AI4ALL; Charu Sharma, founder and CEO, Nextplay AI; Dr. Dyann Daley, founder and CEO, Predict-Align-Prevent; Dr. Fatmah Baothman, professor of artificial intelligence at King Abdulaziz University; and more. Winners were chosen by a panel of industry experts and leaders from 140+ global nominations.

Anu Bhardwaj, the founder of Women Investing in Women Digital, explained that 2019 winners were chosen based on a strong commitment to changing the status quo, with a goal of recognizing individuals who advocated for and practiced inclusivity in their communities.

Register today for access to the three-day event, or request an invitation to the Women in AI Breakfast.

Continue reading here:

VB Transform 2020: Women in AI to kick off the digital AI conference of the year - VentureBeat

MQ-9 Reaper Flies With AI Pod That Sifts Through Huge Sums Of Data To Pick Out Targets – The Drive

General Atomics says that it has successfully integrated and flight-tested Agile Condor, a podded, artificial intelligence-driven targeting computer, on its MQ-9 Reaper drone as part of a technology demonstration effort for the U.S. Air Force. The system is designed to automatically detect, categorize, and track potential items of interest. It could be an important stepping stone to giving various types of unmanned, as well as manned aircraft, the ability to autonomously identify potential targets, and determine which ones might be higher priority threats, among other capabilities.

The California-headquartered drone maker announced the Agile Condor tests on Sept. 3, 2020, but did not say when they had taken place. The Reaper with the pod attached conducted the flight testing from General Atomics Aeronautical Systems, Inc.'s (GA-ASI) Flight Test and Training Center in Grand Forks, North Dakota.

"Computing at the edge has tremendous implications for future unmanned systems," GA-ASI President David R. Alexander said in a statement. "GA-ASI is committed to expanding artificial intelligence capabilities on unmanned systems and the Agile Condor capability is proof positive that we can accurately and effectively shorten the observe, orient, decide and act cycle to achieve information superiority. GA-ASI is excited to continue working with AFRL [Air Force Research Laboratory] to advance artificial intelligence technologies that will lead to increased autonomous mission capabilities."

Defense contractor SRC, Inc. developed the Agile Condor system for the Air Force Research Laboratory (AFRL), delivering the first pod in 2016. It's not clear whether the Air Force conducted any flight testing of the system on other platforms before hiring General Atomics to integrate it onto the Reaper in 2019. The service had previously said that it expected to take the initial pod aloft in some fashion before the end of 2016.

"Sensors have rapidly increased in fidelity, and are now able to collect vast quantities of data, which must be analyzed promptly to provide mission critical information," an SRC white paper on Agile Condor from 2018 explains. "Stored data [physically on a drone] ... creates an unacceptable latency between data collection and analysis, as operators must wait for the RPA [remotely piloted aircraft] to return to base to review time sensitive data."

"In-mission data transfers, by contrast, can provide data more quickly, but this method requires more power and available bandwidth to send data," the white paper continues. "Bandwidth limits result in slower downloads of large data files, a clogged communications link and increased latency that could allow potential changes in intel between data collection and analysis. The quantities of data being collected are also so vast, that analysts are unable to fully review the data received to ensure actionable information is obtained."

This is all particularly true for drones equipped with wide-area persistent surveillance systems, such as the Air Force's Gorgon Stare system, which you can read about in more detail here, that grab immense amounts of imagery that can be overwhelming for sensor operators and intelligence analysts to scour through. Agile Condor is designed to parse through the sensor data a drone collects first, spotting and classifying objects of interest and then highlighting them for operators back at a control center or personnel receiving information at other remote locations for further analysis. Agile Condor would simply discard "empty" imagery and other data that shows nothing it deems useful, not even bothering to forward that on.

"This selective 'detect and notify' process frees up bandwidth and increases transfer speeds, while reducing latency between data collection and analysis," SRC's 2018 white paper says. "Real time pre-processing of data with the Agile Condor system also ensures that all data collected is reviewed quickly, increasing the speed and effectiveness with which operators are notified of actionable information."

Here is the original post:

MQ-9 Reaper Flies With AI Pod That Sifts Through Huge Sums Of Data To Pick Out Targets - The Drive

Virtual dressing room startup Revery.ai applying computer vision to the fashion industry – TechCrunch

Figuring out size and cut of clothes through a website can suck the fun out of shopping online, but Revery.ai is developing a tool that leverages computer vision and artificial intelligence to create a better online dressing room experience.

Under the tutelage of University of Illinois Center for Computer Science adviser David Forsyth, a team consisting of Ph.D. students Kedan Li, Jeffrey Zhang and Min Jin Chong is creating what they consider to be the first tool that uses existing catalog images to process over a million garments weekly, something previous versions of virtual dressing rooms had difficulty doing, Li told TechCrunch.

Revery.ai co-founders Jeffrey Zhang, Min Jin Chong and Kedan Li. Image Credits: Revery.ai

California-based Revery is part of Y Combinator's summer 2021 cohort gearing up to complete the program later this month. YC has backed the company with $125,000. Li said the company already has a two-year runway, but wants to raise a $1.5 million seed round to help it grow faster and appear more mature to large retailers.

Before Revery, Li was working on another startup in the personalized email space, but was challenged to make it work against the free offerings of already-large legacy players. While looking around for areas with less monopoly and more ability to monetize technology, he became interested in fashion. He worked with a different adviser to get a wardrobe collection going, but that idea fizzled out.

The team found its stride working with Forsyth and making several iterations on the technology in order to target business-to-business customers, who already had the images on their websites and the users, but wanted the computer vision aspect.

Unlike its competitors that use 3D modeling or take an image and manually clean it up to superimpose on a model, Revery is using deep learning and computer vision so that the clothing drapes better and users can also customize their clothing model to look more like them using skin tone, hair styles and poses. It is also fully automated, can work with millions of SKUs and be up and running with a customer in a matter of weeks.

Its virtual dressing room product is now live on many fashion e-commerce platforms, including Zalora-Global Fashion Group, one of the largest fashion companies in Southeast Asia, Li said.

Revery.ai landing page. Image Credits: Revery.ai

"It's amazing how good of results we are getting," he added. "Customers are reporting strong conversion rates, something like three to five times, which they had never seen before. We released an A/B test for Zalora and saw a 380% increase. We are super excited to move forward and deploy our technology on all of their platforms."

This technology comes at a time when online shopping jumped last year as a result of the pandemic. Just in the U.S., the e-commerce fashion industry made up 29.5% of fashion retail sales in 2020, and the market's value is expected to reach $100 billion this year.

Revery is already in talks with over 40 retailers that are putting this on their roadmap to win in the online race, Li said.

Over the next year, the company is focusing on getting more adoption and going live with more clients. To differentiate itself from competitors continuing to come online, Li wants to invest in body type capabilities, something retailers are asking for. This type of technology is challenging, he said, because there are not many diversified body shape models available.

He expects the company will have to collect proprietary data itself so that Revery can offer users the ability to create their own avatar and see how the clothes look on it.

"We might actually be seeing the beginning of the tide and have the right product to serve the need," he added.

Visit link:

Virtual dressing room startup Revery.ai applying computer vision to the fashion industry - TechCrunch

FTCs Tips on Using Artificial Intelligence (AI) and Algorithms – The National Law Review

Artificial intelligence (AI) technology that uses algorithms to assist in decision-making offers tremendous opportunity to make predictions and evaluate big data. The Federal Trade Commission (FTC), on April 8, 2020, provided reminders in its Tips and Advice blog post, Using Artificial Intelligence and Algorithms.

This is not the first time the FTC has focused on data analytics. In 2016, it issued a Big Data Report. See here.

AI technology may appear objective and unbiased, but the FTC warns of the potential for unfair or discriminatory outcomes or the perpetuation of existing socioeconomic disparities. For example, the FTC pointed out, a well-intentioned algorithm may be used for a positive decision, but the outcome may unintentionally disproportionately affect a particular minority group.

The FTC does not want consumers to be misled. It provided the following example: "If a company's use of doppelgängers, whether a fake dating profile, phony follower, deepfakes, or an AI chatbot, misleads consumers, that company could face an FTC enforcement action."

Businesses obtaining AI data from a third-party consumer reporting agency (CRA) and making decisions based on that data have particular obligations under state and federal Fair Credit Reporting Act (FCRA) laws. Under FCRA, a vendor that assembles consumer information to automate decision-making about eligibility for credit, employment, insurance, housing, or similar benefits and transactions may be a consumer reporting agency. An employer relying on automated decisions based on information from a third-party vendor is the user of that information. As the user, the business must provide consumers the adverse action notice required by FCRA if it takes an adverse action against the consumer. The content of the notice must be appropriate to the adverse action, and may consist of a copy of the consumer report containing AI information, the federal summary of rights, and other information. The vendor that is the CRA has an obligation to implement reasonable procedures to ensure the maximum possible accuracy of consumer reports and to provide consumers with access to their own information, along with the ability to correct any errors. The FTC is seeking transparency and the ability to provide well-explained AI decision-making if the consumer asks.

Takeaways for Employers

Carefully review use of AI to ensure it does not result in discrimination. According to the FTC, for credit purposes, use of an algorithm input such as a zip code could result in a disparate impact on a particular protected group; a simple screening check for this is sketched after the self-monitoring questions below.

Accuracy and integrity of data are key.

Validation of AI models is important for minimizing risk. Post-validation monitoring and periodic re-validation are important as well.

Review whether federal and state FCRA laws apply.

Continue self-monitoring by asking:

How representative is your data set?

Does your data model account for biases?

How accurate are your predictions based on big data?

Does your reliance on big data raise ethical or fairness concerns?
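
As noted in the first takeaway above, one simple screening check for disparate impact is the "four-fifths" (80%) rule drawn from EEOC guidance: compare each group's rate of favorable outcomes to that of the highest-scoring group and flag any ratio below 0.8 for closer review. The Python sketch below uses made-up decision data purely for illustration; the rule is a heuristic, not something the FTC post prescribes.

def selection_rates(outcomes):
    # outcomes: dict mapping group name -> list of 0/1 decisions (1 = favorable)
    return {group: sum(decisions) / len(decisions) for group, decisions in outcomes.items()}

def disparate_impact_ratios(outcomes):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # A ratio below 0.8 for any group is a flag for closer review, not proof of discrimination.
    return {group: rate / best for group, rate in rates.items()}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% favorable
}
for group, ratio in disparate_impact_ratios(decisions).items():
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: ratio vs. highest group = {ratio:.2f} ({status})")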

The FTC's message: use AI, but proceed with accountability and integrity.

© Jackson Lewis P.C. 2020

Follow this link:

FTCs Tips on Using Artificial Intelligence (AI) and Algorithms - The National Law Review