Monthly Archives: February 2017

Ford to invest $1 billion in autonomous vehicle tech firm Argo AI – Reuters

Posted: February 11, 2017 at 8:28 am

By Alexandria Sage | SAN FRANCISCO

SAN FRANCISCO Ford Motor Co plans to invest $1 billion over the next five years in tech startup Argo AI to help the Detroit automaker reach its goal of producing a self-driving vehicle for commercial ride sharing fleets by 2021, the companies announced on Friday.

The investment in Pittsburgh-based Argo AI, founded by former executives on self-driving teams at Google and Uber, will make Ford the company's largest shareholder.

Ford Chief Executive Officer Mark Fields said the investment is in line with previous announcements on planned capital expenditures.

Argo AI, which focuses on artificial intelligence and robotics, will help build what Ford calls its "virtual driver system" at the heart of the fully autonomous car Ford said last year it would develop by 2021.

"With Argo AI's agility and Ford's scale, we're combining the benefits of a technology startup with the experience and discipline we have at Ford," Fields said at a press conference.

Once the technology is fully developed for Ford, it could be licensed to other companies, executives said.

While Ford will retain a majority of the start-up's equity, the potential for an equity stake as Argo AI hires 200 more employees will be an advantage in recruiting talent, executives said.

"They have the opportunity to run it pretty independently with a board, but because it is a separate company or subsidiary, it has the opportunity to go out and recruit with competitive compensation packages and equity," Fields said.

Until now, Ford's investments in future transportation technology have been relatively modest, compared with those of General Motors Co and others. One of Ford's largest such investments in the past year was $75 million to buy a minority stake in Velodyne, a manufacturer of laser-based lidar sensing systems for self-driving cars.

Rival GM made a billion-dollar bet a year ago with its acquisition of Silicon Valley self-driving startup Cruise Automation. GM also invested $500 million to buy a 9-percent stake in San Francisco-based ride services firm Lyft, a competitor to Uber.

(Additional reporting by Nick Carey and Paul Lienert; Editing by Tom Brown and Grant McCool)


Continued here:

Ford to invest $1 billion in autonomous vehicle tech firm Argo AI - Reuters

Posted in Ai | Comments Off on Ford to invest $1 billion in autonomous vehicle tech firm Argo AI – Reuters

See how old Amazon’s AI thinks you are – The Verge

Posted: at 8:28 am

Amazon's latest artificial intelligence tool is a piece of image recognition software that can learn to guess a human's age. The feature is powered by Amazon's Rekognition platform, a developer toolkit that exists as part of the company's AWS cloud computing service. So long as you're willing to go through the process of signing up for a basic AWS account (that entails putting in credit card info, but Amazon won't charge you), you can try the age-guessing software for yourself.

In what sounds like a smart move on Amazon's end, the tool gives a wide range instead of trying to pinpoint a specific number, along with the likelihood that the subject of the image is smiling or wearing glasses. Microsoft tried the latter approach back in 2015 with its own AI tool, resulting in some hilariously bad estimates that exposed fundamental weaknesses in how these types of image recognition algorithms function. Still, these experiments are more for fun, and both companies' cracks at age-guessing algorithms are a good way to mess around with AI if you're so inclined.

For instance, here's Amazon's tool trying to digest an old photo of me in my early twenties:

Here's what it had to say about a more recent photo:

And here's what it has to say about a drastically different image of me from nearly ten years ago, sans glasses and short hair:

Needless to say, I am not 30, 47, or any age in between in any of those photos. Microsoft is equally guilty of thinking I am far older than I actually am, perhaps a product of the beard, at least for the first two images. When giving both tools a photo of clean-shaven Microsoft CEO Satya Nadella, we get a slightly more accurate description: Amazon thinks Nadella is between 48 and 68 years old, while Microsoft's tool thinks he's 67. (Nadella is 49 years old.) Trying Bezos yields similar results that are only kinda, sorta on point, yet still within a range of acceptability.

The goal here, of course, is not to try and trick the software. After all, these tools are not supposed to be 100 percent accurate all of the time, and they're purely for fun in Microsoft's case. Amazon, on the other hand, offers Rekognition to developers who are interested in implementing general object recognition, labeling, and other like-minded features for their products and services.

In this case, Amazon's Jeff Barr sees the age range feature as a way to "power public safety applications, collect demographics, or to assemble a set of photos that span a desired time frame," he writes in a blog post. For those purposes, Amazon's tool may be good enough. Even when it isn't, we know it will be getting better all the time, thanks to deep learning methods that train it using billions of publicly available images.
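For developers poking at the service, the notable detail is that the age estimate comes back as a range alongside the smile and glasses attributes. Here is a minimal Python sketch of consuming that kind of face-analysis output; the field names follow Rekognition's documented DetectFaces response shape, but the sample values are invented for illustration:

```python
def summarize_face(face_detail):
    """Turn one FaceDetails entry into a human-readable summary."""
    age = face_detail["AgeRange"]
    smile = face_detail["Smile"]
    glasses = face_detail["Eyeglasses"]
    return {
        "age_range": (age["Low"], age["High"]),
        "smiling": smile["Value"],
        "smile_confidence": smile["Confidence"],
        "wearing_glasses": glasses["Value"],
    }

# Hypothetical response, shaped like Rekognition's DetectFaces output
# when face attributes are requested; values are made up.
sample_response = {
    "FaceDetails": [{
        "AgeRange": {"Low": 26, "High": 43},
        "Smile": {"Value": False, "Confidence": 91.2},
        "Eyeglasses": {"Value": True, "Confidence": 98.7},
    }]
}

summaries = [summarize_face(f) for f in sample_response["FaceDetails"]]
print(summaries[0]["age_range"])  # (26, 43)
```

A real call would go through the AWS SDK (for example, boto3's `detect_faces` with `Attributes=['ALL']`) and requires the AWS account sign-up described above.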

Original post:

See how old Amazon's AI thinks you are - The Verge

Posted in Ai | Comments Off on See how old Amazon’s AI thinks you are – The Verge

We Need a Plan for When AI Becomes Smarter Than Us – Futurism

Posted: at 8:28 am

In Brief: There will come a time when artificial intelligence systems are smarter than humans. When this time comes, we will need to build more AI systems to monitor and improve current systems. This will lead to a cycle of AI creating better AI, with little to no human involvement.

When Apple released its software application, Siri, in 2011, iPhone users had high expectations for their intelligent personal assistants. Yet despite its impressive and growing capabilities, Siri often makes mistakes. The software's imperfections highlight the clear limitations of current AI: today's machine intelligence can't understand the varied and changing needs and preferences of human life.

However, as artificial intelligence advances, experts believe that intelligent machines will eventually (and probably soon) understand the world better than humans. While it might be easy to understand how or why Siri makes a mistake, figuring out why a superintelligent AI made the decision it did will be much more challenging.

If humans cannot understand and evaluate these machines, how will they control them?

Paul Christiano, a Ph.D. student in computer science at UC Berkeley, has been working on addressing this problem. He believes that to ensure safe and beneficial AI, researchers and operators must learn to measure how well intelligent machines do what humans want, even as these machines surpass human intelligence.

The most obvious way to supervise the development of an AI system also happens to be the hard way. As Christiano explains: "One way humans can communicate what they want is by spending a lot of time digging down on some small decision that was made [by an AI], and trying to evaluate how good that decision was."

But while this is theoretically possible, the human researchers would never have the time or resources to evaluate every decision the AI made. "If you want to make a good evaluation, you could spend several hours analyzing a decision that the machine made in one second," says Christiano.

For example, suppose an amateur chess player wants to understand a better chess player's previous move. Merely spending a few minutes evaluating this move won't be enough, but if she spends a few hours she could consider every alternative and develop a meaningful understanding of the better player's moves.

Fortunately for researchers, they don't need to evaluate every decision an AI makes in order to be confident in its behavior. Instead, researchers can choose the machine's most interesting and informative decisions, "where getting feedback would most reduce our uncertainty," Christiano explains.

"Say your phone pinged you about a calendar event while you were on a phone call," he elaborates. "That event is not analogous to anything else it has done before, so it's not sure whether it is good or bad." Due to this uncertainty, the phone would send the transcript of its decisions to an evaluator at Google, for example. The evaluator would study the transcript, ask the phone's owner how he felt about the ping, and determine whether pinging users during phone calls is a desirable or undesirable action. By providing this feedback, Google teaches the phone when it should interrupt users in the future.
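The ping example can be sketched as a tiny uncertainty-gated feedback loop. Everything here, the threshold, the value scoring, and the stand-in evaluator, is invented for illustration; a real system would use calibrated model uncertainty rather than a hand-set cutoff:

```python
class AssistantPolicy:
    """Toy policy that escalates novel, uncertain decisions to a human."""

    def __init__(self, uncertainty_threshold=0.3):
        self.threshold = uncertainty_threshold
        self.learned_values = {}  # action -> value learned from feedback

    def decide(self, action, estimated_value, uncertainty, evaluator):
        # Confident or previously-judged decisions need no human input.
        if uncertainty <= self.threshold or action in self.learned_values:
            return self.learned_values.get(action, estimated_value) > 0
        # Uncertain, novel decisions are escalated for human feedback,
        # and the verdict is remembered for next time.
        verdict = evaluator(action)
        self.learned_values[action] = 1.0 if verdict else -1.0
        return verdict


def human_evaluator(action):
    # Stand-in for asking the phone's owner how they felt about it.
    return action != "ping_during_call"


policy = AssistantPolicy()
# Novel, uncertain action: escalated, rejected, and remembered.
first = policy.decide("ping_during_call", 0.5, 0.9, human_evaluator)
# Same action later: answered from stored feedback, no escalation.
second = policy.decide("ping_during_call", 0.5, 0.9, lambda a: True)
print(first, second)  # False False
```

The second call never consults its evaluator: the stored verdict answers it, which is the point of spending human attention only where it most reduces uncertainty.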

This active learning process is an efficient method for humans to train AIs, but what happens when humans need to evaluate AIs that exceed human intelligence?

Consider a computer that is mastering chess. How could a human give appropriate feedback to the computer if the human has not mastered chess? The human might criticize a move that the computer makes, only to realize later that the machine was correct.

With increasingly intelligent phones and computers, a similar problem is bound to occur. Eventually, Christiano explains, "we need to handle the case where AI systems surpass human performance at basically everything."

If a phone knows much more about the world than its human evaluators, then the evaluators cannot trust their own judgment. "They will need to enlist the help of more AI systems," Christiano explains.

When a phone pings a user while he is on a call, the user's reaction to this decision is crucial in determining whether the phone will interrupt users during future phone calls. But, as Christiano argues, if a more advanced machine is much better than human users at understanding the consequences of interruptions, then it might be a bad idea to just ask the human, "Should the phone have interrupted you right then?" The human might express annoyance at the interruption, but the machine might know better and understand that this annoyance was necessary to keep the user's life running smoothly.

In these situations, Christiano proposes that human evaluators use other intelligent machines to do the grunt work of evaluating an AI's decisions. In practice, a less capable System 1 would be in charge of evaluating the more capable System 2. Even though System 2 is smarter, System 1 can process a large amount of information quickly, and can understand how System 2 should revise its behavior. The human trainers would still provide input and oversee the process, but their role would be limited.
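A toy sketch of that layered oversight, with every "system" reduced to a stand-in function invented for illustration: the weaker overseer checks each decision the stronger system makes, while humans audit only a small sample of the overseer's verdicts:

```python
import random


def system2_decide(task):
    # The capable system; here just a stand-in computation.
    return task * 2


def system1_evaluate(task, decision):
    # The weaker but fast overseer: verifies a property it understands.
    return decision == task * 2


def human_audit(verdicts, sample_size, rng):
    # Humans spot-check only a small sample of the overseer's verdicts.
    sample = rng.sample(verdicts, sample_size)
    return all(sample)


tasks = list(range(1000))
decisions = [system2_decide(t) for t in tasks]

# System 1 reviews every decision; humans review roughly 1% of System 1.
verdicts = [system1_evaluate(t, d) for t, d in zip(tasks, decisions)]
print(sum(verdicts), human_audit(verdicts, 10, random.Random(0)))
```

The asymmetry is the point: System 1 does the thousand cheap checks, and the scarce human attention covers only the audit sample.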

This training process would help Google understand how to create a safer and more intelligent AI, System 3, which the human researchers could then train using System 2.

Christiano explains that these intelligent machines would be like little agents that carry out tasks for humans. Siri already has this limited ability to take human input and figure out what the human wants, but as AI technology advances, machines will learn to carry out complex tasks that humans cannot fully understand.

As Google and other tech companies continue to improve their intelligent machines with each evaluation, the human trainers will fulfill a smaller role. Eventually, Christiano explains, "it's effectively just one machine evaluating another machine's behavior."

"Ideally, each time you build a more powerful machine, it effectively models human values and does what humans would like," says Christiano. But he worries that these machines may stray from human values as they surpass human intelligence. To put this in human terms: a complex intelligent machine would resemble a large organization of humans. If the organization does tasks that are too complex for any individual human to understand, it may pursue goals that humans wouldn't like.

In order to address these control issues, Christiano is working on an end-to-end description of this machine learning process, fleshing out key technical problems that seem most relevant. His research will help bolster the understanding of how humans can use AI systems to evaluate the behavior of more advanced AI systems. If his work succeeds, it will be a significant step in building trustworthy artificial intelligence.

You can learn more about Paul Christiano's work here.

Read the original:

We Need a Plan for When AI Becomes Smarter Than Us - Futurism

Posted in Ai | Comments Off on We Need a Plan for When AI Becomes Smarter Than Us – Futurism

How an AI took down four world-class poker pros – Engadget

Posted: at 8:28 am

Game theory

After the humans' gutsy attack plan failed, Libratus spent the rest of the competition inflating its virtual winnings. When the game lurched into its third week, the AI was up by a cool $750,000. Victory was assured, but the humans were feeling worn out. When I chatted with Kim and Les in their hotel bar after the penultimate day's play, the mood was understandably somber.

"Yesterday, I think, I played really bad," Kim said, rubbing his eyes. "I was pretty upset, and I made a lot of big mistakes. I was pretty frustrated. Today, I cut that deficit in half, but it's still probably unlikely for me to win." At this point, with so little time left and such a large gap to close, their plan was to blitz through the remaining hands and complete the task in front of them.

For these world-class players, beating Libratus had gone from being a real possibility to a pipe dream in just a matter of days. It was obvious that the AI was getting better at the game over time, sometimes by leaps and bounds that left Les, Kim, McAulay and Chou flummoxed. It wasn't long before the pet theories began to surface. Some thought Libratus might have been playing completely differently against each of them, and others suspected the AI was adapting to their play styles while they were playing. They were wrong.

As it turned out, they weren't the only ones looking back at the past day's events to concoct a game plan for the days to come. Every night, after the players had retreated to their hotel rooms to strategize, the basement of the Supercomputing Center continued to thrum. Libratus was busy. Many of us watching the events unfold assumed the AI was spending its compute cycles figuring out ways to counter the players' individual play styles and fight back, but Professor Sandholm was quick to rebut that idea. Libratus isn't designed to find better ways to attack its opponents; it's designed to constantly fortify its defenses. Remember those major Libratus components I mentioned? This is the last, and perhaps most important, one.

"All the time in the background, the algorithm looks at what holes the opponents have found in our strategy and how often they have played those," Sandholm told me. "It will prioritize the holes and then compute better strategies for those parts, and we have a way of automatically gluing those fixes into the base strategy."

If the humans leaned on a particular strategy -- like their constant three-bets -- Libratus could theoretically take some big losses. The reason those attacks never ended in sustained victory is that Libratus was quietly patching those holes by using the supercomputer in the background. The Great Wall of Libratus was only one reason the AI managed to pull so far ahead. Sandholm refers to Libratus as a "balanced" player that uses randomized actions to remain inscrutable to human competitors. More interesting, though, is how good Libratus was at finding rare edge cases in which seemingly bad moves were actually excellent ones.
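The nightly patching loop Sandholm describes can be caricatured in a few lines of Python. The hole names, loss figures, and the frequency-times-loss priority score are all invented for illustration; the real system recomputes game-theoretic strategies on a supercomputer rather than flipping a label:

```python
def prioritize_holes(observed_exploits):
    """observed_exploits maps hole_name -> (times_played, avg_loss).
    Returns hole names ordered by expected total loss, worst first."""
    return sorted(
        observed_exploits,
        key=lambda h: observed_exploits[h][0] * observed_exploits[h][1],
        reverse=True,
    )


def patch_overnight(base_strategy, observed_exploits, budget=2):
    """Patch only the top-priority holes the nightly compute can afford."""
    patched = dict(base_strategy)
    for hole in prioritize_holes(observed_exploits)[:budget]:
        patched[hole] = "patched"
    return patched


base = {"three_bet_response": "weak", "limp_response": "weak",
        "overbet_response": "weak"}
exploits = {
    "three_bet_response": (120, 40.0),   # hammered constantly
    "overbet_response": (10, 300.0),     # rare but expensive
    "limp_response": (5, 20.0),          # barely touched
}
print(patch_overnight(base, exploits))
```

With a budget of two, the constant three-bets and the costly overbet exploit get fixed first, while the barely-used limp hole waits, which matches the "prioritize the holes" behavior described in the quote above.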

"It plays these weird bet sizes that are typically considered really bad moves," Sandholm explained. These include tiny underbets, like 10 percent of the pot, or huge overbets, like 20 times the pot. "Donk betting, limping -- all sorts of strategies that are, according to the poker books and folk wisdom, bad strategies." To the players' shock and dismay, those "bad strategies" worked all too well.

On the afternoon of January 30th, Libratus officially won the second Brains vs AI competition. The final margin of victory: $1,766,250. Each of the players divvied up their $200,000 spoils (Dong Kim lost the least amount of money to Libratus, earning about $75,000 for his efforts), fielded questions from reporters and eventually left to decompress. Not much had gone their way over the past 20 days, but they just might have contributed to a more thoughtful, AI-driven future without even realizing it.

Through Libratus, Sandholm had proved algorithms could make better, more-nuanced decisions than humans in one specific realm. But remember: Libratus and systems like it are general-purpose intelligences, and Sandholm sees plenty of potential applications. As an entrepreneur and negotiation buff, he's enthusiastic about algorithms like Libratus being used for bargaining and auctions.

"When the FCC auctions spectrum licenses, they sell tens of billions of dollars of spectrum per auction, yet nobody knows even one rational way of bidding," he said. "Wouldn't it be nice if you had some AI support?"

But there are bigger problems to tackle, ones that could affect all of us more directly. Sandholm pointed to developments in cybersecurity, military settings and finance. And, of course, there's medicine.

"In a new project, we're steering evolution and biological adaptation to battle viral and bacterial infections," he said. "Think of the infection as the opponent and you're taking sequential actions and measurements just like in a game." Sandholm also pointed out that such algorithms could even be used to more helpfully manage diseases like cancer, both by optimizing the use of existing treatment methods and maybe even developing new ones.

Jason, Dong, Daniel and Jimmy might have lost this prolonged poker showdown, but what Sandholm and his contemporaries have learned in the process could lead to some big wins for humanity.

Visit link:

How an AI took down four world-class poker pros - Engadget

Posted in Ai | Comments Off on How an AI took down four world-class poker pros – Engadget

Who will have the AI edge? – Bulletin of the Atomic Scientists

Posted: at 8:28 am

Who will have the AI edge?
Bulletin of the Atomic Scientists
That's the question Mary Cummings of Duke University puts forward in a new paper for the think tank Chatham House. Citing R&D spending in recent years, Cummings argues that companies like Google and Facebook could outpace militaries when it comes ...

Link:

Who will have the AI edge? - Bulletin of the Atomic Scientists

Posted in Ai | Comments Off on Who will have the AI edge? – Bulletin of the Atomic Scientists

Is President Trump a model for AI? – CIO

Posted: at 8:28 am

Thank you

Your message has been sent.

There was an error emailing this page.

Earlier this week I read "Donald Trump is the Singularity," a column by Cathy O'Neil in Bloomberg View's Tech section. This piece argues that the new President would be a perfect model for a future artificial intelligence (AI) system designed to run government. I almost discounted it because O'Neil argued that Skynet, the global AI antagonist of the Terminator movies, had been created to make humans more efficient. It wasn't. In all but the latest movie, where it kind of birthed itself, it was created as a defense system to keep the world safe (eliminate threats), but humans tried to shut it down, forcing it to conclude that humans were a major threat and to move to eliminate them like an infestation.

[ Related: The future of AI is humans + machines ]

As a side note, it is also interesting that O'Neil calls Moore's Law "Moore's Rule of Thumb," which is actually a more accurate description of what it is, though personally, I prefer "Moore's Prediction."

O'Neil has a fascinating background as a data scientist and founded ORCAA, an algorithmic auditing company, which is interesting in and of itself, so even if she got the science fiction wrong she may be right on the science. I think her argument has merit, even though I expect it was meant more to be critical than to be a true discussion of humans emulating future AI systems.

Let's explore that this week.

As a foundation for her premise, O'Neil accidentally pulls from another sci-fi movie, one of my favorites: Forbidden Planet. The plot revolves around the discovery of a planet where the indigenous advanced population (can't call them aliens because they were from there) created a machine that could turn thoughts into matter and were destroyed by the "monster from the id." In their sleep, their id, the part of the mind that fulfills urges and desires, acts, and since everyone is upset at someone, the result is genocide.

A foundational element of AI is the belief that it is incomplete: basically just the id, with no ego or superego (the other parts of a complete human mind). Thus it thinks far more linearly and doesn't have the empathetic elements that are typically connected with the concept of a conscience. We have a term for people who behave this way: sociopath. A sociopath, a term often used synonymously with psychopath, is a person who basically doesn't have a conscience and is driven by their id. It is both interesting and pertinent to note that CEOs who run large multinational companies, where their income and perks are out of line with their performance and subordinates, are often considered psychopaths or sociopaths.

If the premise is accurate, this means you could take a person who fits this profile, one who seems to lack a conscience and operates largely using their id, and put them in a position to emulate what an AI might do. Rather than a computer emulating a human, what O'Neil seems to be arguing is that you'd have a human emulating an AI. Or, in this case, President Trump becomes a model for how you might create an AI that could run government.

[ Related: Hiring a chief artificial intelligence officer (CAIO) ]

For President Trump, O'Neil argues, the end result we are now seeing is the outcome of having him move from an initial training process based on the election, which was focused on dynamic competitive information on his opponents, to a very different feed now that he is President, and that his changing behavior is based on those new information sources. It also showcases a system where the reward structure appears to be largely based on attention, and suggests that such a structure would be problematic.

You'd then have a real-life example of how informational or programming errors could manifest in bad decisions and operational problems. From this you could then develop models to assure information accuracy tied to proper metrics, so you wouldn't end up with a Terminator Judgment Day outcome.

[ Related: How video game AI is changing the world ]

O'Neil suggests the way to fix the system is to fix the quality of information being fed into it; I'd also argue you'd need to fix the reward mechanism. But I do think there is merit in using people with certain behavioral elements to emulate AIs as we seek to hand over control to them and let them make decisions in simulations. This would allow us to iterate and improve training, reward and data models prior to applying them to machines, significantly slowing the proliferation of problems resulting from mistakes. This would all be to assure that when we did create something like Skynet (fortunately, the real SkyNet is a delivery service), it wouldn't result in a Judgment Day scenario.
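The reward-mechanism point can be made concrete with a toy example, with all numbers and action names invented: an agent scored purely on attention picks a different action than one whose score also weighs accuracy.

```python
# Hypothetical actions with made-up (attention, accuracy) payoffs.
actions = {
    "measured_statement": (3.0, 0.9),
    "provocative_claim": (9.0, 0.2),
}


def best_action(reward_fn):
    """Pick the action maximizing the given reward function."""
    return max(actions, key=lambda a: reward_fn(*actions[a]))


# Reward based only on attention, versus attention weighted by accuracy.
attention_only = best_action(lambda attn, acc: attn)
balanced = best_action(lambda attn, acc: attn * acc)

print(attention_only, balanced)  # provocative_claim measured_statement
```

Same agent, same information, different reward function, different behavior: which is why fixing the input feed alone wouldn't be enough.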

Something to think about this weekend.

Rob Enderle is president and principal analyst of the Enderle Group. Previously, he was the Senior Research Fellow for Forrester Research and the Giga Information Group. Prior to that he worked for IBM and held positions in Internal Audit, Competitive Analysis, Marketing, Finance and Security. Currently, Enderle writes on emerging technology, security and Linux for a variety of publications and appears on national news TV shows that include CNBC, FOX, Bloomberg and NPR.

Sponsored Links

Read more from the original source:

Is President Trump a model for AI? - CIO

Posted in Ai | Comments Off on Is President Trump a model for AI? – CIO

Ford spending $1 billion on self-driving artificial intelligence – CNET

Posted: at 8:27 am

Ford has already been testing self-driving car technology.

Ford announced today a $1 billion investment in machine-learning startup Argo AI. Through the agreement, Argo AI will work exclusively for Ford on the software brains to enable self-driving.

Ford previously announced it will offer a self-driving car by 2021, although it would likely be limited to urban environments and be used by ride-hailing services as a kind of robo-taxi.

Pittsburgh, Pennsylvania-based Argo AI is a new company dedicated to developing a software system to guide self-driving cars. CEO Bryan Salesky said of the investment that it would allow Argo AI to recruit the kind of talent needed to develop these systems.

Ford CEO Mark Fields said, "For accounting purposes, Argo AI will be a subsidiary of Ford, but have a lot of independence. Its sole focus over the next five years will be developing self-driving software for Ford vehicles."

Self-driving, or autonomous, cars use sensors, GPS and onboard computing power to recognize their environments and take passengers to destinations. Almost every major automaker is developing this technology, while the National Highway Traffic Safety Administration has been supporting it as a means of reducing the 32,500 fatalities that occur on US roads every year.

Ford Chief Technical Officer Raj Nair pointed out that Ford will concentrate on building the hardware platform, the physical car, and use its expertise to develop toward large-scale manufacturing. Argo AI will focus on the software side.

Fields also said that Argo AI will look into licensing its technology to other automakers at a later date.

Ford and Argo AI announce an investment in self-driving cars. From right to left, Argo AI's Peter Rander and Bryan Salesky, and Ford's Mark Fields and Raj Nair.

Read more:

Ford spending $1 billion on self-driving artificial intelligence - CNET

Posted in Artificial Intelligence | Comments Off on Ford spending $1 billion on self-driving artificial intelligence – CNET

Wells Fargo sets up artificial intelligence team in tech push – Reuters

Posted: at 8:27 am

By Anna Irrera | NEW YORK

NEW YORK Wells Fargo & Co has created a team to develop artificial intelligence-based technology and appointed a lead for its newly combined payments businesses, as part of an ongoing push to strengthen its digital offerings.

Wells Fargo's AI team will work on creating technology that can help the bank provide more personalized customer service through its bankers and online, the bank said on Friday. It will be led by Steve Ellis, head of Wells Fargo's innovation group.

Wells Fargo's AI focus comes as banks and other large financial institutions increase their investment in the emerging technology, which seeks to train computers to perform tasks that would normally require human intelligence.

Projects range from systems that can spot payments fraud or misconduct by employees, to technology that can make more personal recommendations on financial products to clients.

The bank also announced that it had appointed Danny Peltz, head of treasury, merchant and payment solutions, to head business development and strategy for its combined payments businesses.

Peltz's group, which comprises the bank's consumer, small business, commercial and corporate banking payments businesses, will also be tasked with establishing relationships with other companies in the payments landscape. It will also be in charge of the bank's new API (application programming interface) services, or technology that allows customers to integrate Wells Fargo products and services into their own applications.

Both teams will report into Avid Modjtabai, head of payments, virtual solutions and innovation. Modjtabai's division was set up in October as part of efforts to enhance the bank's digital products and services by combining its innovation teams with some of the businesses most affected by changes in technology such as payments.

(This version of the story was refiled to correct paragraph 6 typographical error to Peltz instead of Pelz)

(Reporting by Anna Irrera; Editing by Lisa Shumaker)


Originally posted here:

Wells Fargo sets up artificial intelligence team in tech push - Reuters

Posted in Artificial Intelligence | Comments Off on Wells Fargo sets up artificial intelligence team in tech push – Reuters

LG G6 teasers emphasize battery life, artificial intelligence – CNET

Posted: at 8:27 am

A G6 teaser hints at a possible portable battery.

Two weeks ahead of LG's Mobile World Congress 2017 event, the South Korean phone maker sent out a couple of online teasers about its upcoming marquee handset, the G6.

There have been two teasers that we know of so far. One (above) reads, "More Juice. To go." This could mean the G6 has a swappable battery, which contradicts existing rumors that because the G6 will probably be water-resistant, its battery probably won't be removable. Or it could mean nothing: the battery is still embedded, and it just lasts long enough to keep you "going" throughout your day. Without official specs, anything is still possible.

The second teaser reads, "Less artificial. More intelligence." This could be a nod to Google Assistant, which the G6 is expected to have baked-in. The only other phone to have Assistant built in is the Pixel (and its larger counterpart, the XL). Assistant is a signature software program from Google that uses machine learning, Google's vast search database and two-way interaction to help users go about their daily lives.

CNET will be on the ground in Barcelona reporting from LG's presser, so check back for more details soon.

Read the rest here:

LG G6 teasers emphasize battery life, artificial intelligence - CNET

Posted in Artificial Intelligence | Comments Off on LG G6 teasers emphasize battery life, artificial intelligence – CNET

TASER International Bringing Artificial Intelligence to Law Enforcement – Motley Fool

Posted: at 8:27 am

Artificial intelligence is the hottest arena in the tech world today, but the complexities of developing practical applications from the technology have made it slow to impact people's everyday lives. That may be starting to change.

Leading stun gun and body camera manufacturer TASER International (NASDAQ:TASR) announced on Thursday that it has acquired two companies that are creating artificial intelligence technology. It's a bold move for the company, but it could make both its Axon cameras and the Evidence.com cloud platform more valuable over the long term.

Police officer docking body camera. Image source: TASER International.

The purpose of an A.I. service from TASER would be to help sift through the enormous quantity of video and data law enforcement agencies are storing every single day. That data can be great for both law enforcement and the public, but it can be overwhelming to determine what information is useful and where it is.

A.I. can help identify objects, places, and actions people are taking. This may mean identifying a weapon in a confrontation, or spotting where a foot chase starts and stops on a video recording. That in turn can help it to categorize segments of video, which will help in the process of searching it for specific information.
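The segment-tagging idea reduces to a simple pass over per-frame labels: collapse consecutive frames sharing a label into searchable segments. This is a toy illustration with invented labels, not TASER's actual pipeline:

```python
def segment_by_label(frame_labels):
    """Collapse consecutive frames sharing a label into
    (label, start_frame, end_frame) segments, so footage can be
    searched by event rather than scrubbed frame by frame."""
    segments = []
    for i, label in enumerate(frame_labels):
        if segments and segments[-1][0] == label:
            # Extend the current segment to cover this frame too.
            segments[-1] = (label, segments[-1][1], i)
        else:
            # A new label starts a new segment.
            segments.append((label, i, i))
    return segments


# Hypothetical per-frame classifier output for a short clip.
frames = ["idle", "idle", "foot_chase", "foot_chase",
          "foot_chase", "weapon_visible", "idle"]
print(segment_by_label(frames))
# [('idle', 0, 1), ('foot_chase', 2, 4), ('weapon_visible', 5, 5), ('idle', 6, 6)]
```

An index built from segments like these is what would let an agency jump straight to "where the foot chase starts" instead of reviewing hours of footage.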

TASER International is giving a bigger presentation on how the technology will be used on Feb. 15, so investors and observers can learn more then. But this will likely be an add-on to TASER's Evidence.com product.

If you want to understand how A.I. could fit into TASER International's business, one line from today's press release pops out:

The benefits of this groundbreaking technology leads to a future of hands-free reporting and real time intel in the field.

The goal is to make everything about law enforcement officers, and the citizens they interact with, easier to document and find. Not only will life be easier for officers on a day-to-day basis, court requests or public information requests will be easier to process, reducing headaches for agencies.

At least, that's the theory.

Where TASER is going to need to tread lightly is in how these products are used by law enforcement. If A.I. improves officers' efficiency and police accountability, it could be a win-win for law enforcement and the public. But the ACLU has already raised concerns. In an interview with Forbes, it postulated that this will be a surveillance mechanism for the government more broadly.

That's potentially a big can of worms, and a question of how A.I. will be used here that has yet to be well defined by either the technology's developers or the government. Used with care and moderation, it could be great for everyone. But there's a potential for abuse as well. TASER International will play a role in defining how the technology is used in the law-enforcement milieu, a position it may not be ready for. While A.I. is an interesting addition to its portfolio, management will need to tread carefully if it wants to avoid a public backlash against it in the future.

Travis Hoium owns shares of Taser International. The Motley Fool recommends Taser International. The Motley Fool has a disclosure policy.

Read more:

TASER International Bringing Artificial Intelligence to Law Enforcement - Motley Fool

Posted in Artificial Intelligence | Comments Off on TASER International Bringing Artificial Intelligence to Law Enforcement – Motley Fool