
Category Archives: Ai

The US Army Wants to Reinvent Tank Warfare with AI – Defense One

Posted: October 20, 2019 at 10:32 pm

A new project aims to make the battlefield more transparent while relying on robots and software that think in unpredictable ways.

Tank warfare isn't as easy to predict as hulking machines lumbering across open spaces would suggest. In July 1943, for instance, German military planners believed that their advance on the Russian city of Kursk would be over in ten days. In fact, that attempt lasted nearly two months and ultimately failed. Even the 2003 Battle of Baghdad, in which U.S. forces had air superiority, took a week. For the wars of the future, that's too slow. The U.S. Army has launched a new effort, dubbed Project Quarterback, to accelerate tank warfare by synchronizing battlefield data with the aid of artificial intelligence.

The project, about a month old, aims for an AI assistant that can look out across the battlefield, taking in all the relevant data from drones, radar, ground robots, satellites, cameras mounted in soldier goggles, etc., and then output the best strategy for taking out the enemy with whatever weapons are available. Quarterback, in other words, would help commanders do two things better and faster: understand exactly what's on the battlefield, and then select the most appropriate strategy based on the assets available and other factors.

Just the first part of that challenge is huge. The amount of potentially usable battlefield data is rapidly expanding, and it takes a long time to synchronize it.

"Simple map displays require 96 hours to synchronize a brigade or division targeting cycle," Kevin McEnery, the deputy director of the Army's Next Generation Combat Vehicle Cross Functional Team, said on Thursday at an event at the National Robotics Engineering Center. One goal is to bring that down to 96 seconds with the assistance of AI, he said.


"All the vast array of current and future military sensors, aviation assets, electronic warfare assets, cyber assets, unmanned aerial systems, unmanned ground systems, next-generation manned vehicles and dismounted soldiers will detect and geolocate an enemy on our battlefield. We need an AI system to help identify that threat, aggregate [data on the threat] with other sensors and threat data, distribute it across our command and control systems and recommend to our commanders at echelon the best firing platform for the best effects, be it an F-35, an [extended-range cannon] or a [remote-controlled vehicle]," McEnery said.

Ultimately, the Army is looking for a lot more than a data visualizer. They want AI to help with battle strategy, said Lt. Col. Jay Wisham, one of the program leaders. "How do you want to make decisions based on [battlefield data]? How do you want to select the most efficient way to engage a target, based on probability of hit, probability of kill? Do you have indirect fire assets available to you that you can request? Do you have real assets that you can request? Can I send you my wingman or does the computer then recommend, 'Red One, our wingman should take that target instead of you for x, y reasons'? That goes back to that concept of how you make a more informed decision, faster. And who is making that decision could be a tank commander or it could be a battalion commander," he said.
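
The selection logic Wisham describes, weighing probability of hit and kill against the assets available, can be sketched as a simple expected-value rule. This is an illustrative toy, not the Army's actual model; every platform name, probability, and cost below is invented:

```python
# Toy decision aid: recommend the firing platform with the best
# expected effect against a target. All numbers are invented.

def recommend_platform(platforms, target_range_km, target_value=20.0):
    """Pick the in-range platform maximizing expected engagement value."""
    in_range = [p for p in platforms if p["max_range_km"] >= target_range_km]
    if not in_range:
        return None
    # Expected payoff: value of a kill times P(kill), minus engagement cost.
    return max(in_range, key=lambda p: p["p_kill"] * target_value - p["cost"])

PLATFORMS = [
    {"name": "tank_main_gun", "p_kill": 0.75, "max_range_km": 3, "cost": 1.0},
    {"name": "extended_range_cannon", "p_kill": 0.55, "max_range_km": 60, "cost": 5.0},
    {"name": "f35_strike", "p_kill": 0.90, "max_range_km": 800, "cost": 50.0},
]

print(recommend_platform(PLATFORMS, target_range_km=2)["name"])   # tank_main_gun
print(recommend_platform(PLATFORMS, target_range_km=40)["name"])  # extended_range_cannon
```

The expensive, high-P(kill) asset only wins when cheaper options are out of range or the target is valuable enough, which is the trade-off the quote gestures at.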

The Army's future plans rely a lot not just on AI but also on ever-more-intelligent ground robots. Right now, a single U.S. Army operator can control about two ground robots. The Army's plan is to get that ratio to one human to a dozen robots. That will require those future ground robots not just to collect visual data but to actually perceive the world around them, designating (though primitively) objects in their field of perception. Those robots will have to make decisions with minimal human oversight as well, since the availability of high-bandwidth networking is hardly certain.

During the event, which was put on by the Army Research Lab, Carnegie Mellon researchers unveiled robotic experiments in which ground robots demonstrated that they could collect intelligence, maneuver autonomously and even decipher what it meant to move covertly, with minimal human commands. The robots learn and apply labels to objects in their environment after watching humans.

Relying on those sorts of robots will require a deeper dependence on small and large artificially intelligent systems that reach conclusions via opaque, neural-networked or deep-learning reasoning. Both of these are sometimes referred to as black-box learning processes because, unlike simple statistical models, it's difficult to tell how neural nets reach the decisions that they do. In other words, commanders and soldiers will have to become more comfortable with robots and software that produce outputs via processes that can't be easily explained, even by the programmers that produced them.

The way to develop that trust, said Wisham, is the same way humans develop trust in one another: slowly and with a lot of practice. "Most humans are not as explainable as we like to think … If you demonstrate to a soldier that the tool or the system that you are trying to enable them with generally functions relatively well and adds some capability to them, they will grow trust very, very rapidly."

But, he said, when it comes to big decision aids, that will be much harder.

Anthony Stentz, director of software engineering at Uber's Advanced Technologies Group, said, "You trust something because it works, not because you understand it. The way that you show it works is you run many, many, many tests, build a statistical analysis and build trust that way. That's true not only of deep learning systems but of other systems as well that are sufficiently complex. You are not going to prove them correct. You will need to put them through a battery of tests and then convince yourself that they meet the bar."
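
The point about running many, many tests and building a statistical analysis can be made concrete with a confidence bound on an observed pass rate. A minimal stdlib-only sketch using the Wilson score interval (the test counts are invented for illustration):

```python
import math

# Wilson score lower bound: how much trust does a battery of tests
# actually justify in a system's pass rate?

def wilson_lower_bound(successes, trials, z=1.96):
    """Lower end of the ~95% Wilson confidence interval for a pass rate."""
    if trials == 0:
        return 0.0
    phat = successes / trials
    centre = phat + z * z / (2 * trials)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * trials)) / trials)
    return (centre - margin) / (1 + z * z / trials)

# The same 95% observed pass rate justifies more trust with more tests:
print(wilson_lower_bound(95, 100))      # ~0.89
print(wilson_lower_bound(9500, 10000))  # ~0.95
```

The bound captures Stentz's argument numerically: it is the number of trials, not insight into the internals, that tightens the guarantee.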

The surging availability of big data and exascale computing through enterprise cloud architectures is also hastening a new state of neural networks and deep learning solutions, one that is potentially more transparent. "In machine learning, there's a lot of work going on precisely in this direction," said Dieter Fox, senior director of robotics research at NVIDIA. "Techniques are being developed [to] inspect these networks and see why these networks might come up with a certain recognition or solution or something like that." There's also important emerging research in fencing off neural networks and deep learning systems while they learn, including neural networks in robots: "How we can put this physical structure or constraints into these networks so that they learn within the confines of what we think is physically okay."


How big data and AI work together – The Enterprisers Project

Posted: at 10:32 pm

Big data isn't quite the term de rigueur that it was a few years ago, but that doesn't mean it went anywhere. If anything, big data has just been getting bigger.

That once might have been considered a significant challenge. But now, it's increasingly viewed as a desired state, specifically in organizations that are experimenting with and implementing machine learning and other AI disciplines.

"AI and ML are now giving us new opportunities to use the big data that we already had, as well as unleash a whole lot of new use cases with new data types," says Glenn Gruber, senior digital strategist at Anexinet. "We now have much more usable data in the form of pictures, video, and voice [for example]. In the past, we may have tried to minimize the amount of this type of data that we captured because we couldn't do quite so much with it, yet [it] would incur great costs to store it."



There's a reciprocal relationship between big data and AI: The latter depends heavily on the former for success, while also helping organizations unlock the potential in their data stores in ways that were previously cumbersome or impossible.

"Today, we want as much [data] as we can get, not only to drive better insight into business problems we're trying to solve, but because the more data we put through the machine learning models, the better they get," Gruber says. "It's a virtuous cycle in that way."

It's not as if storage and other issues with big data and analytics have gone bye-bye. Gruber, for one, notes that the pairing of big data and AI creates new needs (or underscores existing ones) around infrastructure, data preparation, and governance, for example. But in some cases, AI and ML technologies might be a key part of how organizations address those operational complexities. (Again, there's a cyclical relationship here.)


About that better-insight thing: How are AI, and ML as its most prominent discipline in the business world at the moment, helping IT leaders deliver that, whether now or in the future? Let us count some ways.

One of the fundamental business problems of big data could sometimes be summarized with a simple question: Now what? As in: We've got all this stuff (that's the technical term for it), and plenty more of it coming, so what do we do with it? In the once-deafening buzz around big data, it wasn't always easy to hear the answers to that question.

Moreover, answering that question or deriving insights from your data usually required a lot of manual effort. AI is creating new methods for doing so. In a sense, AI and ML are the new methods, broadly speaking.

Historically, when it comes to analyzing data, engineers have had to use a query or SQL (a list of queries). But as the importance of data continues to grow, a multitude of ways to get insights has emerged. "AI is the next step to query/SQL," says Steven Mih, CEO at Alluxio. "What used to be statistical models has now converged with computer science and become AI and machine learning."
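
Mih's contrast between a hand-written query and a learned model can be shown with a toy anomaly check. The data and the hard-coded threshold below are invented for illustration; this sketches the idea, not Alluxio's technology:

```python
# Two ways to answer "which orders look unusual?": a hand-coded,
# query-style rule versus a crude statistical model fit to the data.

orders = [12.0, 14.5, 13.2, 11.8, 95.0, 12.9, 13.7]

# Query-style: an analyst hard-codes the rule up front.
flagged_by_rule = [x for x in orders if x > 50]

# Model-style: estimate what "normal" looks like from the data itself
# and flag anything more than two standard deviations away.
mean = sum(orders) / len(orders)
std = (sum((x - mean) ** 2 for x in orders) / len(orders)) ** 0.5
flagged_by_model = [x for x in orders if abs(x - mean) > 2 * std]

print(flagged_by_rule)   # [95.0]
print(flagged_by_model)  # [95.0]
```

Both flag the same outlier here, but only the second approach adapts when the definition of "normal" shifts with the data, which is the step beyond query/SQL the quote describes.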

As a result, managing and analyzing data depends less on time-consuming manual effort than in the past. People still play a vital role in data management and analytics, but processes that might have taken days or weeks (or longer) are picking up speed thanks to AI.

"AI and ML are tools that help a company analyze its data more quickly and efficiently than could be done [solely] by employees," says Sue Clark, senior CTO architect at Sungard AS.

Mathias Golombek, CTO at Exasol, has observed a trend toward a two-tier strategy when it comes to big data, as organizations contend with the massive scope of the information they must manage if they're going to get any value from it: a storage layer, and an operational analytics layer that sits on top of it. News flash: the operational analytics layer is the one the CEO cares about, even if it can't function without the storage layer.


"That's where insights are extracted out of data and data-driven decisions take place," Golombek says. "AI is enhancing this analytics world with totally new capabilities to take semi-automatic decisions based on training data. It's not applicable for all questions you have for data, but for specific use cases, it revolutionizes the way you get rules, decisions, and predictions done without complex human know-how."

(In an upcoming post, we'll look at some use cases that illuminate how AI and big data combine forces, such as predictive maintenance, essentially predicting when a machine might fail, and other practical applications.)

In other words, insights and decisions can happen faster. Moreover, IT can apply similar principles using AI technologies to reduce manual, labor-intensive burdens and bring speed to the back-end stuff that, let's face it, few outside of IT want to hear about.

"The real-time nature of data insights, coupled with the fact that it exists everywhere now, siloed across different racks, regions, and clouds, means that companies are having to evolve from the traditional methods of managing and analyzing [data]," Mih from Alluxio says. "That's where AI comes in. Gone are the days of data engineers manually copying data around again and again, delivering datasets weeks after a data scientist requests them."

Like others, Elif Tutuk, associate VP of Qlik Research, sees AI and ML as powerful levers when it comes to big data.

"AI and machine learning, among other emerging technologies, are critical to helping businesses have a more holistic view of all of that data, providing them with a way to make connections between key data sets," Tutuk says. But, she adds, it's not a matter of cutting out human intelligence and insight.

"Businesses need to combine the power of human intuition with machine intelligence to augment these technologies, for augmented intelligence. More specifically, an AI system needs to learn from data, as well as from humans, in order to be able to fulfill its function," Tutuk says.

"Businesses that successfully combine the power of humans and technology are able to expand who has access to key insights from analytics beyond data scientists and business analysts, while saving time and reducing potential bias that may result from business users interpreting data. This results in more efficient business operations, quicker insights gleaned from data and ultimately increased enterprise productivity."


Are we modeling AI on the wrong brain? – The Boston Globe

Posted: at 10:32 pm

Octopuses are cephalopods, mollusks related to oysters. They have personalities, interact with their surroundings, and have expressions and memories. It is their approach to solving problems that intrigues those looking for a model for machines.

Many believe that mimicking the human brain is the optimal way to create artificial intelligence. But scientists are struggling to do this, due to the substantial intricacies of the human mind. Octopuses like Billye remind us that there is a vast array of nonhuman life that is worthy of emulation.


Much of the excitement around state-of-the-art artificial intelligence research today is focused on deep learning, which utilizes layers of artificial neural networks to perform machine learning through a web of nodes that are modeled on interconnections between neurons in the vertebrate brain cortex. While this science holds incredible promise, given the enormous complexity of the human brain, it is also presenting formidable challenges, including that some of these AI systems are arriving at conclusions that cannot be explained by their designers.
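
The web of nodes described above can be illustrated with a minimal, self-contained example: a two-layer network in which each node computes a weighted sum of its inputs and fires through a sigmoid nonlinearity. The weights below are arbitrary placeholders, not trained values:

```python
import math

# Minimal sketch of a neural network's "web of nodes": two hidden
# nodes feed one output node. Weights are arbitrary, not trained.

def neuron(inputs, weights, bias):
    """One node: weighted sum of inputs pushed through a sigmoid."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

def tiny_network(x):
    """A two-input, two-hidden-node, one-output network."""
    hidden = [neuron(x, [0.5, -0.4], 0.1), neuron(x, [-0.3, 0.8], 0.0)]
    return neuron(hidden, [1.0, -1.0], 0.2)

print(tiny_network([1.0, 2.0]))  # a value between 0 and 1
```

Even at this toy scale the opacity the article describes is visible: the output is a defensible number, but nothing in the weights announces *why* it came out that way.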

Maybe this should be expected, since humans do not know exactly how we make decisions either. We do not fully understand how our own brains work, nor do we even have a universally accepted definition of what human intelligence is. We don't exactly know why we sleep or dream. We don't know how we process memories. We don't know whether we have free will, or what consciousness is (or who has it). And one of the main obstacles currently in the way of our creating a high level of nuanced intellectual performance in machines is our inability to code what we call common sense.

Some scientists, however, oppose the obvious archetype, suggesting that trying to pattern synthetic intelligence predominantly on our own is unnecessarily anthropocentric. Our world has a wondrous variety of sentient organisms that AI can train computers to model; why not think creatively beyond species and try to engineer intelligent technology that reflects our world's prismatic diversity?

Roboticist Rodney Brooks thinks that nonhuman intelligence is what AI developers should be investigating. Brooks first began studying insect intelligence in the 1980s, and went on to build several businesses from the robots he developed (he co-invented the Roomba). When asked about his approach, Brooks said that it's unfair to claim that an elephant has no intelligence worth studying just because it does not play chess.

The range of skill, ingenuity, and creativity of our biological brethren on this planet is astounding. But a fixation on humans as the preeminent metric of intelligence discounts other species' unique abilities. Perhaps the most humbling example for humans is slime mold (Physarum polycephalum), a brainless and neuron-less organism (more like a collective organism or superorganism) that can make trade-offs, solve labyrinthian mazes, take risks, and remember where it has been. Some say slime mold could be the key to more efficient self-driving cars.

Roboticists are intrigued by the swarm intelligence of termites, as well as their and other creatures' stigmergy, a mechanism that allows them to collectively make decisions without directly communicating with one another by picking up signs left behind in the environment. Computer scientist Radhika Nagpal has been conducting research on the architectural feats of termites and the movement of schools of fish and flocks of birds. She thinks that we need to move away from a human-on-top mentality to design the next generation of robotics.
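
Stigmergy, deciding collectively by reading marks left in the environment rather than messaging one another, is easy to sketch in simulation. This toy model (two fixed paths, invented deposit and evaporation rates) shows the positive-feedback loop that lets a colony converge on the shorter route:

```python
import random

# Toy stigmergy: agents pick between two paths by reading pheromone
# marks left by earlier agents; no agent communicates directly.
random.seed(0)

pheromone = {"short": 1.0, "long": 1.0}
length = {"short": 1, "long": 3}  # the short path is three times shorter

for _ in range(200):
    total = pheromone["short"] + pheromone["long"]
    choice = "short" if random.random() < pheromone["short"] / total else "long"
    # Short trips finish sooner, so marks accumulate faster on that path.
    pheromone[choice] += 1.0 / length[choice]
    for path in pheromone:            # old marks slowly evaporate
        pheromone[path] *= 0.99

print(pheromone["short"] > pheromone["long"])  # True: converged on the short path
```

The environment itself is the shared memory: each agent only reads and writes local marks, yet a colony-level decision emerges.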

Octopuses like Billye possess what is called distributed intelligence, with two-thirds of their neurons residing in their eight arms, allowing them to perform various tasks both independently and at the same time. Researchers at Raytheon think that emulating octopuses' multifaceted brilliance is better suited for the robots they are constructing for space exploration. In his book Other Minds, Peter Godfrey-Smith suggests that observing octopus intelligence is the closest we will ever get to studying alien intelligence. Taking cues from the vision of hawks, the dexterity of cats, or the sense of smell of bears can expand the technological horizons of what's possible.

Humans have long mimicked nature and nonhuman life for our inventions, from modeling X-ray machines on the reflective eyesight of lobsters, to creating an ultrasound cane for the visually impaired based on echolocation (the sensory mechanism of bats), to simulating the anatomy of sea lampreys to make tiny robots that could someday swim through our bodies detecting disease.

Much like humans had to first let go of having to fly exactly like birds fly in order to crack the code of flight, we must now look beyond the widely held belief that the human mind is singular and unique as an intellectual model, and that replicating it is the only way artificial neural networks could truly be deemed intelligent. Being open to holding all beings in esteem, respecting their complexities and gifts, is foundational to building and valuing future intelligent machines.


Science continues to show us that we are not quite as sui generis as we may have thought; we are discovering now that certain attributes we assumed were reserved solely for humans moral judgment, empathy, emotions are also found across the spectrum of life on earth. Jessica Pierce and Marc Bekoff, in their book Wild Justice, establish that animals demonstrate nuanced emotions and moral behaviors, such as fairness and empathy. The authors maintain that animals are social beings that also have a sense of social justice.

Simply put: Humans are not the lone species whose study can serve as a guide for future forms of AI. We are but one form of intelligent life. Other living creatures exhibit incredible intelligence in a mosaic of mesmerizing ways. Spiders weave silk balloons to parachute and fly. Chimpanzees mourn their dead. So do orcas; as do elephants, who also have distinct personalities and can empathize and coordinate with each other. Crows create and use tools to gather food, and can also solve puzzles. Birds can form long-term alliances and display relationship intelligence. Bees can count and use dance to communicate complex information to the rest of their colonies. Pigeons have fantastic memories, can recognize words, perceive space and time, and detect cancer in image scans.

Humans have much to learn from the acumen of bees and termites, elephants and parrots; but some of us are still uncomfortable with the idea of nonhumans having thoughts and emotions. Is it because sanctioning their agency devalues our own? Our appreciation of animals does not follow the scientific evidence, and nonhumans remain mostly excluded from our notions of intelligence, justice, and rights.

As we strive to create machines that can think for themselves and possibly become self-aware, it's time to take a look in the mirror and ask ourselves not only what kind of AI we want to build, but also what kind of humans we want to be. Modeling AI on myriad forms of intelligence, drawing from the vast panoply of intelligent life, is not only a potential solution to the conundrum of how to construct a digital mind; it could also be a gateway to a more inclusive, peaceful existence; to preserving the life that exists on our only home.


For those speculating about how we may treat synthetically intelligent beings in the future, looking at how we have bestowed rights on other nonhumans is instructive. Our historical treatment of animals, or in truth of any being we are able to convince ourselves is "other" or less than human, does not bode well for their treatment and acceptance. The root of the word robot comes from the Old Church Slavonic word rabota, which means forced labor, perhaps a prescient forecast that we may be prone to consider AI as nothing more than a tool to do our work and bidding. Creating a hierarchy of intelligence makes it easy to assign lesser dignities to other thinking things. Insisting on absolute human supremacy in all instances does not portend well for us in the Intelligent Machine Age.

I believe that our failure to model AI on the human mind may ultimately be our salvation. It will compel us to assign greater value to all types of intelligence and all living things; and to ask ourselves difficult questions about which beings are worthy of emulation, respect, and autonomy. This (possibly last) frontier of scientific invention may be our chance to embrace our human limitations, our weaknesses, the glitches and gaps in our systems, and to expand our worldview beyond ourselves. Being willing to admit other species are brilliant could be the smartest thing we can do.

Flynn Coleman is the author of A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are, available now from Counterpoint Press. Send comments to ideas@globe.com.


Marc Benioff: We need to closely watch artificial intelligence to ensure it is a force for good – CNBC

Posted: at 10:32 pm

Artificial intelligence can be a force for good, but society needs to be careful to make sure its negative aspects do not outweigh its positives, Salesforce co-founder Marc Benioff told CNBC's Jim Cramer on Wednesday.

"AI has tremendous opportunity, but technology is never good or bad, it's what we do with the technology that matters," the billionaire entrepreneur and philanthropist said on "Mad Money."

Benioff, co-CEO and chairman of Salesforce, said there could be "dramatic consequences" as AI use in the military accelerates, for example. The Pentagon released its first AI strategy in February.

"But we can use AI for good as well," said Benioff, who is promoting "Trailblazer," the new book he co-authored with Salesforce executive Monica Langley.

Benioff pointed to a drone project that Salesforce is undertaking alongside experts from the University of California, Santa Barbara.

AI experts from Salesforce are working with university researchers to analyze drone footage, in almost real time, to identify great white sharks off the Southern California coast. Shark activity off the California coast has increased in recent years, prompting safety concerns for beachgoers.

"I just showed you how we have a drone running in Santa Barbara with AI, which spotted a great white shark heading for a surf camp," Benioff told Cramer. "And they called the beach, able to get the kids off the beach that's AI for good."

Benioff said he considers AI to be one of the central components of the Fourth Industrial Revolution, which is "characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres," according to the World Economic Forum.

"There's a lot of AI for good, there's gonna be AI for other things too," Benioff said. "We need to keep our eye on both."

Disclosure: Cramer's charitable trust owns shares of Salesforce.com.



Heres Whats Next At The Explosive Intersection Of AI And On-Line Education – Forbes

Posted: at 10:32 pm


Artificial intelligence is poised to disrupt many industries, but the education arena has not typically been at the forefront of such conversations. If it has been included at all, the narrative has been more about abstraction than actual application. And even though several companies such as Carnegie Learning and Content Technologies, Inc. have taken either more adult-learning approaches or ones deeply rooted in tech, the space is still anyone's game, with new trends to be developed for Gen Z.

The industry is an important one not only for its ability to generate an entirely new level of learning but also because of the very real business opportunity in the space. Indeed, artificial intelligence in education is forecast to reach a market size of $6 billion by 2024. Thus, the race is on among companies that can consistently produce quality content with economical pricing supported by artificial intelligence and machine learning. While there are many startups entering the fray, an established company called UnfoldU has not only created a well-respected brand in India but is also now poised to bring its expertise to North America, the United Kingdom and Australia.

UnfoldU says the company is now nearing one million users per year and is currently in its fifth year of operation. It built a successful business model by targeting middle-class families that never had access to online education but wanted it to supplement education in physical, traditional avenues. Founder Harish Bajaj decided to self-fund both the educational platform and the infrastructure needed to support it over the Internet in India. Although Bajaj began his career in marketing, he was able to pivot, focusing on hiring top tech developers in the region to convert his vision into reality. To finance the endeavor he borrowed funds from people who believed in his business idea and invested his entire savings.

Once launched, the result was that families began to use UnfoldU to either replace or supplement their child's educational needs. The business built over time because parents could actually track the growth of their child's formal education in new ways, and they could see results.

"UnfoldU is focusing on school education from first to twelfth grade," explains Bajaj. "We are not providing any certificate courses or MOOC courses; we are only focused on primary and secondary school education, and we are now providing an extension to school studies with content that is backed by artificial intelligence and machine learning."

[Image: a student of UnfoldU supported by the power of AI]

Bajaj says that young students in India are using the platform to enhance their studies, with each user's experience personalized through such emerging tech capabilities. If a student is acquiring knowledge at a faster pace, the app automatically adjusts itself. If a student is taking more time or has issues, the app also automatically adjusts, simplifying the course in an appropriate manner.
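
The pacing loop described, advancing strong students and simplifying for struggling ones, can be sketched as a simple adjustment rule. The thresholds, step sizes, and function name below are invented for illustration and are not UnfoldU's actual algorithm:

```python
# Toy adaptive-pacing rule: move a lesson's difficulty level based
# on a student's recent scores. All thresholds are invented.

def adjust_level(level, recent_scores, min_level=1, max_level=10):
    """Raise, lower, or hold the difficulty based on average score."""
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= 0.8:        # learning fast: offer harder material
        level += 1
    elif avg < 0.5:       # struggling: simplify the course
        level -= 1
    return max(min_level, min(max_level, level))

print(adjust_level(5, [0.9, 0.85, 0.8]))   # 6
print(adjust_level(5, [0.4, 0.3, 0.5]))    # 4
print(adjust_level(5, [0.6, 0.7, 0.65]))   # 5
```

A production system would presumably fold in the behavioral and cognitive signals Bajaj mentions, but the feedback loop, measure, compare, adjust, is the same shape.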

"I started this journey from scratch, and in the early days, accumulating talent for developing quality online education content was very difficult, expensive and elusive. People have now started to realize the potential of internet-based education, and in the coming few years it will be massively adopted," explains Bajaj.

Having overcome those hurdles successfully in this market, Bajaj is eager to move into additional territories and position the brand as both an online school content provider and an AI and ML technology company. The company has developed proprietary technology that focuses on the attitude and learning speed of a student. The code is based on parameters such as learning speed and exam performance. However, Bajaj explains that questions are neither built nor fetched from a common pool; instead, they are created on the basis of several internal parameters tied to additional variables around behavior and cognitive responses. An additional competitive advantage of the system is that the AI also informs parents about performance via computer-generated voice calls that reach the parents and share the feedback. The parents can then communicate with the AI, which is further integrated into the system. UnfoldU's focus is to eventually cross into the currently controversial realm of having AI completely replace teachers, to ensure a faster and more robust educational method.

Bajaj says that AI in education goes beyond creating content. "The real secret is using it to actually create smart content which is auto-generated. In the next 20-25 years, we may actually witness the human brain directly being programmed," he adds.

In a twist, UnfoldU is also going to launch an IEO to take the company to the next level. The tokens will have real use cases in that they can be used to buy courses. The company has already launched its whitepaper and will hit the markets next month. Its tokens are currently trading on the BitLux OTC exchange. The additional funds are intended to help the company move deeper into its plans to mesh AR and VR with education, helping its customers learn with greater impact and retain information better.

"In the 1980s, faster computers were just a dream; now even the smartphones in our pockets are 50 times more powerful than the supercomputers of that era. Using smart education backed by artificial intelligence will become just as commonplace and advanced. We do not wish to replace teachers completely, but to make quality education more accessible to students who cannot afford it. Nothing can replace the human mind, but supporting it with advanced technology couldn't hurt."

Read more from the original source:

Heres Whats Next At The Explosive Intersection Of AI And On-Line Education - Forbes


Ethical AI: What can the world learn from California? – World Economic Forum


Amid growing concern over the threat of AI-enabled systems to perpetuate discrimination and bias and infringe upon privacy, California has introduced several bills intended to curb negative impacts. Primary among them are bills related to mitigating the negative impacts of specific AI-enabled technologies such as facial recognition systems. On May 14, 2019, San Francisco became the first major US city to ban the use of facial recognition technology by city agencies and law enforcement. Two months later, the neighbouring city of Oakland implemented similar restrictions.

These may be city-level laws, but their passing has influenced state and federal legislation. In California, a bill called the Body Camera Accountability Act seeks to prohibit the use of facial recognition in police body cameras, while another would require businesses to publicly disclose their use of facial recognition technology. At the federal level, four pieces of legislation are currently being proposed to limit the use of this technology, especially in law enforcement.

In the wake of the EU's transformative General Data Protection Regulation, California passed the first domestic data privacy law in the US. The California Consumer Privacy Act (CCPA) became law in 2018 and is set to go into effect in January 2020. The CCPA gives consumers the right to ask businesses to disclose the data they hold on them, request deletion of data, restrict the sale of their data to third parties, and sue for data breaches. This Act has made its influence felt at the federal level too, prompting the development of a federal data privacy law. These data privacy laws are particularly relevant to data-dependent fields like AI.

In response to the serious threat that AI-enabled bots and deepfakes pose for election integrity, the California government has pushed forward progressive pieces of legislation that have influenced federal and international efforts. Passed in 2018, the Bots Disclosure Act makes it unlawful to use a bot to influence a commercial transaction or a vote in an election without disclosure in California. This includes bots deployed by companies in other states and countries, which requires those companies to either develop bespoke standards for Californian residents or harmonize their strategies across jurisdictions to maintain efficiency. At the federal level, the Bots Disclosure and Accountability Act includes many of the same strategies proposed in California. The California Anti-Deepfakes Bill seeks to mitigate the spread and impact of malicious political deepfakes before an election and the federal Deepfakes Accountability Act seeks to do the same.

While California may be leading the implementation of responsible AI governance strategies, ill-conceived laws, especially those that influence similar strategies at federal and international levels, will cause more harm than good. Take, for example, the Bots Disclosure Act: some commentators have decried a lack of clarity in the Act around what is and is not determined to be a bot, and around the roles and responsibilities of parties, especially platforms, to identify and stem the influence of malicious bots. This weakens its implementability and impact. Federal initiatives modeled after California's law will serve only to further erode accountability and public trust.

There is also the risk that beneficial legislation could become unhelpfully politicized. We are seeing increasing federal pushback against the California effect, as exemplified by recent efforts to revoke California's ability to implement stricter emission standards than federal guidelines. Federal initiatives may seek to curtail the state's impact on national and international standards for responsible AI governance. This is already being witnessed in federal efforts to preempt the CCPA.

California is quickly pushing forward AI legislation, ranging from oversight over discrimination and bias to protecting privacy and election integrity. California's progressive AI legislation has already had a marked influence on federal efforts, and will likely have global reach if California-based AI companies, including Google, Facebook, and OpenAI, alter their practices. The state has an opportunity and obligation to lead the way in establishing effective standards and oversight that ensure AI systems are developed and deployed in a safe and responsible manner. California can provide guidance on responsible AI governance for the rest of the country and the world, but caution must be taken to implement due diligence in identifying and mitigating any negative impacts before it's too late.


The views expressed in this article are those of the author alone and not the World Economic Forum.

More here:

Ethical AI: What can the world learn from California? - World Economic Forum


OpenAI's AI-powered robot learned how to solve a Rubik's cube one-handed – The Verge


Artificial intelligence research organization OpenAI has achieved a new milestone in its quest to build general purpose, self-learning robots. The group's robotics division says Dactyl, its humanoid robotic hand first developed last year, has learned to solve a Rubik's cube one-handed. OpenAI sees the feat as a leap forward both for the dexterity of robotic appendages and its own AI software, which allows Dactyl to learn new tasks using virtual simulations before it is presented with a real, physical challenge to overcome.

In a demonstration video showcasing Dactyl's new talent, we can see the robotic hand fumble its way toward a complete cube solve with clumsy yet accurate maneuvers. It takes many minutes, but Dactyl is eventually able to solve the puzzle. It's somewhat unsettling to see in action, if only because the movements look noticeably less fluid than human ones, and especially disjointed when compared to the blinding speed and raw dexterity on display when a human speedcuber solves the cube in a matter of seconds.

But for OpenAI, Dactyl's achievement brings it one step closer to a much sought-after goal for the broader AI and robotics industries: a robot that can learn to perform a variety of real-world tasks without training for months or years of real-world time, and without needing to be specifically programmed.

"Plenty of robots can solve Rubik's cubes very fast. The important difference between what they did there and what we're doing here is that those robots are very purpose-built," says Peter Welinder, a research scientist and robotics lead at OpenAI. "Obviously there's no way you can use the same robot or same approach to perform another task. The robotics team at OpenAI has very different ambitions. We're trying to build a general-purpose robot. Similar to how humans and how our human hands can do a lot of things, not just a specific task, we're trying to build something that is much more general in its scope."

Welinder is referencing a series of robots over the last few years that have pushed Rubik's cube solving far beyond the limitations of human hands and minds. In 2016, semiconductor maker Infineon developed a robot specifically to solve a Rubik's cube at superhuman speeds, and the bot managed to do so in under one second. That crushed the sub-five-second human world record at the time. Two years later, a machine developed by MIT solved a cube in less than 0.4 seconds. In late 2018, a Japanese YouTube channel called Human Controller even developed its own self-solving Rubik's cube using a 3D-printed core attached to programmable servo motors.

In other words, a robot built for one specific task and programmed to perform that task as efficiently as possible can typically best a human, and Rubik's cube solving is something software mastered long ago. So developing a robot to solve the cube, even a humanoid one, is not all that remarkable on its own, and less so at the sluggish speed Dactyl operates.

But OpenAI's Dactyl robot and the software that powers it are much different in design and purpose than a dedicated cube-solving machine. As Welinder says, OpenAI's ongoing robotics work is not aimed at achieving superior results in narrow tasks, as that only requires you to develop a better robot and program it accordingly. That can be done without modern artificial intelligence.

Instead, Dactyl is developed from the ground up as a self-learning robotic hand that approaches new tasks much like a human would. It's trained using software that tries, in a rudimentary way at the moment, to replicate the millions of years of evolution that help us learn to use our hands instinctively as children. That could one day, OpenAI hopes, help humanity develop the kinds of humanoid robots we know only from science fiction, robots that can safely operate in society without endangering us and perform a wide variety of tasks in environments as chaotic as city streets and factory floors.

To learn how to solve a Rubik's cube one-handed, OpenAI did not explicitly program Dactyl to solve the toy; free software on the internet can do that for you. It also chose not to program individual motions for the hand to perform, as it wanted the hand to discern those movements on its own. Instead, the robotics team gave the hand's underlying software the end goal of solving a scrambled cube and used modern AI, specifically a brand of incentive-based deep learning called reinforcement learning, to help it along the path toward figuring it out on its own. The same approach to training AI agents is how OpenAI developed its world-class Dota 2 bot.
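The incentive-based loop described above can be sketched in miniature. The toy below uses tabular Q-learning, a simple form of reinforcement learning, to teach an agent to reach a goal state from reward alone. It is purely illustrative and bears no resemblance in scale to OpenAI's neural-network training; the 5-state chain environment is an invented stand-in.

```python
import random

random.seed(0)  # deterministic for this illustration

# Tabular Q-learning on a 5-state chain: the agent receives a reward of 1
# only at the rightmost state, and must discover by trial and error that
# moving right is the best policy.
N_STATES = 5
ACTIONS = [-1, +1]            # step left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(200):          # 200 training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current knowledge, occasionally explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(q[(nxt, a2)] for a2 in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = nxt

# The learned greedy policy moves right (+1) from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES)}
print(policy)
```

The agent is never told how to reach the goal, only that reaching it is rewarded; everything else emerges from trial and error, which is the essence of the approach described above.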

But until recently, it's been much easier to train an AI agent to do something virtually (playing a computer game, for example) than to train it to perform a real-world task. That's because training software to do something in a virtual world can be sped up, so that the AI can spend the equivalent of tens of thousands of years training in just months of real-world time, thanks to thousands of high-end CPUs and ultra-powerful GPUs working in parallel.

Doing that same level of training while performing a physical task with a physical robot isn't feasible. That's why OpenAI is trying to pioneer new methods of robotic training using simulated environments in place of the real world, something the robotics industry has only barely experimented with. That way, the software can practice extensively at an accelerated pace across many different computers simultaneously, with the hope that it retains that knowledge when it begins controlling a real robot.
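Why simulation parallelizes so well is easy to sketch: independent simulated episodes can run on many workers at once and be aggregated centrally. The rollout below is a trivial stand-in (random numbers in place of physics) intended only to show the parallel structure, not any actual training setup.

```python
from multiprocessing import Pool
import random

def run_episode(seed):
    """One simulated episode; returns its total 'reward'. A random-number
    stand-in for a real physics rollout, used only to show the structure."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100))

if __name__ == "__main__":
    # Each worker simulates episodes independently; the experience is then
    # aggregated centrally. Real setups scale this to thousands of CPUs.
    with Pool(4) as pool:
        rewards = pool.map(run_episode, range(1000))
    print(f"collected {len(rewards)} episodes, "
          f"mean reward {sum(rewards) / len(rewards):.1f}")
```

Because each episode depends only on its seed, adding workers multiplies throughput almost linearly, which is what lets "thousands of years" of practice fit into days.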

Because of the training limitation and obvious safety concerns, robots used commercially today do not utilize AI and instead are programmed with very specific instructions. "The way it's been approached in the past is that you use very specialized algorithms to solve tasks, where you have an accurate model of both the robot and the environment in which you're operating," Welinder says. "For a factory robot, you have very accurate models of those and you know exactly the environment you're working on. You know exactly how it will be picking up the particular part."

This is also why current robots are far less versatile than humans. It requires large amounts of time, effort, and money to reprogram a robot that assembles, say, one specific part of an automobile or a computer component to do something else. Present a robot that hasn't been properly trained with even a simple task that involves any level of human dexterity or visual processing and it would fail miserably. With modern AI techniques, however, robots could be modeled like humans, so that they can use the same intuitive understanding of the world to do everything from opening doors to frying an egg. At least, that's the dream.

We're still decades away from that level of sophistication, and the leaps the AI community has made on the software side (like self-driving cars, machine translation, and image recognition) have not exactly translated to next-generation robots. Right now, OpenAI is just trying to mimic the complexity of one human body part and to get that robotic analog to operate more naturally.

That's why Dactyl is a 24-joint robotic hand modeled after a human hand, instead of the claw- or pincer-style robotic grippers you see in factories. And for the software that powers Dactyl to learn how to utilize all of those joints in a way a human would, OpenAI put it through thousands of years of training in simulation before trying the physical cube solve.

"If you're training things on the real-world robot, obviously whatever you're learning is working on what you actually want to deploy your algorithm on. In that way, it's much simpler. But algorithms today need a lot of data. To train a real-world robot to do anything complex, you need many years of experience," Welinder says. "Even for a human, it takes a couple of years, and humans have millions of years of evolution to have the learning capabilities to operate a hand."

In a simulation, however, Welinder says training can be accelerated, just like with game-playing and other tasks popular as AI benchmarks. "This takes on the order of thousands of years to train the algorithm. But this only takes a few days because we can parallelize the training. You also don't have to worry about the robots breaking or hurting someone as you're training these algorithms," he adds. Yet researchers have in the past run into considerable trouble trying to get virtual training to work on physical robots. OpenAI says it is among the first organizations to really see progress in this regard.

When it was given a real cube, Dactyl put its training to use and solved it on its own, and it did so under a variety of conditions it had never been explicitly trained for. That includes solving the cube one-handed with a glove on, with two of its fingers taped together, and while OpenAI members continuously interfered with it by poking it with other objects and showering it with bubbles and pieces of confetti-like paper.

"We found that in all of those perturbations, the robot was still able to successfully turn the Rubik's cube. But it did not go through that in training," says Matthias Plappert, who leads OpenAI's robotics team alongside Welinder. "The robustness that we found when we tried this on the physical robot was surprising to us."

That's why OpenAI sees Dactyl's newly acquired skill as equally important for the advancement of both robotic hardware and AI training. Even the most advanced robots in the world right now, like the humanoid and dog-like bots developed by industry leader Boston Dynamics, cannot operate autonomously; they require extensive task-specific programming and frequent human intervention to carry out even basic actions.

OpenAI says Dactyl is a small but vital step toward the kind of robots that might one day perform manual labor or household tasks and even work alongside humans, instead of in closed-off environments, without any explicit programming governing their actions.

In that vision for the future, the ability for robots to learn new tasks and adapt to changing environments will be as much about the flexibility of the AI as it is about the robustness of the physical machine. These methods are really starting to demonstrate that these are the solutions to handling all the inherent complication and the messiness of the physical world we live in, Plappert says.

Read more:

OpenAI's AI-powered robot learned how to solve a Rubik's cube one-handed - The Verge


Vehicles passing through customs gate to be scanned via AI – Daily Sabah


Turkey is about to add a new addition to its high technology investments and projects in order to increase efficiency in preventing smuggling and reducing formalities at customs.

A total of 68 X-Ray Vehicle and Container Scanning systems, which provide detailed analysis of all kinds of vehicles and containers entering and exiting from customs gates without requiring physical intervention, were installed in all customs gates and ports with high trade volume, Trade Minister Ruhsar Pekcan said.

"The Scanning Network Project will be completely developed with our local and national resources and will automatically identify illegal crime and threat elements in vehicle or cargo by using artificial intelligence (AI) technologies," Pekcan told Anadolu Agency (AA) in an interview yesterday.

These high-tech systems produce X-ray imagery of vehicles and containers, ensuring that customs controls that can otherwise last for hours are completed quickly and reliably within minutes, she said.

She stressed that the ministry has established technical infrastructure to carry out customs controls in a manner that does not break the pace of international trade and to combat illegal trade in the most effective way by keeping public health and safety at the forefront.

Indicating that these devices are also an important instrument to combat smuggling and all the elements of illegal crime in national security, the minister continued, "The Trade Ministry is closely following the developments in detection technologies around the world and rapidly adding new equipment to its inventory that will contribute to increasing efficiency in the fight against smuggling."

According to Pekcan, the Scanning Network Project will be completely developed with Turkey's local and national resources by the Scientific and Technological Research Council of Turkey (TÜBİTAK) and will be implemented by the General Directorate of Customs Protection.

By using artificial intelligence and machine learning technologies, the system will automatically identify illegal criminal and threatening elements with certain features in vehicles or cargo. Thus, the operator's margin of error during analysis and manual review times will be reduced to a minimum.

Pekcan noted that X-ray scanning systems will work in an integrated structure under the project and that X-ray images taken at customs gates will be collected in the Command and Control Center of the ministry, and will be dispatched, managed and analyzed from a single center.

Emphasizing that this will be particularly useful in preventing unregistered entry of goods into the country, Pekcan added, "In this way, for instance, X-ray images of a transit vehicle scanned as it enters our country from Habur [customs gate] can be examined by all relevant units and when the vehicle is scanned as it travels abroad from Kapıkule [customs gate], the images can be compared. In this way, attempts to leave the transit goods inside our country without paying taxes will be determined and tax loss will be prevented to a great extent."

The minister further stated that the data and information generated in the X-ray systems will be shared with customs authorities of neighboring countries, preventing the same vehicle from being scanned again and again at border crossings, and a rapid logistics transit corridor will be established.

Pekcan highlighted that they aim to set an example to all the customs authorities in the world by implementing another modern customs application with the project being developed by TÜBİTAK.

Follow this link:

Vehicles passing through customs gate to be scanned via AI - Daily Sabah


AI Weekly: Why Google still needs the cloud even with on-device ML – VentureBeat


Google held its big annual hardware event Tuesday in New York to unveil the Pixel 4, Nest Mini, Pixelbook Go, Nest Wifi, and Pixel Buds. It was mostly predictable, because details about virtually every piece of hardware the company revealed at the event were leaked months in advance. But if Google's biggest hardware event of the year had an overarching theme, it was the many applications of on-device machine learning. Most of the hardware Google introduced includes a dedicated chip for running AI, continuing an industry-wide trend to power services consumers will no doubt enjoy, though there can be privacy implications too.

The new Nest Mini's on-device machine learning recognizes your most commonly used voice commands to quicken Google Assistant response time compared to the first-generation Home Mini.

In Pixel Buds, due out next year, machine learning helps recognize ambient sound levels and increase or decrease sound the same way your smartphone dims or brightens when it's in sunlight or shade.

Google Assistant on Pixel 4 is faster with an on-device language model. Pixel 4's Neural Core will power facial recognition for payment verification, Face Unlock, and Frequent Faces, which is AI that trains your camera to recognize the faces of people you photograph often and then coaches you on how to take the best picture.

Traditionally, edge deployment of on-device machine learning means an AI assistant can function without needing to maintain a connection to the internet, an approach that can avoid sharing user data online or collecting the kind of voice recordings that became one of the most controversial privacy concerns for the better part of 2019.

Due to privacy concerns that stem from the routine recording of users' voices, phrases like "on-device machine learning" and "edge computing" have become synonymous with privacy. That's why a handful of edge assistants like Snips have made privacy a selling point.

For Google's many AI services, some, like speech recognition powered by the Neural Core processor, can operate entirely on-device, whereas others, like the new Google Assistant, require connecting to the cloud and sending your data back to the Google mothership.

Today, on-device AI for Google hardware is primarily meant to provide speed gains, Google Nest product manager Chris Chan told VentureBeat.

Tasks like speech recognition and natural language processing can be completed on-device, but they still need the cloud to deliver personalization and stitch together an ecosystem of smart home devices and streaming services like YouTube or Spotify.

"It's a hybrid model," Chan said. "If you focus too much on commands existing only on that single device, the user then doesn't benefit from the context of that usage to even other devices, let alone say Nest or Google services when they're on the go, when they're in the car, and other environments."

"In the case of on-device ML for Nest Mini, you still need an internet connection to complete a command," he said.
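A minimal sketch of that hybrid split, with hypothetical command names and structure (not Google's actual architecture): a small on-device model recognizes a cached set of common commands for speed, but fulfilment still round-trips to the cloud.

```python
# On-device recognition handles a cached set of common commands for speed;
# everything else, and all fulfilment, still goes through the cloud.
LOCAL_COMMANDS = {"lights on", "lights off", "volume up"}

def handle(utterance, cloud_available=True):
    if utterance in LOCAL_COMMANDS:
        source = "on-device"          # fast local recognition
    else:
        if not cloud_available:
            return ("error", "cloud required for full recognition")
        source = "cloud"              # full cloud speech model
    # Even locally recognized commands are fulfilled via the cloud, so
    # context can follow the user across devices and services.
    if not cloud_available:
        return ("error", "cloud required for fulfilment")
    return (source, f"executed: {utterance}")

print(handle("lights on"))                         # ('on-device', 'executed: lights on')
print(handle("play jazz on Spotify"))              # ('cloud', 'executed: play jazz on Spotify')
print(handle("lights on", cloud_available=False))  # ('error', 'cloud required for fulfilment')
```

The last call is the crux of the hybrid model: even a command the device can recognize locally still fails offline, because fulfilment lives in the cloud.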

"There are other architectures we could definitely explore over time that might be more distributed or based in the home, but we're not there yet," Chan said.

The hybrid approach, as opposed to edge computing that can operate offline, raises a question: the on-device hardware is powerful, so why not go all the way with a fully offline Google Assistant?

The answer may lie in that controversial collection of people's voice data.

Leaders of the global smart speaker market and AI assistant market have moved in unison to address people's privacy concerns.

In response to controversy over humans reviewing voice recordings from popular digital assistants like Siri, Cortana, Google Assistant, and Alexa, Google and Amazon both introduced voice commands to allow people to delete voice recordings every day. They also extended to users the ability to automatically remove voice data every three months or every 18 months.
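The retention options described above amount to purging recordings older than the user's chosen window. The sketch below is a hypothetical illustration, not Google's or Amazon's implementation, with months approximated as 30 days.

```python
from datetime import datetime, timedelta

def purge_old_recordings(recordings, retention_months, now):
    """Keep only recordings newer than the retention window."""
    cutoff = now - timedelta(days=30 * retention_months)  # approx. months
    return [r for r in recordings if r["timestamp"] >= cutoff]

now = datetime(2019, 10, 20)
recordings = [
    {"id": 1, "timestamp": datetime(2019, 9, 1)},   # weeks old
    {"id": 2, "timestamp": datetime(2019, 1, 5)},   # ~9 months old
    {"id": 3, "timestamp": datetime(2017, 6, 1)},   # ~28 months old
]
print([r["id"] for r in purge_old_recordings(recordings, 3, now)])   # [1]
print([r["id"] for r in purge_old_recordings(recordings, 18, now)])  # [1, 2]
```

The two calls mirror the two offered windows: a three-month policy keeps only one season of data, while an 18-month policy preserves multiple seasons, which is exactly the trade-off the companies cite below.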

So why make it easy to delete data but choose three months or 18 months?

When VentureBeat asked Alexa chief scientist Rohit Prasad this question, he said that Amazon wants to continue to track trends and follow seasonal changes in queries, and there's still more work to do to improve Alexa's conversational AI models.

A Google spokesperson also said the company keeps data to understand seasonal or multi-season trends, but that this could be revisited in the future.

"In our research, we found that these time frames were preferred by users as they're inclusive of data from an entire season (a three-month period) or multiple seasons (18 months)," the spokesperson said.

Chan said Google users may find more privacy benefits from on-device machine learning in the future.

"It's our hope that over the coming years things go entirely local, because then you're going to get a massive speed benefit, but we're not there yet," he said.

As conversational computing becomes a bigger part of people's lives, why and when tech giants connect assistants to the internet are likely to play a role in shaping people's perceptions of edge computing and privacy with AI. But if the competition between tech giants ever becomes about making smart home usage more private to meet consumer demand, then consumers can win.

As always, if you come across a story that merits coverage, send news tips to Khari Johnson and Kyle Wiggers, and be sure to bookmark our AI Channel and subscribe to the AI Weekly newsletter.

Thanks for reading,

Khari Johnson

Senior AI staff writer

Originally posted here:

AI Weekly: Why Google still needs the cloud even with on-device ML - VentureBeat


Hyundai develops AI-based self-driving tech – The Investor


Hyundai Motor Group said Oct. 21 it has developed an artificial intelligence-based autonomous driving technology and will apply it in its models.

The South Korean automaker has developed the smart cruise control-machine learning (SCC-ML) technology, in which the vehicle analyzes the driver's driving patterns and allows partial autonomous driving under the smart cruise control function, it said in a statement.
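What "analyzing the driver's driving patterns" could look like in its simplest form: estimating the driver's preferred speed and following distance (headway) from logged manual driving, then using those as cruise-control targets. Hyundai has not published the SCC-ML algorithm, so the function, log format, and parameters below are hypothetical.

```python
def learn_preferences(driving_log):
    """driving_log: (speed_kmh, headway_seconds) samples recorded while
    the human drives manually; returns targets for cruise control."""
    n = len(driving_log)
    return {
        "target_speed": sum(s for s, _ in driving_log) / n,
        "target_headway": sum(h for _, h in driving_log) / n,
    }

# Four logged samples of manual driving on a highway.
log = [(102, 1.8), (98, 2.1), (105, 1.9), (100, 2.0)]
prefs = learn_preferences(log)
print(prefs)  # target_speed 101.25, target_headway about 1.95
```

A production system would of course condition on context (road type, traffic, weather) rather than take a flat average, but the idea is the same: the car adopts settings the driver would have chosen, instead of requiring them to be set manually.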

It is the first time a carmaker has developed an AI-based self-driving technology in the world's automobile industry, Hyundai said.

To use the SCC function, which is a core technology of the group's advanced driver assistance system, the driver has to set the driving speed, the distance from other vehicles, and other conditions, it said.

The SCC-ML technology enables Level 2.5 autonomous driving in a vehicle, the statement said.

Hyundai Motor and its affiliate Kia Motors said they plan to gradually apply the technology to their new models.

By Ram Garikipati and newswires (ram@heraldcorp.com)

Read the original post:

Hyundai develops AI-based self-driving tech - The Investor

