AI is helping drone swarms fly in unknown locations – The Burn-In

There's a good chance you've seen a drone swarm. Maybe not in person, but probably televised during a New Year's celebration. A drone swarm occurs when a large number of the flying robots take to the skies in sync.

It isn't a coincidence that they almost always fly in open outdoor areas. For these robotic fliers, it can be difficult to navigate in tight spaces without running into each other or environmental obstacles.

Now, a new machine learning algorithm is helping solve that challenge. Developed by researchers at Caltech, the Global-to-Local Safe Autonomy Synthesis (GLAS) artificial intelligence (AI) allows drone swarms to navigate crowded, unmapped environments.

Typical drone swarm navigation systems work by relying on existing maps of an area. For instance, a group of drones flying to light up the sky for New Year's would have a dedicated map of the area above Times Square (or wherever it was flying). The system also relies on knowing the route of every drone in the swarm, which helps each unit stay on track and avoid collisions.

The new GLAS program works differently. Thanks to its machine learning capabilities, it lets drones navigate an unknown area on their own while simultaneously communicating with other drones in the swarm. Using this decentralized model makes it possible for the drones to improvise.

It also makes it easier to scale the drone swarm. Since the computing power is spread across many robots, adding more of them is actually helpful.
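
The article doesn't give implementation details, but the decentralized pattern it describes can be sketched roughly as follows: every drone runs the same learned policy over its own state plus whatever it can sense of nearby drones and obstacles, so there is no central planner to bottleneck the swarm. The sensing radius, state layout, and `policy` callable below are illustrative assumptions, not details of GLAS itself.

```python
import numpy as np

SENSE_RADIUS = 2.0  # assumed sensing range in metres (illustrative)

def local_observation(own_state, neighbor_states, obstacle_points):
    """Build the input one drone feeds to its own copy of the policy:
    its state plus only the neighbours and obstacles it can actually sense."""
    near_neighbors = [n for n in neighbor_states
                      if np.linalg.norm(n[:2] - own_state[:2]) < SENSE_RADIUS]
    near_obstacles = [o for o in obstacle_points
                      if np.linalg.norm(o - own_state[:2]) < SENSE_RADIUS]
    return own_state, near_neighbors, near_obstacles

def decentralized_step(states, goals, obstacle_points, policy, dt=0.1):
    """One control step: every drone applies the same learned policy to its
    own local observation. There is no central planner and no global map,
    so adding drones adds on-board compute rather than a bottleneck."""
    next_states = []
    for i, state in enumerate(states):
        others = states[:i] + states[i + 1:]
        obs = local_observation(state, others, obstacle_points)
        action = policy(obs, goals[i])            # e.g. a small trained neural net
        next_states.append(state + dt * action)   # simple single-integrator update
    return next_states
```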

While the main AI helps the drones navigate their environment, a secondary tracking controller called Neural-Swarm helps them compensate for aerodynamic interactions. This could be something like the downward force of air coming from a drone flying overhead. The system is more sophisticated than many controllers available today, since they don't account for aerodynamics.
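
Again, this is only a sketch of the general idea, a conventional tracking controller augmented with a learned correction for aerodynamic effects such as downwash, not Neural-Swarm's actual design; the gains, mass, and `learned_downwash_model` interface are assumptions for illustration.

```python
import numpy as np

def nominal_thrust(position_error, velocity_error, kp=6.0, kd=3.5, mass=0.034, g=9.81):
    """Conventional PD tracking controller for vertical thrust.
    Gains and mass are illustrative (roughly a palm-sized quadrotor)."""
    return mass * (g + kp * position_error + kd * velocity_error)

def corrected_thrust(position_error, velocity_error,
                     neighbor_relative_positions, learned_downwash_model):
    """Augment the nominal controller with a learned estimate of the aerodynamic
    force induced by nearby drones (for example, downwash from a vehicle overhead).
    `learned_downwash_model` is an assumed callable mapping relative neighbour
    positions to a predicted disturbance force; subtracting that prediction
    cancels a disturbance the nominal controller cannot see."""
    disturbance = learned_downwash_model(np.asarray(neighbor_relative_positions))
    return nominal_thrust(position_error, velocity_error) - disturbance
```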

While drone swarms are almost exclusively used today for entertainment, that will change in the future. Drones are simply too helpful and too versatile to only be used for light shows. As such, the new swarm monitoring technology has plenty of applications.

While light shows will certainly benefit from it, other areas have even more to gain.

Search and rescue is a field that would certainly see improvements. Equipped with the new software, first responders could deploy a swarm of drones to quickly and effectively cover an area. This is only possible because the GLAS program allows them to operate without relying on a map of a predetermined area.

Meanwhile, the tech could be helpful in areas that don't involve flying. For example, it could be integrated into self-driving cars to help avoid crashes and traffic jams. If every car included such technology, they would be able to autonomously interact with each other and adjust accordingly.

Despite these exciting applications, it will probably be several years before systems like GLAS are integrated into real-world drone swarms. Much more testing is needed to ensure it is reliable. Still, thanks to innovations like this, drone swarms will one day be commonplace.

Continued here:

AI is helping drone swarms fly in unknown locations - The Burn-In

Should Human Perception and Artificial Intelligence be Compared? – Analytics Insight

Machine learning-fuelled artificial intelligence will match and surpass human capacities in the areas of computer vision and speech recognition within five to ten years, says Facebook CEO Mark Zuckerberg.

Most of the huge web companies, like Facebook (NASDAQ: FB), utilize machine learning to exploit their huge data sets and deliver better services to their clients. Algorithms work in the background at Facebook to do things like recommend new connections to users, surface content that matches a user's interests, and block spam.

However, the organization is beginning to utilize machine learning in more advanced ways, for example, facial recognition on pictures posted to the site. After identifying an individual in a photograph, the new Moments application can even suggest that a user share the photo with that person.

In a recent report, a group of researchers from different German companies and universities highlighted the difficulties of assessing the performance of deep learning in processing visual data. In their paper, titled The Notorious Difficulty of Comparing Human and Machine Perception, the researchers highlight the issues in current methods that compare deep neural networks and the human vision system.

In their study, the researchers conducted a series of experiments that dig beneath the surface of deep learning results and compare them with the operations of the human visual system. Their findings are a reminder to be cautious when comparing AI with people, regardless of whether it shows equivalent or better performance on a similar task.

The vision of making machines that can think and act like people has moved from science fiction toward real-world fact. We have long endeavored to build intelligence into machines to ease our work. There are bots, humanoids, robots, and digital assistants that either outperform people or collaborate with us in many ways. These AI-driven applications offer higher speed of execution and greater operational capacity and precision, and they are especially valuable for dull and monotonous jobs, compared with humans.

Human intelligence, by contrast, rests on adaptive learning and experience. It doesn't generally rely on pre-fed data the way AI does. Human memory, its computing power, and the human body as a vessel may seem unremarkable next to a machine's hardware and software infrastructure. Yet the depth and layering of our minds are far more complex and refined, and machines still cannot match them in the near future.

Well, AI is ending up being an important tool, and an intelligent workflow will be the labor-saving norm within just a few years, said Scott Robinson, a SharePoint and business intelligence expert based in Louisville, Ky. In any case, business processes include smart ideas and intelligent behavior. Artificial intelligence is extraordinary at replicating intelligent behavior; however, intelligent thought is another issue. We don't completely understand how intelligent human thoughts arise, so we're not going to build machines that can have them anytime soon.

Deep learning (DL), a subset of AI, utilizes the idea of neural networks, which are loosely inspired by the human nervous system and brain. Our intelligence lies in adaptive learning and in knowing how to apply information in real-world situations. In deep learning, we mimic the human brain's ability to learn in stages, tackling complex problems by breaking them into levels of information. Ever wonder how you read a long novel years ago and can still recall your favourite character and its famous quotes?

In the long quest to reconstruct human perception, the field that has become known as computer vision, deep learning has so far yielded the best results. Convolutional neural networks (CNNs), an architecture regularly used in computer vision deep learning algorithms, are achieving tasks that were extremely difficult with traditional software.

Even so, comparing neural networks to human perception remains a challenge, partly because we still have much to learn about the human vision system and the human brain as a whole. The complexity of deep learning frameworks compounds the problem: deep neural networks work in convoluted ways that frequently confound their own makers.

Given the current state of data and AI progress, language processing, vision, image processing, and common sense remain a challenge for machines and require human intervention. Since AI is still maturing, the future lies in how well we govern AI applications so that they uphold human values and safety measures. As Nick Burns, a SQL Services data scientist, explained: regardless of how good your models are, they are only as good as your data.

As our AI systems become more intricate, we will have to develop more sophisticated techniques to test them. Past work in the field shows that many of the popular benchmarks used to measure the accuracy of computer vision systems are misleading. The work by the German researchers is one of numerous efforts to evaluate artificial intelligence and better measure the differences between AI and human intelligence. What's more, they reach conclusions that can give direction to future AI research.

The general challenge in comparison studies between humans and machines is the strong internal human interpretation bias. Proper analysis tools and extensive cross-checks, for example, variations in the network architecture, alignment of experimental procedures, generalization tests, adversarial examples, and tests with constrained networks, help to justify the interpretation of findings and put this inner bias into perspective. All things considered, care must be taken not to impose our own systematic bias when comparing human and machine perception.

Excerpt from:

Should Human Perception and Artificial Intelligence be Compared? - Analytics Insight

AI is still several breakthroughs away from reality – VentureBeat

While the growth of deep neural networks has helped propel the field of machine learning to new heights, there's still a long road ahead when it comes to creating artificial intelligence. That's the message from a panel of leading machine learning and AI experts who spoke at the Association for Computing Machinery's Turing Award Celebration conference in San Francisco today.

We're still a long way off from human-level AI, according to Michael I. Jordan, a professor of computer science at the University of California, Berkeley. He said that applications using neural nets are essentially faking true intelligence but that their current state allows for interesting development.

"Some of these domains where we're faking intelligence with neural nets, we're faking it well enough that you can build a company around it," Jordan said. "So that's interesting, but somehow not intellectually satisfying."

Those comments come at a time of increased hype for deep learning and artificial intelligence in general, driven by interest from major technology companies like Google, Facebook, Microsoft, and Amazon.

Fei-Fei Li, who works as the chief scientist for Google Cloud, said that she sees this as the end of the beginning for AI, but that there are still plenty of hurdles ahead. She identified several key areas where current systems fall short, including a lack of contextual reasoning, a lack of contextual awareness of their environment, and a lack of integrated understanding and learning.

"This kind of euphoria of AI has taken over, and [the idea that] we've solved most of the problem is not true," she said.

One pressing issue identified by Raquel Urtasun, who leads Uber's self-driving car efforts in Canada, is that the algorithms used today don't model uncertainty very well, which can prove problematic.

"So they will tell you that there is a car there, for example, with 99 percent probability, and they will tell you the same thing whether they are wrong or not," she said. "And most of the time they are right, but when they are wrong, this is a real issue for things like self-driving [cars]."
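
Urtasun's complaint is essentially about calibration: the confidence a model reports should track how often it is actually right. A minimal way to check for the failure mode she describes (details assumed here, not drawn from Uber's stack) is to bucket predictions by reported confidence and compare each bucket's average confidence with its observed accuracy:

```python
import numpy as np

def reliability_table(confidences, correct, n_bins=10):
    """Bucket predictions by the probability the model reported for them and
    compare with how often each bucket was actually right. A well-calibrated
    detector has avg_confidence close to accuracy in every bucket; the failure
    mode described above shows up as high confidence paired with low accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        mask = (confidences >= lo) & ((confidences < hi) | (i == n_bins - 1))
        if mask.any():
            rows.append((lo, hi, confidences[mask].mean(),
                         correct[mask].mean(), int(mask.sum())))
    return rows  # each row: (bin_low, bin_high, avg_confidence, accuracy, count)
```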

The panelists did concur that an artificial intelligence that could match a human is possible, however.

"I think we have at least half a dozen major breakthroughs to go before we get close to human-level AI," said Stuart Russell, a professor of computer science and engineering at the University of California, Berkeley. "But there are very many very brilliant people working on it, and I am pretty sure that those breakthroughs are going to happen."

More here:

AI is still several breakthroughs away from reality - VentureBeat

Huawei's new Vision TV offers 4K picture and a long list of AI powers – The Verge

Huawei announced its own 4K television, the Huawei Vision, during the Mate 30 Pro event today. Like the Honor Vision and Vision Pro TVs that were announced back in August, Huawei's self-branded TV runs the company's brand-new Harmony OS software as its foundation.

Huawei will offer 65-inch and 75-inch models to start, with 55-inch and 85-inch models coming later. The Huawei TV features quantum dot color, thin metal bezels, and a pop-up camera for video conferencing that lowers into the television when not in use. On TVs, Harmony OS is able to serve as a hub for smart home devices that support the HiLink platform.

Huawei is also touting the TV's AI capabilities, likening it to a smart speaker with a big screen. The TV supports voice commands and includes facial recognition and tracking capabilities. Apparently, there's some AI mode that helps protect the eyes of young viewers, presumably by filtering blue light. The Vision also allows one-hop projection from a Huawei smartphone. The TV's remote has a touchpad and charges over USB-C.

Harmony OS is Huawei's attempt to branch out with its own operating system after the US government forced the move by barring Google from continuing to license Android to the China-based tech giant.

Huawei didn't disclose a release date or pricing information for the Vision TV. To consumers outside of China, Huawei's TV efforts mean very little. I doubt The Verge will be reviewing this set anytime soon, nor would I expect it to outperform the best TVs on the market right now. But the underlying importance here is that these products represent a new path for Huawei as it remains entangled in a bitter dispute with US intelligence agencies and lawmakers who continue to raise the threat of Huawei serving as a backdoor for the Chinese government. Huawei has insisted since the beginning that it would never go along with any surveillance.

See the original post here:

Huawei's new Vision TV offers 4K picture and a long list of AI powers - The Verge

The real test of an AI machine is when it can admit to not knowing something – The Guardian

On Wednesday the European Commission launched a blizzard of proposals and policy papers under the general umbrella of shaping Europe's digital future. The documents released included: a report on the safety and liability implications of artificial intelligence, the internet of things and robotics; a paper outlining the EU's strategy for data; and a white paper on excellence and trust in artificial intelligence. In their general tenor, the documents evoke the blend of technocracy, democratic piety and ambitiousness that is the hallmark of EU communications. That said, it is also the case that in terms of doing anything to get tech companies under some kind of control, the European Commission is the only game in town.

In a nice coincidence, the policy blitz came exactly 24 hours after Mark Zuckerberg, supreme leader of Facebook, accompanied by his bag-carrier (a guy called Nicholas Clegg who looked vaguely familiar), had called on the commission graciously to explain to its officials the correct way to regulate tech companies. The officials, in turn, thanked him and courteously explained that they had their own ideas, and escorted him back to his hot-air balloon.

For this columnist, the most interesting document is the white paper on AI. It declares that the commission supports a regulatory and investment-oriented approach with two objectives: to promote the uptake of AI and to address the risks associated with certain uses of the technology. The document then sets out policy options on how these objectives might be achieved.

Once you get beyond the mandatory euro-boosting rhetoric about how the EU's technological and industrial strengths, high-quality digital infrastructure and regulatory framework based on its fundamental values will enable Europe to become a global leader in innovation in the data economy and its applications, the white paper seems quite sensible. But like all documents dealing with how actually to deal with AI, it falls back on the conventional bromides about human agency and oversight, privacy and governance, diversity, non-discrimination and fairness, societal wellbeing, accountability and that old favourite, transparency. The only discernible omissions are motherhood and apple pie.

But this is par for the course with AI at the moment: the discourse is invariably three parts generalities, two parts virtue-signalling, leavened with a smattering of pious hopes. It's got to the point where one longs for some plain speaking and common sense.

And, as luck would have it, along it comes in the shape of Sir David Spiegelhalter, an eminent Cambridge statistician and former president of the Royal Statistical Society. He has spent his life trying to teach people how to understand statistical reasoning, and last month published a really helpful article in the Harvard Data Science Review on the question "Should we trust algorithms?"

Underpinning Spiegelhalter's approach is an insight from the philosopher Onora O'Neill: that it's trustworthiness rather than trust we should be focusing on, because trust is such a nebulous, elusive and unsatisfactory concept. (In that respect, it's not unlike privacy.) Seeking more trust, O'Neill observed in a famous TED talk, is not an intelligent aim in this life: intelligently placed and intelligently refused trust is the proper aim.

Applying this idea, Spiegelhalter argues that, when confronted with an algorithm, we should expect trustworthy claims made both about the system (what the developers say it can do, and how it has been evaluated) and by the system (what it concludes about a specific case).

From this, he suggests a set of seven questions one should ask about any algorithm. 1. Is it any good when tried in new parts of the real world? 2. Would something simpler, and more transparent and robust, be just as good? 3. Could I explain how it works (in general) to anyone who is interested? 4. Could I explain to an individual how it reached its conclusion in their particular case? 5. Does it know when it is on shaky ground, and can it acknowledge uncertainty? 6. Do people use it appropriately, with the right level of scepticism? 7. Does it actually help in practice?

This is a great list, in my humble opinion. Most of the most egregiously deficient machine-learning systems we have encountered so far would fail on some or all of those grounds. Spiegelhalter's questions are specific rather than couched in generalities such as transparency or explainability. And best of all, they are intelligible to normal human beings rather than the geeks who design algorithms.

And the most important question in that list? Spiegelhalter says it is number five. A machine should know when it doesn't know and admit it. Sadly, that's a test that many humans also fail.
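
Question five is also the one most directly testable in code. A rough sketch of the idea, not anything Spiegelhalter prescribes, and with an assumed scikit-learn-style model interface and an arbitrary threshold, is to have the system abstain and defer to a human whenever its own reported confidence is too low:

```python
def predict_or_abstain(model, x, threshold=0.9):
    """Answer only when confident; otherwise admit uncertainty and defer.
    `model.predict_proba` is an assumed scikit-learn-style interface that
    returns class probabilities, and the 0.9 threshold is arbitrary."""
    probs = model.predict_proba([x])[0]
    best = int(probs.argmax())
    if probs[best] < threshold:
        return {"decision": None,
                "confidence": float(probs[best]),
                "note": "on shaky ground - refer to a human reviewer"}
    return {"decision": best, "confidence": float(probs[best])}
```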

BBC v Netflix: a prequel. How the BBC's Netflix-killing plan was snuffed by myopic regulation: a sobering piece in Wired about how the BBC and other UK broadcasters came up with the idea of a Netflix-like service when the streaming giant was still shipping DVDs, but were thwarted by UK regulators. Regulation is hard unless you know the future.

An open and shut case. The messy, secretive reality behind OpenAI's bid to save the world: great investigative reporting by MIT's Technology Review.

How the peace was lost. The Last Days at Yalta: a gripping reconstruction on Literary Hub by the historian Diana Preston of the conference that launched the cold war.

Excerpt from:

The real test of an AI machine is when it can admit to not knowing something - The Guardian

Author, Digital Book World, And The World Of Voice And AI – Digital Book World

The news that Apple is formally discontinuing iBooks Author is no surprise to those following the software, but it certainly does mark the end of an era, and the beginning of a new one.

On the heels of selling a previous business, the year 2014 saw me learning new skills and picking up the trail of new interests. Publishing had always been one of those, as I viewed with fascination the fact that anyone could publish a book, for the first time in human history, with mitigated or outright removed gatekeepers that previously would've blocked their way.

I was blown away by iBooks Author, a relatively simple and yet shockingly powerful piece of software that was one of Steve Jobs' final legacies before his passing in late 2011. He had been very interested and invested in the software, resulting in Apple rolling it out in grand style in early 2012, which in many ways served as a tribute to him.

My company, Score Publishing, had gotten involved in interactive content creation with the software and began hosting the iBooks Author Conference from 2015 through 2017. Our keynotes each year were, respectively, The Mozart Project's team, Southwest Airlines, and NASA. Groups no one would have thought were using iBooks Author, even Apple's own personnel.

My frustration at not seeing interactive content creation - books with multimedia or fancy features embedded - discussed with the same level of respect by the publishing industry played a role in our acquisition of Digital Book World from a very troubled F&W Media in late 2017. The iBooks Author Conference had grown to the extent where it was time for us to produce a more comprehensive publishing event. As it turned out, acquisition was the easiest route to take for this.

Since then, DBW has been radically revitalized, selling out last year, and is on the verge of sellout again (albeit in our limited capacity format), with only a few passes remaining.

My disappointment extended to the fact that iBooks Author, by virtue of being free and deeply connected to Apple's iBooks Store where content could then be monetized, represented a potential means of authors and publishers of all types being able to generate wealth. Unfortunately, Apple's disinterest in continuing down the rabbithole Steve Jobs had started down short-circuited what could have been. Books created in iBooks Author were very hard to find, and the lack of support from Apple meant a rising tide of bugs crept into the software that slowly but surely drove people away.

iBooks Author did do one great thing for us, though - it led me and my team into the realm of voice technology and conversational AI. Not surprisingly, this is yet another area where Apple has squandered significant potential over the last decade.

Companies we worked with to create interactive content were vanguard enough to also have their eye on voice, and in late 2016 as Alexa surfaced, we began getting asked about what Amazon was doing, and what all this "voice stuff" was about.

I had no clue. But I started learning.

Today, anyone playing a relevant role in the growth of voice technology and conversational AI will have a job for the rest of their life. The skills are that important, and they cater to writers and authors and those who possess literary and language skills, rather than strictly to those able to code.

Further, and the publishing industry hasn't fully awoken to this yet, but conversational AI is about to utterly re-write the organizational chart for publishing houses. Operational roles, including production, and marketing roles, specifically, will change dramatically as voice technology and AI transform what publishers will need and what they won't need. Digital Book World is on the forefront of this evolution.

Many of the people I met through our work with iBooks Author are now instrumental players in why Digital Book World has grown into the conference so many now call essential. These are the true innovators, the change agents, making waves through "the wide world of publishing" across the world.

Some in publishing talk about how "interactive content never really took off," which wholly ignores the rise of voice tech, where interactive content now lives. It still has the same challenges iBooks Author books had, but it has a much stronger galvanizing force behind it, which is a motivated Amazon, as well as the reality that voice is part of who we are as human beings, beginning from the mother's womb.

And for all the conversation about privacy, and the controversial role Amazon has played in shaping the publishing industry for better or worse, the fact is that enterprising publishers and authors who want to lift themselves up - those that would've found their way to iBooks Author five years ago - are now gravitating to the Alexa ecosystem, where free tools and a raft of support await them.

Let's hope it lasts this time.

View post:

Author, Digital Book World, And The World Of Voice And AI - Digital Book World

AI summit aims to help world’s poorest – Nature.com

AI algorithms can compare night-time and day-time satellite images to measure levels of poverty. (Image: Joshua Stevens/NASA Goddard SFC)

In the world's wealthiest neighbourhoods, artificial intelligence (AI) systems are starting to steer self-driving cars down the streets, and homeowners are giving orders to their smart voice-controlled speakers. But the AI revolution has yet to offer much help to the 3 billion people globally who live in poverty.

That discrepancy lies at the heart of a meeting in Geneva, Switzerland, on 7-9 June, grandly titled the AI for Good Global Summit. The meeting of United Nations agencies, AI experts, policymakers and industrialists will discuss how AI and robotics might be guided to address humanity's most enduring problems, such as poverty, malnutrition and inequality.

Development agencies are buzzing with ideas, although only a few have reached the stage of pilot experiments. But scientists caution that the rise of AI will also bring societal disruption that will be hard to foresee or manage, and that could harm the world's most disadvantaged. "Developing countries may have the most to gain from AI, but also the most to lose if we are not vigilant," says Chaesub Lee, director of the Telecommunication Standardization Bureau of the UN's International Telecommunication Union, which is organizing the meeting.

Many researchers expect that AI systems will help to assess and track measures to alleviate poverty. At present, there are few accurate data on where the poorest people live because household surveys are infrequently carried out in poor or remote areas, says Marshall Burke, an economist at Stanford University in California.

Burke and his colleagues are training algorithms using night-time satellite images (in which well-lit areas are a rough proxy for affluence) to learn which features in daytime satellite images, such as roads or roof types, correlate with relative wealth or poverty. In a pilot study in five African nations, the team found that its AI system predicts village-level wealth better than do earlier methods that use night lights alone.
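
Burke's group has published its method in detail elsewhere; the sketch below only illustrates the general proxy-label idea described here, using night-light intensity as a cheap supervision signal over features extracted from daytime imagery. The feature extractor, model choice, and data layout are placeholder assumptions:

```python
import numpy as np
from sklearn.linear_model import Ridge

def train_wealth_proxy(daytime_features, nightlight_intensity):
    """daytime_features: (n_villages, n_features) array of features extracted
    from daytime satellite imagery, e.g. by a pretrained CNN (roads, roof types).
    nightlight_intensity: (n_villages,) average night-time luminosity, used as a
    cheap, freely available proxy label for local affluence."""
    model = Ridge(alpha=1.0)
    model.fit(np.asarray(daytime_features), np.asarray(nightlight_intensity))
    return model

def predict_relative_wealth(model, daytime_features):
    """Score unseen villages. Outputs are only a relative wealth proxy and would
    be validated against the sparse household surveys mentioned above."""
    return model.predict(np.asarray(daytime_features))
```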

Other scientists at Stanford, led by Jiaxuan You, are using AI and satellite remote-sensing data to predict crop yields months ahead of harvest, hoping to anticipate food shortages. And the UN children's charity UNICEF is investing in work to test whether deep learning can diagnose malnutrition from photographs and videos of children. "This is currently done using mid-upper-arm circumference and is slow and not always super-accurate," says Christopher Fabian, the head of UNICEF's innovation and venture funding unit. "We believe we can do better."

AI has been used for years in responses to natural disasters: helping to track where casualties and relief needs are greatest by parsing social-media messages and analysing satellite and drone imagery. In 2016, the XPRIZE Foundation, based in Culver City, California, which is co-organizing the Geneva summit, announced a US$5-million prize fund to reward ideas for using AI to solve challenges facing society.

But just as the Internet has brought risks and rewards that few could have anticipated, so AI will have good, bad, transformative and plain weird effects on societies, says Anders Sandberg, who studies the societal and ethical issues of new technologies at the University of Oxford, UK. The Geneva summit, for instance, focuses on how AI could help to achieve the UN's Sustainable Development Goals: targets to improve the lives of the world's poorest people by 2030. One is to ensure decent jobs for all. Yet a 2016 report from financial institution Citi suggests that AI and robotics might hit jobs in developing countries hardest.

Concerns over some of these risks have prompted industry to fund initiatives focused on societal benefit. They include OpenAI, a non-profit research company launched in December 2015 with $1 billion of funding from philanthropists and entrepreneurs, in part to develop safe AI systems, and the Partnership on Artificial Intelligence to Benefit People and Society, founded last October. The partnership includes Google, Microsoft and Facebook, but also UNICEF, Human Rights Watch and a host of non-profit organizations; in May, it announced that it would launch a grand-challenge series to boost initiatives that use AI to address long-term societal issues.

Ultimately, it is the firms developing AI that will have the greatest say in the technology's future direction, warns Milton Mueller, an expert on Internet governance at the Georgia Institute of Technology in Atlanta.

Visit link:

AI summit aims to help world's poorest - Nature.com

Adaptive Drilling Application Uses AI To Enhance On-Bottom Drilling Performance – Journal of Petroleum Technology

The drilling optimizer employs artificial intelligence to help mitigate drilling dysfunction and improve performance.

Ever since the first commercial well was spudded, operators have looked for ways to drill wells faster without sacrificing safety or incurring huge costs. While saving time and money through efficient drilling is not a new concept, the more recent adoption of drilling optimization and automation services has certainly become one of the biggest drivers to achieving those goals.

As the current downturn has shown limited signs of recovery, it has continued to evolve in ways never imagined, and the effects are taking their toll in every facet of the oil and gas industry. While rigs and drilling equipment can be set aside to ride out the storm, what about the drilling teams who are working on the rigs and in remote operation centers? As these teams are being removed from the field, the expectation is that many of them won't return for a myriad of reasons. So, what happens when that experience is lost?

The exodus of seasoned crews, otherwise known as the great crew change, has been discussed for several years, but recent conditions could expedite the process. Considering the recent shutdown of rigs and the loss of personnel, the question remains whether we will see a noticeable gap in knowledge and experience once crews return to the drilling rigs in full force.

The lack of individual skills can be offset over time with hands-on experience, but a drilling crew needs to operate at the highest level possible, preferably with few to no gaps in experience. To assist the drilling process, NOV's M/D Totco division recently launched its KAIZEN intelligent drilling optimization application, which performs as an adaptive autodriller. The system features continuous learning capabilities, enabling it to provide proactive drilling dysfunction mitigation while maximizing rate of penetration (ROP) and optimizing mechanical specific energy. It also reduces human dependence in the drilling process, lowering the risk of slow or incorrect responses to drilling dysfunction. In turn, the system assesses wellbore conditions and drilling performance, then automatically applies appropriate parameters to mitigate those dysfunctions.

When faced with distinct interbedded formations, drillers often encounter drilling dysfunction due to varying formations, and optimal setpoints are required to identify and proactively mitigate dysfunction.

While drillers are inundated with large amounts of data, the system takes the human dependence away and employs artificial intelligence (AI) to continuously optimize the drilling process. Utilizing an array of machine-learning algorithms and a digital twin that is updated each second, the AI system builds a store of knowledge that the drilling application leverages to make more accurate and timely decisions. This automated parameter application approach enables the system to remove distractions from the driller so their focus can be on critical items such as keeping the crew safe and the well under control, while the system instantly responds to changing conditions and provides optimal weight on bit (WOB) and revolutions per minute (rev/min) setpoints.
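
The article doesn't spell out how mechanical specific energy enters the optimization, but the standard Teale formulation shows why WOB and rev/min are the knobs being tuned: together with torque and ROP they determine how much energy is spent per unit of rock drilled. The sketch below pairs that formula with a simplified grid search over candidate setpoints; it is a toy illustration, not NOV's algorithm, and the `response_model` stand-in for the digital twin is an assumption:

```python
import math

def mechanical_specific_energy(wob_lbf, rpm, torque_ftlbf, rop_ft_hr, bit_area_in2):
    """Teale's mechanical specific energy (psi): energy spent per unit volume of
    rock removed. Lower MSE at a given ROP generally means more efficient drilling."""
    return (wob_lbf / bit_area_in2
            + (120.0 * math.pi * rpm * torque_ftlbf) / (bit_area_in2 * rop_ft_hr))

def pick_setpoints(candidate_wob, candidate_rpm, response_model, bit_area_in2):
    """Toy parameter search: score each (WOB, rev/min) pair with a model that
    predicts torque and ROP for those inputs (a stand-in for the digital twin /
    learned map described above) and return the lowest-MSE pair."""
    best = None
    for wob in candidate_wob:
        for rpm in candidate_rpm:
            torque, rop = response_model(wob, rpm)   # assumed predictive model
            mse = mechanical_specific_energy(wob, rpm, torque, rop, bit_area_in2)
            if best is None or mse < best[0]:
                best = (mse, wob, rpm)
    return best  # (predicted MSE, WOB setpoint, rev/min setpoint)
```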

The AI and machine-learning feature stores thousands of hours of processed drilling data. This capability allows the system to recommend surface parameters that deliver the best expected performance as well as select the correct dataset to mitigate changes detected in drilling dynamic behaviors.

Additionally, the system's digital twin uses an advanced, physics-based model to analyze several key dysfunctions in real time by way of distributed stress modeling, torque and drag modeling, and critical rev/min calculations. The result is a system that is capable of providing guidance to the parameter search engine, identifying which parameters to avoid. Drillstring buckling, downhole vibration (torsional, axial, and lateral), and mud motor stalls can all be diagnosed, and thus mitigated, in real time.

The system's optimization functions can be run in either advisory or control mode. The advisory mode sends recommended rev/min and WOB setpoints for the driller to implement, while control mode sends those same setpoints directly to the autodriller.

Advisory mode is typically run if a rig's control system is not fully compatible with the control mode. When a rig has a control system that is compatible with the KAIZEN command requests, control mode is activated, and the system consistently achieves a higher level of performance by continuously and automatically adjusting the setpoints.

The KAIZEN system has been successfully integrated into multiple control systems, including NOV's NOVOS reflexive drilling system. The application is currently compatible in closed-loop mode with approximately 50% of the active North American rig fleet.

The drilling system's installation comprises a minor update to the rig control layer to accept the system's commands, along with the physical installation of a KAIZEN system box. The rig must be running NOV's RigSense electronic drilling recorder, and the system box is simply plugged into both the rig's control network and the RigSense system. The system is then accessible on the RigSense screen and operated through the control system interface.

The use of KAIZEN control mode on the NOVOS control system has recently shown immediate improvements in the field.

A drilling contractor operating in the Marcellus Shale sought a solution to complete each hole section in a single-bit run while improving the ROP and reducing drilling time with less damage to the bit. The target formation showed shale with interbedded limestone as well as concentrations of iron pyrite and siderite, making for a challenging well.

The intelligent drilling optimizer applied its continuous learning capabilities, successfully demonstrating the ability to optimize the drilling program by way of dysfunction mitigation and parameter optimization. It was noted that the system handled formation changes exceptionally well and exhibited creative solutions to solve downhole vibration, resulting in smooth drilling. This allowed the system to exploit this parameter map and drill at the limit of efficiency, and the system performed those functions consistently.

The use of the drilling system's control mode on the NOVOS system showed immediate improvements. All three wells that used the system were compared against the customer-provided benchmark well; the system collectively saved the customer 38.6 drilling hours. The cumulative hourly savings translated to an average of $37,518 per well for a total savings of $112,554 over the three wells, based on a $70,000/day spread rate.
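
As a rough sanity check on those figures (assuming the stated $70,000/day spread rate is prorated over a 24-hour operating day): 38.6 hours saved at $70,000/24 per hour comes to roughly $112,600, which lines up with the reported $112,554 total and the $37,518-per-well average (3 x $37,518 = $112,554) once per-well rounding is taken into account.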

The need for greater efficiency through optimized drilling systems has pushed service companies into new and exciting areas of product development, with features like automation and AI leading the way. As the KAIZEN system's performance has shown, the application of automated control systems delivers cumulative operational benefits that continue to increase independent of the crews' experience levels. As the industry looks to optimize processes in all areas of operations, intelligent systems will become the accepted, pragmatic approach to achieving safer and more efficient performance at the wellsite.

Here is the original post:

Adaptive Drilling Application Uses AI To Enhance On-Bottom Drilling Performance - Journal of Petroleum Technology

AI Helps Magicians Perform Mind Reading Tricks – IEEE Spectrum

Illustration: iStockphoto. Computer algorithms can help magicians create magic tricks that exploit human psychology.

You are presented with two decks, one with images and the other with words. The magician shuffles and distributes the decks into piles of four cards. You get to choose two piles, one from the word deck and one from the image deck, to make a hand of eight cards. Then you're invited to pick a word card and an image card from your hand. Once you've selected a pair, you watch the magician reveal a previously written prediction about the cards you've chosen. The prediction is correct!

That kind of mind-reading magic trick could benefit from new AI computer algorithms. These algorithms are designed to exploit human psychology and help magicians choose the best card combinations.

This association magic trick relies upon making a spectator believe that the magician has managed to predict his or her free choice from a random combination of shuffled cards. In reality, the magician has preselected two decks of cards that together contain a category of card pairs that trigger a particularly powerful mental association for most people. To help pull off this mind-reading illusion, computer scientists created a computer algorithm that can automatically help find compelling word and image combinations.

"First and foremost it's an entertaining magic trick we have built, but it does potentially allow insight into the processes that humans use to decide associations," says Peter McOwan, a professor of computer science at Queen Mary University London in the UK. "There are a range of mentalism tricks that use associations to accomplish their effects and similar computational frameworks could be applied across that range," he said.

McOwan began practicing magic as a hobby in his teens. He has since used magic tricks to teach computer algorithms and has written free e-books on the intersection between the two subjects. In recent years, McOwan has teamed up with Howard Williams, another computer scientist at Queen Mary University London, to develop computer algorithms that can help create new magic tricks. Their latest study on the association magic trick was published in the 9 Aug 2017 issue of the journal PLOS One.

The association magic trick takes advantage of how the human subconscious tends to form strong mental associations between certain concepts. For example, people may quickly make food associations between images of burgers or fruit and related words such as bites, treats, snack and feast. The human subconscious can quickly recognize and process such associations in a way that appears almost automatic to the conscious mind.

Another key part of the trick involves an appreciation of two psychological systems that underlie our decision making, as described by Daniel Kahneman, a psychologist and Nobel Prize-winner. System 1 covers the swift and seemingly automatic mental processing. System 2 refers to the more active, conscious thinking involved in planning, puzzle solving or calculations.

The magician wants the spectator participating in the magic show to use the first system and make the automatic association because it makes his or her choice predictable, especially when the decks of cards are organized and shuffled in a way that ensures a matched pair of cards that belong to a certain category will always be among the choices. So the magician adds time pressure by asking the spectator to make a quick decision. That pressure typically ensures the spectator makes the predictable choice rather than making a more idiosyncratic pairing based on the more conscious thought processes of the second system.
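
The paper focuses on the psychology rather than the combinatorics, but one simple arrangement that would guarantee a matched pair, sketched here purely as an illustration with invented card lists, is to seed every four-card pile with exactly one card from the target category, so that whichever two piles the spectator combines, a food image and a food word are always in the hand:

```python
from itertools import product

# Hypothetical decks: each pile of four holds exactly one food-category card.
image_piles = [["burger", "car", "shoe", "phone"],
               ["fruit", "plane", "watch", "lamp"],
               ["pizza", "bike", "chair", "camera"],
               ["donut", "boat", "hat", "guitar"]]
word_piles = [["snack", "speed", "leather", "signal"],
              ["feast", "flight", "tick", "glow"],
              ["bites", "pedal", "sit", "lens"],
              ["treats", "sail", "brim", "chord"]]

FOOD_IMAGES = {"burger", "fruit", "pizza", "donut"}
FOOD_WORDS = {"snack", "feast", "bites", "treats"}

# Whichever image pile and word pile the spectator combines into a hand of
# eight cards, a food image and a food word are always available to pair.
assert all(FOOD_IMAGES & set(ip) and FOOD_WORDS & set(wp)
           for ip, wp in product(image_piles, word_piles))
```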

To collect relevant data in making the magic trick, the Queen Mary University London researchers performed an online psychology experiment by showing human participants various selections of 10 trademarks from a pool of 100 of the most famous trademarks. The researchers then asked participants to write down any words about how the trademarks made them feel, along with any other associations they had with each mark.

But the researchers also developed an AI to help them find strong associations for the magic trick. First, their computer algorithm ran Internet searches on popular trademarks and plucked words from the webpages linked by the top ten search results for each trademark. Second, it used a previously developed search algorithm, called BM25, to organize and rank the collected data according to certain association categories (such as food-related words). Additional AI techniques called word2vec and WordNet helped by providing similarity scores for certain word pairings.
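
The article names BM25 without detail; for reference, the standard Okapi BM25 scoring function that such a ranking step computes looks roughly like this. The corpus handling and parameter values are generic defaults, not the researchers' specific setup:

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one document (a list of tokens) against query terms using BM25,
    the ranking function reportedly used to organize the scraped
    trademark-association text. `corpus` is a list of tokenized documents."""
    doc_len = len(doc_terms)
    avg_len = sum(len(d) for d in corpus) / len(corpus)
    tf = Counter(doc_terms)
    n_docs = len(corpus)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)              # document frequency
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)  # smoothed IDF
        f = tf[term]
        score += idf * (f * (k1 + 1)) / (f + k1 * (1 - b + b * doc_len / avg_len))
    return score
```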

The AI by itself was not necessarily able to find the strongest or most useful associations for the magic trick without human help. But such automated data gathering and organization could prove a handy time-saving tool for complementing data collected from the more time-consuming experimental surveys, according to Williams at Queen Mary University London. He described the tradeoff as follows:

Automated data gathering is useful as it is quick and can gather large sets of data. Experiments take longer to organize, perform, process data, etc., but provide more specific and targeted data. [It's] essentially a tradeoff between quality and quantity. Though quantity provides broadness, and is useful in its own right.

That process led Williams and McOwan to create image and word card decks that contained the food category as the likeliest choice. They tested out their association magic trick on 143 individuals during the Big Bang 2013 science fair in Birmingham, UK, where it succeeded in all but 15 cases. Those more unusual word and image pairings chosen in the unsuccessful cases could potentially be excluded by the computer algorithm or by hand in the future.

"Even though there is a fairly clear pathway we have created in the trick for them to follow in the performances, some people just had left field associations, probably influenced by their life experiences," McOwan says. "It's an area worth looking at more."

Magicians could eventually make use of popular AI techniques such as machine learning and deep learning that can automatically find and learn from patterns in data. McOwan speculated that such techniques could prove useful in cold reading, which is when a magician uses psychological tricks and a data-driven understanding of population trends to pretend to divine personal details about a stranger.

The researchers have already commercialized magic tricks that were created with the help of computer algorithms. In 2014, they used a computer algorithm to help create a magic jigsaw puzzle that makes certain shapes seem to disappear upon reassembly based on certain geometric principles. That jigsaw puzzle sold out two production runs in a well-known London magic shop, McOwan says.

The idea of computer algorithms helping create magic tricks may lack the emotional drama of Christopher Nolan's film The Prestige, where rival magicians vie to perfect their magic illusions. But even some of the fictional wizards in the magical world of Harry Potter might appreciate muggle AI technology that can help magicians seem to perform mind reading without wands and spells.

"Of course a trick is only as good as the performer and our work is simply giving new tools to create new methods to perform with," McOwan says. "The real magic still lies with the magician."

Go here to read the rest:

AI Helps Magicians Perform Mind Reading Tricks - IEEE Spectrum

This Deepfake AI Singing Dolly Parton’s "Jolene" Is Worryingly Good

Holly Herndon uses her AI twin Holly+ to sing a cover of Dolly Parton's "Jolene."

AI-lands in the Stream

Sorry, but not even Dolly Parton is sacred amid the encroachment of AI into art.

Holly Herndon, an avant garde pop musician, has released a cover of Dolly Parton's beloved and frequently covered hit single, "Jolene." Except it's not really Herndon singing, but her digital deepfake twin known as Holly+.

The music video features a 3D avatar of Holly+ frolicking in what looks like a decaying digital world.

And honestly, it's not bad — dare we say, almost kind of good? Herndon's rendition croons with a big, round sound, soaked in reverb and backed by a bouncy, acoustic riff and a chorus of plaintive wailing. And she has a nice voice. Or, well, Holly+ does. Maybe predictably indie-folk, but it's certainly an effective demonstration of AI with a hint of creative flair, or at least effective curation.

Checking the Boxes

But the performance is also a little unsettling. For one, the giant inhales between verses are too long to be real and are almost cajolingly dramatic. The vocals themselves are strangely even and, despite the somber tone affected by the AI, lack Parton's iconic vulnerability.

Overall, it feels like the AI is simply checking the boxes of what makes a good, swooning cover after listening to Jeff Buckley's "Hallelujah" a million times — which, to be fair, is a pretty good starting point.

Still, it'd be remiss to downplay what Herndon has managed to pull off here, and the criticisms mostly reflect the AI's limited capabilities more than her chops as a musician. The AI's seams are likely intentional, if her previous work is anything to go off of.

Either way, if you didn't know you were listening to an AI from the get-go, you'd probably be fooled. And that alone is striking.

The Digital Self

Despite AI's usually ominous implications for art, Herndon views her experiment as a "way for artists to take control of their digital selves," according to a statement on her website.

"Vocal deepfakes are here to stay," Herndon was quoted saying. "A balance needs to be found between protecting artists, and encouraging people to experiment with a new and exciting technology."

Whether Herndon's views are fatalistic or prudently pragmatic remains to be seen. But even if her intentions are meant to be good for artists, it's still worrying that an AI could pull off such a convincing performance.

More on AI music: AI That Generates Music from Prompts Should Probably Scare Musicians

The post This Deepfake AI Singing Dolly Parton's "Jolene" Is Worryingly Good appeared first on Futurism.

See the article here:

This Deepfake AI Singing Dolly Parton's "Jolene" Is Worryingly Good

The global AI agenda: Promise, reality, and a future of data sharing – MIT Technology Review

Artificial intelligence technologies are no longer the preserve of the big tech and digital platform players of this world. From manufacturing to energy, health care to government, our research shows organizations from all industries and sectors are experimenting with a suite of AI technologies across numerous use cases.

Among the organizations surveyed for this report, 72% had begun deploying AI by 2018, and 87% by 2019. Yet much remains unknown about AI's real, as opposed to potential, impact. Companies are developing use cases, but far from all are yet bearing fruit. How, business leaders ask, can we scale promising use cases to multiple parts of the enterprise? How can we leverage data, talent, and other resources to exploit AI to the fullest? And how can we do so ethically and within the bounds of regulation?

MIT Technology Review Insights surveyed 1,004 senior executives in different sectors and regions of the world to understand how organizations are using AI today and planning to do so in the future. Following are the key findings of this research:

Download the full report.

Visit link:

The global AI agenda: Promise, reality, and a future of data sharing - MIT Technology Review

Why human-like is a low bar for most AI projects – The Next Web

Show me a human-like machine and I'll show you a faulty piece of tech. The AI market is expected to eclipse $300 billion by 2025. And the vast majority of the companies trying to cash in on that bonanza are marketing some form of human-like AI. Maybe it's time to reconsider that approach.

The big idea is that human-like AI is an upgrade. Computers compute, but AI can learn. Unfortunately, humans aren't very good at the kinds of tasks a computer makes sense for, and AI isn't very good at the kinds of tasks that humans are. That's why researchers are moving away from development paradigms that focus on imitating human cognition.

A pair of NYU researchers recently took a deep dive into how humans and AI process words and word meaning. Through the study of psychological semantics, the duo hoped to explain the shortcomings of machine learning systems in the natural language processing (NLP) domain. According to a study they published to arXiv:

Many AI researchers do not dwell on whether their models are human-like. If someone could develop a highly accurate machine translation system, few would complain that it doesn't do things the way human translators do.

In the field of translation, humans have various techniques for keeping multiple languages in their heads and fluidly interfacing between them. Machines, on the other hand, don't need to understand what a word means in order to assign the appropriate translation to it.

This gets tricky when you get closer to human-level accuracy. Translating one, two, and three into Spanish is relatively simple. The machine learns that they are exactly equivalent to uno, dos, and tres, and is likely to get those right 100 percent of the time. But when you add complex concepts, words with more than one meaning, and slang or colloquial speech, things can get complicated.
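
A toy example makes the gap concrete (the dictionary below is invented for illustration): a literal word-for-word lookup nails exact one-to-one mappings like the numerals, and immediately breaks on a word whose correct translation depends on context:

```python
# Literal word-for-word lookup: perfect for unambiguous numerals,
# useless for polysemous words like English "bank".
LOOKUP = {"one": "uno", "two": "dos", "three": "tres",
          "bank": "banco"}  # the money sense; "river bank" should be "orilla"

def naive_translate(sentence):
    return " ".join(LOOKUP.get(word, f"<?{word}?>") for word in sentence.lower().split())

print(naive_translate("one two three"))   # -> "uno dos tres" (always right)
print(naive_translate("the river bank"))  # -> "<?the?> <?river?> banco": wrong sense, no context
```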

We start getting into AI's uncanny valley when developers try to create translation algorithms that can handle anything and everything. Much like taking a few Spanish classes won't teach a human all the slang they might encounter in Mexico City, AI struggles to keep up with an ever-changing human lexicon.

NLP simply isn't capable of human-like cognition yet, and making it exhibit human-like behavior would be ludicrous: imagine if Google Translate balked at a request because it found the word "moist" distasteful, for example.

This line of thinking isn't just reserved for NLP. Making AI appear more human-like is merely a design decision for most machine learning projects. As the NYU researchers put it in their study:

One way to think about such progress is merely in terms of engineering: There is a job to be done, and if the system does it well enough, it is successful. Engineering is important, and it can result in better and faster performance and relieve humans of dull labor such as keying in answers or making airline itineraries or buying socks.

From a pure engineering point of view, most human jobs can be broken down into individual tasks that would be better suited for automation than AI, and in cases where neural networks would be necessary (directing traffic in a shipping port, for example) it's hard to imagine a use case where a general AI would outperform several narrow, task-specific systems.

Consider self-driving cars. It makes more sense to build a vehicle made up of several systems that work together instead of designing a humanoid robot that can walk up to, unlock, enter, start, and drive a traditional automobile.

Most of the time, when developers claim they've created a human-like AI, what they mean is that they've automated a task that humans are often employed for. Facial recognition software, for example, can replace a human gate guard but it cannot tell you how good the pizza is at the local restaurant down the road.

That means the bar is pretty low for AI when it comes to being human-like. Alexa and Siri do a fairly good human imitation. They have names and voices and have been programmed to seem helpful, funny, friendly, and polite.

But there's no function a smart speaker performs that couldn't be better handled by a button. If you had infinite space and an infinite attention span, you could use buttons for anything and everything a smart speaker could do. One might say "Play Mariah Carey," while another says "Tell me a joke." The point is, Alexa's about as human-like as a giant remote control.

AI isn't like humans. We may be decades or more away from a general AI that can intuit and function at human-level in any domain. Robot butlers are a long way off. For now, the best AI developers can do is imitate human effort, and that's seldom as useful as simplifying a process to something easily automated.

Published August 6, 2020 22:35 UTC

More:

Why human-like is a low bar for most AI projects - The Next Web

Facebook to train AI with police bodycam footage to combat extremism – New York Post

Facebook will work with law enforcement agencies to train its artificial intelligence systems to detect videos of violent events as part of its ongoing battle against extremism on the platform.

The new effort, announced in a Tuesday blog post, will harness bodycam footage of firearms training provided by US and UK government and law enforcement agencies as a way to train systems to automatically detect first-person violent events without also sweeping up violence from movies or video games.

The tech giant came under fire earlier this year when its AI systems were unable to detect a livestreamed video of a mass shooting at a mosque in Christchurch, New Zealand. The company eventually imposed some new restrictions on livestreaming.

The Mark Zuckerberg-led company is also expanding its definition of terrorism to include not just acts of violence intended to achieve a political or ideological aim, but also attempts at violence, especially when aimed at civilians with the intent to coerce and intimidate.

Facebook has been trying to stem the tide of extremist content over the years. In March, it banned material from white nationalist and white separatist groups. The social network says it has banned 200 white supremacist organizations and removed 26 million pieces of content related to global terrorist groups like ISIS and al Qaeda.

The company is facing a wide range of challenges, including an antitrust investigation from a group of state attorneys general, along with a broader probe of Big Tech being led by the FTC and the Justice Department.

More regulation might be needed to deal with the problem of extremist material, Dipayan Ghosh, a former Facebook employee and White House tech policy adviser, told the Associated Press.

"Content takedowns will always be highly contentious because of the platforms' core business model to maximize engagement," he said. "And if the companies become too aggressive in their takedowns, then the other side, including propagators of hate speech, will cry out."

Read more from the original source:

Facebook to train AI with police bodycam footage to combat extremism - New York Post

The DOD needs to define AI and protect its data better, watchdog says – FedScoop

Written by Jackson Barnett Jul 2, 2020 | FEDSCOOP

What is artificial intelligence, anyway?

It's a question that the Department of Defense should answer, according to a new report by DOD's inspector general. The watchdog says that while parts of the DOD have their own definitions, the department must settle on a standard, establish strong governance structures for the technology, and develop more consistent security controls so as not to put the military's AI technology and other systems at risk.

"Without consistent application of security controls, malicious actors can exploit vulnerabilities on the networks and systems of DoD Components and contractors and steal information related to some of the Nation's most valuable AI technologies," the report states.

The desired security controls appear to be basic, like using strong passwords and monitoring for unusual network activity. Many of the security updates need to happen at the service-level offices working on AI, but contractors must also be held to the uniform standards, the IG says.
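
The controls in question really are that basic. A password-policy check, for instance, is a few lines; the rules below are illustrative, not the DOD's actual standard:

    import re

    def meets_policy(password: str, min_length: int = 15) -> bool:
        # Illustrative baseline: minimum length plus mixed character classes.
        return (
            len(password) >= min_length
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None
        )

    print(meets_policy("correct-horse-battery"))    # False: no upper case or digit
    print(meets_policy("Correct-Horse-Battery-9"))  # True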

The report commends the DOD's early work to adopt goals and initiatives and to incorporate ethics principles into its AI development. But more standardization of that work needs to happen for it to mean something, the IG says. Much of the department-wide standardization and coordination needs to happen in the Joint AI Center (JAIC), the DOD's AI hub.

"As of March 2020, while the JAIC has taken some steps, additional actions are needed to develop and implement an AI governance framework and standards," the report said.

Much of the IG report echoes criticism from a RAND Corporation report on the JAIC. The RAND report detailed a lack of structure in the new office and recommended better coordination across the department, as does the IG report.

Responding to the report, the DOD CIO said that the JAIC has already taken several of the steps the IG recommends. They include plans for an AI Executive Steering Group and several other working groups and subcommittees to coordinate work in specific areas like workforce recruitment and standards across the department.

"The final report does not completely reflect a number of actions the JAIC took over the past year to enhance DoD-wide AI governance and to accelerate scaling AI and its impact across the DoD," the CIO wrote to the IG.

Data, the fuel that all AI runs on, is still in short (usable) supply. The IG recommended the DOD CIO set up more data-sharing mechanisms. While data sharing will help data-driven projects flourish, the JAIC also needs better visibility into how many AI initiatives are under way across the department.

Currently, the DOD doesn't know how many AI projects its many components have under way. That's a problem if offices like the JAIC are to be a central hub for both AI policy and fielding.

"Without a reliable baseline of active AI projects within the DoD, the JAIC will not be able to effectively execute its mission to maintain an accounting of DoD AI initiatives," the report stated.

Read more:

The DOD needs to define AI and protect its data better, watchdog says - FedScoop

Boosting Contact Centers with Conversational AI (Part 2) – No Jitter

As discussed in last week's post, conversational AI (CAI) offers promise for helping contact centers deal with surges in customer inquiries, such as those related to COVID-19, and for boosting call center productivity. But companies need to understand how best to deploy this technology and follow up by measuring its effectiveness.

Can you give me an example of how a company might get value from conversational AI?

We recently had a global technology company come to us because the IVR system for its online store was routing 60% of inbound tech support calls to the wrong agent. This company wanted an AI-driven IVR platform for customer inquiries related to order status, sales, billing, account management, and tech support.

The new IVR was up and running in 10 weeks. The solution immediately reduced misrouted calls from 60% to 30%, a figure that continues to shrink as the AI system is fine-tuned and optimized. The smarter IVR also helped the company improve live-agent metrics: average handle time decreased by two minutes per call, reducing operational costs and getting customers to a resolution faster. Call containment averaged about 60%, with the potential to improve to 75 to 80%. That, combined with an increase in self-service, is driving a projected $39 million return on investment (ROI) for the company.
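
The article doesn't break down how the $39 million figure was reached, but the arithmetic behind this kind of projection is straightforward. A sketch with invented volumes and rates (none of these numbers come from the company):

    # Hypothetical inputs for illustration only.
    calls_per_year = 2_000_000
    agent_cost_per_minute = 1.00      # fully loaded cost, in dollars
    containment = 0.60                # share of calls resolved with no agent
    avg_handle_minutes = 8            # assumed pre-AI handle time
    minutes_saved_per_agent_call = 2  # the handle-time reduction cited above

    agent_calls = calls_per_year * (1 - containment)
    handle_time_savings = agent_calls * minutes_saved_per_agent_call * agent_cost_per_minute
    containment_savings = calls_per_year * containment * avg_handle_minutes * agent_cost_per_minute

    print(f"Estimated annual savings: ${handle_time_savings + containment_savings:,.0f}")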

Could this be multilingual? If so, would the developer have to repeat the same build process?

The CAI interface can be built in 35 languages. The centralized nature of the solution promotes consistency throughout the business and means less time and effort when expanding to other use cases, to other parts of the enterprise, or globally to other languages.

Do end customers accept conversational AI?

Speech-based assistants such as Apple's Siri and Amazon's Alexa have made their way into homes. Consumers have become more comfortable interacting with virtual agents, especially if doing so saves time or makes life easier.

Conversational AI has evolved from scripted, FAQ-based experiences to the delivery of human-like conversations. By reducing hold times, helping customers reach resolution faster, and offering a more personalized, intelligent experience, CAI is winning growing customer satisfaction and acceptance. We have also found that some sensitive scenarios, like billing, where customers prefer to avoid human interaction, are well suited to a virtual agent.

What are your best practice recommendations?

When deploying a new conversational AI solution, follow these steps:

Read the original post:

Boosting Contact Centers with Conversational AI (Part 2) - No Jitter

Adobe Illustrator Artwork – Wikipedia

Adobe Illustrator Artwork (AI) is a proprietary file format developed by Adobe Systems for representing single-page vector-based drawings in either the EPS or PDF formats. The .ai filename extension is used by Adobe Illustrator.

The AI file format was originally a native format called PGF. PDF compatibility is achieved by embedding a complete copy of the PGF data within the saved PDF-format file. The format is not related to the Progressive Graphics Format, which uses the same .pgf name.[5]

The same dual-path approach used for PDF compatibility is also used when saving EPS-compatible files in recent versions of Illustrator. Early versions of the AI file format are true EPS files with a restricted, compact syntax, with additional semantics represented by Illustrator-specific DSC comments that conform to DSC's Open Structuring Conventions. These files are identical to their corresponding Illustrator EPS counterparts, but with the EPS procsets (procedure sets) omitted from the file and instead externally referenced using %%Include directives.
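
In practice, the two flavours can be told apart by sniffing the file header: PDF-compatible .ai files begin with the standard PDF signature, while the early EPS-based ones open with a PostScript/DSC header. A rough heuristic, not a full parser:

    def ai_flavour(path: str) -> str:
        # Modern .ai files carry the PDF magic bytes; early ones are
        # restricted EPS and open with a PostScript/DSC header.
        with open(path, "rb") as f:
            head = f.read(16)
        if head.startswith(b"%PDF"):
            return "PDF-compatible AI"
        if head.startswith(b"%!PS-Adobe"):
            return "EPS-based AI (early format)"
        return "unknown"

    print(ai_flavour("artwork.ai"))  # assumes a local file with this name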

Aside from Adobe Illustrator, the following applications can edit .ai files:

Read the original here:

Adobe Illustrator Artwork - Wikipedia

Provizio closes $6.2M seed round for its car safety platform using sensors and AI – TechCrunch

Provizio, a combination hardware and software startup with technology to improve car safety, has closed a seed investment round of $6.2 million. Investors include Bobby Hambrick, the founder of Autonomous Stuff; the founders of Movidius; the European Innovation Council (EIC); and ACT Venture Capital.

The startup has a five-dimensional sensory platform that it says perceives, predicts and prevents car accidents in real time and beyond the line-of-sight. Its Accident Prevention Technology Platform combines proprietary vision sensors, machine learning and radar with ultra-long range and foresight capabilities to prevent collisions at high speed and in all weather conditions, says the company. The Provizio team is made up of experts in robotics, AI and vision and radar sensor development.

Barry Lunn, CEO of Provizio, said: "One point three five road deaths to zero drives everything we do at Provizio. We have put together an incredible team that is growing daily. AI is the future of automotive accident prevention, and Provizio 5D radars with AI on-the-edge are the first step towards that goal."

Also involved in Provizio are Dr. Scott Thayer and Prof. Jeff Mishler, formerly of Carnegie Mellon robotics, famous for developing early autonomous technologies for Google/Waymo, Argo, Aurora and Uber.

Go here to see the original:

Provizio closes $6.2M seed round for its car safety platform using sensors and AI - TechCrunch

Disney Research taught AI how to judge short stories – Engadget

The researchers used the social question-and-answer site Quora as a large database to feed into their AI algorithms. Many of the answers on Quora come in the form of stories, so reader upvotes can be used as a measure of popularity and as "a proxy for narrative quality." The team gathered almost 55,000 answers and classified more than 28,000 of them as stories, each with an average of 369 words. Then they developed two different neural networks: one to look at different sections of each story and one to take a more holistic view of a story's meaning. Each AI made predictions about the relative popularity of a given story. Both neural nets were better at predicting a story's popularity than a baseline text evaluation, but the holistic network showed an 18 percent improvement over the one that focused on sections.
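
Disney's exact architectures aren't reproduced in the article, but the holistic variant can be sketched roughly as "embed the whole story, then regress on upvotes as the popularity proxy." A simplified stand-in, not the researchers' model:

    import torch
    import torch.nn as nn

    VOCAB, EMB = 20_000, 128

    class HolisticScorer(nn.Module):
        # Averages word embeddings across the whole story, then scores it.
        def __init__(self):
            super().__init__()
            self.embed = nn.EmbeddingBag(VOCAB, EMB, mode="mean")
            self.head = nn.Sequential(nn.Linear(EMB, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, token_ids, offsets):
            return self.head(self.embed(token_ids, offsets)).squeeze(1)

    model = HolisticScorer()
    tokens = torch.randint(0, VOCAB, (500,))   # two toy "stories", flattened
    offsets = torch.tensor([0, 300])           # story 1: tokens 0-299, story 2: 300-499
    upvotes = torch.tensor([120.0, 3.0])       # the popularity signal from Quora
    loss = nn.MSELoss()(model(tokens, offsets), torch.log1p(upvotes))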

It's not hard to imagine a movie studio, for example, using a future version of this technology to choose scripts for production, but the tech is still in its infancy. Let's just hope that researchers find a way to filter stories for quality, and not just popularity. No one needs another Transformers movie.

Continue reading here:

Disney Research taught AI how to judge short stories - Engadget

Australia, South Africa have recognised AI as inventor. International patent law needs to catch up – The Indian Express

If you prick them, they do not bleed; if you tickle them, they do not laugh; and, for now, they do not seek revenge. But artificial intelligence, in more and more jurisdictions, can now invent, create and file patents. DABUS, a "creativity machine," has been recognised as an inventor for a type of food container that improves grip and heat transfer. It might be easy to dismiss this development as another way for corporations to protect profits, or to fear it as yet another step towards the AI apocalypse. But the problem, and the subsequent need for patent protection, is not merely one of technology.

Ryan Abbott, a law professor at the University of Surrey, has been campaigning for the better part of a decade to grant AIs near-person status in international patent law. While EU and US patent laws still do not allow an AI to be recognised as an inventor, there is increasing pressure on these jurisdictions to do so. And there is some merit to the argument that Abbott and his colleagues are making.

AI can perform calculations, analyse data and even generate novel ideas and systems at a far faster pace, and in greater volume, than human minds. In practice, this could mean, for example, that the vaccine for the next pandemic is discovered by a thinking machine. For the West, particularly the US, development and deployment of AI will have to be undertaken on a much larger scale to compete with China both strategically and economically. However, without adequate patent law, where and how AIs are deployed by corporations and individuals could be limited. That's really the rub of it: while the inventor may be artificial, the owner is still human, often greedily so. The law is yet to catch up, in most places, with the reality of how much thinking and innovating machines now undertake. And without legal clarity on IP and patents, there will always be someone who gets an undue advantage.

This editorial first appeared in the print edition on August 24, 2021 under the title Machine Law.

See more here:

Australia, South Africa have recognised AI as inventor. International patent law needs to catch up - The Indian Express

Flying Cars to AI Feature in Contest to Solve Bangalore Gridlock – Bloomberg

In Bangalore, tech giants and startups typically spend their days fiercely battling each other for customers. Now they are turning their attention to a common enemy: the Indian city's infernal traffic congestion.

Cross-town commutes that can take hours have inspired the Gridlock Hackathon, a contest initiated by Flipkart Online Services Pvt. for technology workers to find solutions to the snarled roads that cost the economy billions of dollars. While the prize totals a mere $5,500, it's attracting teams from global giants Microsoft Corp., Google and Amazon.com Inc. to local startups including Ola.

The online contest is crowdsourcing solutions for Bangalore, a city of more than 10 million, as it grapples with inadequate roads, unprecedented growth and overpopulation. The technology industry began booming decades ago, and with its base of talent the city continues to attract companies. Just last month, Intel Corp. said it would invest $178 million and add more workers to expand its R&D operations.

The ideas put forward at the hackathon range from using artificial intelligence and big data on traffic flows to true moonshots, such as flying cars.

The gridlock remains a problem for a city dependent on its technology industry and seeking to attract new investment. Bangalore is home to Asian outsourcing giants Infosys Ltd. and Wipro Ltd., along with 800,000 tech workers who account for 38 percent of the country's $116 billion software outsourcing industry, according to Priyank Kharge, the state minister of information technology.

"Traffic is the only negative Bangalore has," Kharge said. "When delegations bring investment proposals to the government, I tell them, 'The city is fantastic in every way, weather-wise and otherwise.'"

Yet the traffic is so bad that Bangalore's most infamous logjam, at Silk Board Junction, has inspired its own Twitter parody account for what it calls "India's largest parking lot."

V. Ravichandar, an urban infrastructure expert and chairman at market researcher Feedback Consulting, estimates that traffic jams directly shave about 2 percent from the city's estimated GDP of $30 billion. The opportunity costs, health care costs, lost productivity and other related costs are immense and could push the actual losses into the billions.

The Gridlock Hackathon came about as part of the 10-year celebrations of Flipkart, India's most valuable startup. The Bangalore-based company's 30,000 workers, including hundreds of deliverymen, spend hours stuck in jams.

"The city has the potential to become a truly world-class business and social destination if only its traffic were a little less unruly," said Binny Bansal, Flipkart's group chief executive officer. "Any solution can only have an impact if it originates from and has the support of citizens, the people who use the city's roads and contribute to the traffic problem to begin with."

The contest has drawn more than 1,000 teams, with entries from as far afield as Seattle, Atlanta and Dubai bearing quirky names like NoHonk, RushHour and CitizenCop. Submissions closed last week.

From IT workers stuck in cars and buses to Flipkart and Amazon workers sweating it out in the dust, the cost of Bangalore's gridlock is visible everywhere. Drivers for ride-hailing apps Ola and Uber Technologies Inc., who have incentives to hit a certain number of daily trips, end up working ever-longer hours to meet the company-assigned ride targets.

Akshay Rao, a Seattle-based engineer who works on process improvement at Amazon, put in his entry, proposing to reform the driver licensing system by creating incentives for the right road behavior and propagating timely information on public transport for paid users.

Rao, a former officer in the Indian Navy, had plenty of experience on the city's roads when he led the start of Amazon India's logistics operations in 2013 and the subsequent rollout of next-day and same-day delivery.

"If we missed distribution schedules, we would run into rush-hour traffic," said Rao, who recalled delivering at midnight to angry customers and at 4:30 am to a customer catching a train. To get around choke points, deliverymen on scooters did short relays, memorized shortcuts and, on some occasions, carried packages by hand.

Other entries included Internet of Things-powered road dividers that change orientation to handle changing conditions. There are also proposals for a reporting system that tracks vehicles that don't conform to the road rules, a tool that scans social media to generate traffic reports, and a network of smart satellite townships to ease the flow of vehicles.

Then there are the more ambitious ideas. Utkarsh B, a seven-year veteran at Flipkart who is overseeing the competition, said one team suggested building smart roads underneath the city and another sent in detailed drawings of flying cars.

Among those participating is Harish Mamtani, a former Morgan Stanley banker who splits his time between Atlanta and Hyderabad, where he runs a low-cost school. His idea is an app platform that crowdsources reports of traffic violations to the cloud, which police can use to nab violators and levy penalties.

"The traffic police in Indian cities probably have no inkling what a cloud is; they cannot be expected to come up with technology solutions," said Mamtani, who was spurred to think up a fix after he was hit by an autorickshaw going the wrong way, only to be abused by the driver. His proposal aims to help police tackle the sheer volume of violators and is customizable across cities.
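
As a rough sketch of what a report record on such a platform might look like (the field names and serialisation are invented for illustration, not Mamtani's design):

    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class ViolationReport:
        plate: str
        violation: str          # e.g. "wrong-way driving"
        lat: float
        lon: float
        reported_at: str
        photo_url: str | None = None

    def submit(report: ViolationReport) -> str:
        # A real system would POST this to the platform's cloud API;
        # here we just serialise the payload police systems would receive.
        return json.dumps(asdict(report))

    print(submit(ViolationReport(
        plate="KA-01-AB-1234",
        violation="wrong-way driving",
        lat=12.92, lon=77.62,   # illustrative coordinates in Bengaluru
        reported_at=datetime.now(timezone.utc).isoformat(),
    )))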

While much is made of Bangalore's traffic woes, other Indian cities are no better, said Kiran Mazumdar-Shaw, chairman and managing director of the Bangalore-based Indian biotechnology company Biocon Ltd. She spoke from Mumbai, where she had just missed a flight after being stuck for 1.5 hours in a four-mile traffic jam en route to the airport.

The Gridlock Hackathon is the kind of contest that Indian cities desperately need, she said. "Only innovative thinkers can come up with technology solutions for the problems that plague cities nationwide," Mazumdar-Shaw said. "Age-old solutions will no longer work."

Read the original:

Flying Cars to AI Feature in Contest to Solve Bangalore Gridlock - Bloomberg