AI startup Cresta launches from stealth with millions from Greylock and a16z – TechCrunch

As Silicon Valley's entrepreneurs cluster around the worldview that artificial intelligence is poised to change how we work, investors are deciding which use cases make the most sense to pump money into right now. One focus has been the relentless communication between companies and customers that takes place at call centers.

Call center tech has spawned dozens if not hundreds of AI startups, many of which have focused on automating services and using robotic voices to point customers somewhere they can spend money. There has been a lot of progress, but not all of those products have delivered. Cresta is more focused on using AI suggestions to help human contact center workers make the most of an individual call or chat session and lean on what's worked well in past interactions that were deemed successful.

"I think that there will always be very basic, boring stuff that can be automated, like frequently asked questions and 'Oh, what's the status of my order?'," CEO Zayd Enam says. "But there's always the role of the person that's building the relationship between the company and the customer, and that's a really strategic role for companies in the modern age."

Udacity co-founder Sebastian Thrun is the startup's board chairman and is listed as a co-founder. Enam met Thrun during his PhD research at Stanford focused on workplace productivity. Cresta is launching from stealth and announcing that they've raised $21 million in funding from investors including Greylock Partners and Andreessen Horowitz. The company recently closed a $15 million Series A round.

Cresta wants to use AI to school customer service workers and salespeople on how to close the deal.

There's quite a lot of turnover in contact center jobs, and that can leave companies reluctant to spend a ton of time investing in each employee's training. As a result, the worker handling an individual customer may lack the experience needed to suggest the solution a more seasoned agent would know to offer. In terms of live feedback, for many, fumbling through paper scripts at their desk can be about as good as it gets. Cresta is hoping that by tapping improvements in natural language processing, its software can alleviate some stress for contact center workers and help them move conversations toward selling something else for their company.
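To make that concrete, here is a minimal sketch, assuming a retrieval-based design rather than Cresta's actual system, of how an assist tool might surface replies drawn from past interactions that were deemed successful. The playbook data and function names are invented for illustration.

```python
# Hypothetical agent-assist sketch: retrieve the reply whose past context
# best matches the live customer message. Toy data, not Cresta's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# (customer_message, agent_reply_that_worked) pairs from past successful chats
playbook = [
    ("my internet keeps dropping", "I can run a line test right now, one moment."),
    ("what is the status of my order", "Let me pull that up. Can you confirm your order number?"),
    ("this plan is too expensive", "I hear you. We have a loyalty rate that may fit better."),
]

vectorizer = TfidfVectorizer().fit([ctx for ctx, _ in playbook])
context_matrix = vectorizer.transform([ctx for ctx, _ in playbook])

def suggest_reply(live_message: str) -> str:
    """Return the reply attached to the most similar past context."""
    scores = cosine_similarity(vectorizer.transform([live_message]), context_matrix)
    return playbook[scores.argmax()][1]

print(suggest_reply("hey, my internet connection drops every hour"))
```

A production system would use stronger language models and live call context, but the retrieval pattern, matching the current conversation against what worked before, is the core of the idea the company describes.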

Cresta is entering a field where there's already quite a bit of interest from established software giants. Salesforce, Google and Twilio all operate AI-driven products for contact centers. Even with substantial competition, Enam believes Cresta's team of 30 can offer its customers a lot more individual attention.

"We're one of the few technical teams where we're just obsessed with the customer, to the point where it's normal for people on our team to fly to the customer and live by a call center in an Airbnb for a week," Enam said. "When Greylock led the Series A, they had heard that and said that's what gave them so much conviction that we were the team to solve the problem."

Sun Microsystems co-founder Andy Bechtolsheim, Mark Leslie and Vivi Nevo are also investors in Cresta.


AI doesn't have to be a power hog – GreenBiz

Plenty of prognostications, including this one from the World Economic Forum, tout the integral role artificial intelligence could play in "saving the planet."

Indeed, AI is integral to all manner of technologies, ranging from autonomous vehicles to more informed disaster response systems to smart buildings and data collection networks monitoring everything from energy consumption to deforestation.

The flip side to this rosy view is that there are plenty of ethical concerns to consider. What's more, the climate impact of AI, both in terms of power consumption and the electronic waste that gadgets create, is a legitimate and growing concern.

Research from the University of Massachusetts Amherst suggests the process of "training" neural networks to make decisions, or searching them to find answers, can generate five times the lifetime emissions of the average U.S. car. (The study pegged training one large natural language model, with neural architecture search, at roughly 626,000 pounds of CO2, against about 126,000 pounds for the car.) Not an insignificant amount.

What does that mean if things continue on their current trajectory?

Right now, data centers use about 2 percent of the world's electricity. At the current rate of AI adoption, with no changes in the underlying computer server hardware and software, the data centers needed to run those applications could claim 15 percent of that power load, Gary Dickerson, CEO of semiconductor firm Applied Materials, predicted in August 2019. Although progress is being made, he reiterated that warning last week.


"Customized design will be critical," he told attendees of a longstanding industry conference, SemiconWest. "New system architectures, new application-specific chip designs, new ways to connect memory and logic, new memories and in-memory compute can all drive significant improvements in compute performance per watt."

So, what's being done to "bend the curve," so to speak?

Technologists from Applied Materials, Arm, Google, Intel, Microsoft and VMware last week shared insights about advances that could help us avoid the most extreme future scenarios, if the businesses investing in AI technologies start thinking differently. While much of the panel (which I helped organize) was highly technical, here are four of my high-level takeaways for those thinking about harnessing AI for climate solutions.

Get acquainted with the concept of "die stacking" in computing hardware design. There is concern that Moore's Law, the idea that the number of transistors on an integrated circuit will double every two years, is slowing down. That's why more semiconductor engineers are talking up designs that stack multiple chips on top of each other within a system, allowing more processing capability to fit in a given space.

Rob Aitken, a research fellow with microprocessor firm Arm, predicts these designs will show up first in computing infrastructure that couples high-performance processing with very localized memory. "The vertical stacking essentially allows you to get more connectivity bandwidth, and it allows you to get that bandwidth at lower capacitance for lower power use, and also a lower delay, which means improved performance," he said during the panel.

So, definitely look for far more specialized hardware.

Remember this acronym: MRAM. It stands for magnetic random-access memory, a format that uses far less power in standby mode than existing technologies, which require energy to maintain the "state" of their information and respond quickly to processing requests when they pop up. Among the big-name players eyeing this market: Intel, Micron, Qualcomm, Samsung and Toshiba. Plenty of R&D power there.

Consider running AI applications in cloud data centers using carbon-free energy. That could mean deferring the processing power needed for certain workloads to times of day when a facility is more likely to be using renewable energy.

"If we were able to run these workloads when we had this excess of green, clean, energy, right now we have these really high compute workloads running clean, which is exactly what we want," said Samantha Alt, cloud solution architect at Intel. "But what if we take this a step further, and we only had the data center running when this clean energy was available? We have a data center thats awake when we have this excess amount of green, clean energy, and then asleep when its not."

This is a technique that Google talked up in April, but it's not yet widely used, and it will require attention to new cooling designs to keep the facilities from running too hot, as well as memory components that can respond dynamically when a facility goes in and out of sleep mode.
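As a rough illustration of the scheduling idea, and not how Google or Intel actually implement it, a batch job can be held until a grid carbon-intensity signal drops below a threshold. The API, units and threshold below are all invented for the sketch.

```python
# Carbon-aware scheduling sketch: defer a batch AI workload until the grid
# signal says the energy is clean enough. All values here are made up.
import random
import time

def get_carbon_intensity() -> float:
    """Placeholder for querying a regional grid API; returns gCO2/kWh."""
    return random.uniform(100, 600)

def run_when_clean(job, threshold=200.0, poll_seconds=1, max_polls=10):
    """Poll the signal; run the job in the first clean-enough window."""
    for _ in range(max_polls):
        if get_carbon_intensity() <= threshold:
            return job()
        time.sleep(poll_seconds)  # wait for a cleaner window
    return job()  # deadline fallback: run anyway rather than starve

run_when_clean(lambda: print("training batch executed"))
```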


Live on the edge. That could mean using specialized AI-savvy processors in some gadgets or systems you're trying to make smarter, such as automotive systems, smartphones or a building system. Rather than sending all the data to a massive, centralized cloud service, the processing (at least some of it) happens locally. Hey, if energy systems can be distributed, why not data centers?

"We have a lot of potential to move forward, especially when we bring AI to the edge," said Moe Tanabian, general manager for intelligent devices at Microsoft. "Why is edge important? There are lots of AI-driven tasks and benefits that we derive from AI that are local in nature. You want to know how many people are in a room: people counting. This is very valuable because when the whole HVAC system of the whole building can be more efficient, you can significantly lower the balance of energy consumption in major buildings."

The point to all this is that getting to a nirvana in which AI can handle many things we'd love it to handle to help with the climate crisis will require some pretty substantial upgrades to the computing infrastructure that underlies it.

The environmental implications of those system overhauls need to be part of data center procurement criteria immediately, and the semiconductor industry needs to step up with the right answers. Intel and AMD have been leading the way, and Applied Materials last week threw down the gauntlet, but more of the industry needs to wake up.

This article first appeared in GreenBiz's weekly newsletter, VERGE Weekly, running Wednesdays.


Bullstop! This AI-based app to protect you from social media bullying – Zee Business

Computer scientists have launched an app, 'Bullstop', that uses novel artificial intelligence (AI) algorithms to combat trolling and bullying online.

The downloadable app is the only anti-cyberbullying app that integrates directly with social media platforms to protect users from bullies and trolls messaging them directly, reported the scientists from Aston University in the UK.

Bullstop is available for free and can now be downloaded on Google Play.

"This application differs from other apps because the use of artificial intelligence to detect cyberbullying is unique in itself," said Semiu Salawu, who designed Bullstop.

"Other anti-cyberbullying apps, in comparison, use keywords to detect instances of bullying, inappropriate or threatening language," Salawu added.

According to the developer, the detection AI has been trained on over 60,000 tweets to recognise not only abusive and offensive language, but also the use of subtle means such as sarcasm and exclusion to bully, which are otherwise difficult to detect using keywords.

"It uses a distributed cloud-based architecture that makes it possible for 'classifiers' to be swapped in and out. Therefore, as better artificial intelligence algorithms become available, they can be easily integrated to improve the app," Salawu explained.

The team revealed that 'Bullstop' is unique in that it monitors a user's social media profile and scans both incoming and outgoing messages for offensive content, ensuring the user is neither subjected to abuse nor sending it.

This works via an algorithm which is designed to understand written languages. It analyses messages and flags offensive content, such as instances of cyberbullying, abusive, insulting or threatening language, pornography and spam.

Offensive messages can be immediately deleted from the user's inbox.

A copy of deleted messages is, however, retained should the user wish to review them. The app can also automatically block contacts who continuously send offensive messages.

Bullstop is highly configurable, allowing the user to determine how comprehensively the app removes inappropriate messages.
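A minimal sketch of the pluggable design Salawu describes might look like the following. This is hypothetical, not Bullstop's code: a swappable classifier flags messages, flagged messages are removed from the inbox but retained for review, and a sensitivity setting stands in for the app's configurability.

```python
# Hypothetical sketch of a Bullstop-style pipeline with swappable classifiers.
from typing import Callable, List

Classifier = Callable[[str], float]  # returns an "offensiveness" score 0..1

def keyword_classifier(message: str) -> float:
    """Baseline keyword matcher, the approach the team says rivals rely on."""
    return 1.0 if any(w in message.lower() for w in ("idiot", "loser")) else 0.0

class Inbox:
    def __init__(self, classifier: Classifier, sensitivity: float = 0.5):
        self.classifier = classifier      # swappable as better models appear
        self.sensitivity = sensitivity    # user-configurable threshold
        self.messages: List[str] = []
        self.quarantine: List[str] = []   # deleted but retained for review

    def receive(self, message: str) -> None:
        if self.classifier(message) >= self.sensitivity:
            self.quarantine.append(message)
        else:
            self.messages.append(message)

inbox = Inbox(keyword_classifier)
inbox.receive("you absolute loser")
inbox.receive("see you at practice tomorrow")
print(inbox.messages, inbox.quarantine)
```

Swapping in a trained model, for instance one fine-tuned on the 60,000 labeled tweets the developers mention, would just mean passing a different `Classifier` function.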

The app currently supports Twitter with support for text messages planned in the next stage of the rollout.

It is hoped that, with continued usage of the app and good results, other social media platforms such as Facebook and Instagram will come on board, allowing their users to benefit from the application.

The app is currently in the beta testing stage, which means the researchers are inviting users of the app to provide feedback so they can make improvements.

"It has already been tested by a number of young people and professionals, including teachers, police officers and psychologists," the authors wrote.


Arrow Electronics and Taoglas Join Forces to Boost IoT and AI Initiatives – Business Wire

SAN DIEGO--(BUSINESS WIRE)--Taoglas, a leading enabler of digital transformation using IoT, today announced their collaboration with Arrow Electronics. This includes a new global distribution agreement focused on AI and IoT products and services. Taoglas will use Arrow's sales and services organization to expand their customer base, grow ecosystem partnerships, and improve supply chain services.

"Our customers need both products and services to support the success of IoT projects. Taoglas' portfolio of high-performance RF antennas and IoT solutions is a fantastic addition to Arrow's broad product and services portfolio," said Aiden Mitchell, vice president and general manager of global IoT solutions for Arrow Electronics. "Our customers will benefit from Taoglas' reputation as a best-in-class, performance-driven technology provider."

Customers will have access to the Taoglas EDGE portfolio of next-generation IoT solutions and the newly announced CROWD Insights to measure, monitor, predict, alert and notify based on volumes of devices connected to existing infrastructure. Taoglas and Arrow's global distribution agreement provides customers streamlined access to Taoglas' extensive portfolio of antenna and RF design offerings.

"We are delighted to establish our partnership with Arrow, a leading solutions provider with one of the world's largest technology ecosystems," said Dermot O'Shea, co-CEO of Taoglas. "By working with Arrow, we can offer even more unrivalled engineering, sales and customer support globally. As the IoT ecosystem continues to mature and grow, the opportunities for projects that need to be designed, built, deployed, and managed increase exponentially. It also means that providers with the quickest and most effective execution capabilities will offer the most value to the end customer."

IoT and AI engineers can find more information at http://www.arrow.com or http://www.taoglas.com.

About Arrow Electronics

Arrow Electronics guides innovation forward for over 200,000 leading technology manufacturers and service providers. With 2019 sales of $29 billion, Arrow develops technology solutions that improve business and daily life. Learn more at fiveyearsout.com.

About Taoglas

Taoglas is a leading enabler of digital transformation using IoT from initial strategy definition, to design, build, deployment and managed services. Our solutions combine high-performance RF design with advanced positioning, imaging, audio and artificial intelligence technologies for organizations solving critical problems using IoT. A nimble and efficient approach which mobilizes quickly makes Taoglas a trusted advisor helping customers regardless of where they are on their IoT journey. With world-class design, consultancy and engineering expertise, along with support and test centers globally, Taoglas delivers complex IoT solutions to market quickly and cost-effectively. Taoglas has proven expertise globally across the transportation, connected healthcare, smart cities and smart building industries.


Using AI in an uncertain world – KMWorld Magazine

Jul 3, 2017

Like anything else in life except death and taxes (and even the particulars of those are uncertain), uncertainty is something that humans deal with every day. From relying on the weather report for umbrella advice to getting to work on time, everyday actions are fraught with uncertainty, and we all have learned how to navigate an unpredictable world. As AI becomes widely deployed, it simply adds a new dimension of unpredictability. Perhaps, however, instead of trying to stuff the genie back in the bottle, we can develop some realistic guidelines for its use.

Our expectations for AI, and for computers in general, have always been unrealistic. The fact is that software is buggy and that algorithms are crafted by humans who have certain biases about how systems and the world work, and those biases may not match yours. Furthermore, no data set is unbiased, and we use data sets with built-in biases, or with holes in the data, to train AI systems. Those systems are, by their very nature, biased or lacking in information. If we depend on those systems to be perfect, we are letting ourselves in for errors, mistakes and even disasters.

However, relying on biased systems is no different from asking a friend who shares your worldview for information that may serve to bolster that view rather than balance it. And we do that all the time. Finding balanced, reliable, reputable information is hard and sometimes impossible. Any person trying to navigate an uncertain world tries to make decisions based on balanced information. The import of the decision governs (or should) the effort we make in hunting for reliable but differing sources. The speed with which a decision must be made often interferes with that effort. And we need to accept that our decisions will be imperfect or even outright wrong, because no one can amass and interpret correctly everything there is to know.

Where might AI systems fit into the information picture? We know that neither humans nor systems are infallible in their decision-making. Adding the input of a well-crafted, well-tested system that is based on a large volume of reputable data to human decision making can speed and improve the outcome. There are good reasons for that. Human thinking balances AI systems. They can plug each other's blind spots. Humans make judgments based on their worldview. They are capable of understanding priorities, ethics, values, justice and beauty. Machines can't. But machines can crunch vast volumes of data. They don't get embarrassed. They may find patterns we wouldn't think to look for. But humans can decide whether to use that information. That makes a perfect partnership in which one of the partners won't be insulted if their input is ignored.

Adding AI into the physical world, in which snap decisions are required, raises additional design and ethical issues that we are ill-equipped to resolve today. Self-driving cars are a good example of that. In the abstract and at a high level, it's been shown that most accidents and fatalities are due to human error. So, self-driving cars may help us save lives. Now we come down to the individual level. Suppose we have a sober, skilled, experienced driver who would recognize a danger she has never seen before. Suppose that we have a self-driving car that isn't trained on that particular hazard. Should the driver or the system be in charge? I would opt for an AI-assisted system with override from a sober, experienced driver.

On the other hand, devices with embedded cognition can be a boon that changes someone's world. One project at IBM Research is developing self-driving buses to assist the elderly or the disabled in living their lives independently. Like Alexa or Siri on a smaller scale, that could change lives. We come back to the matter of context, use and value. There is no single answer to human questions of should.

That brings us to the question of trust. Given that we understand that several layers of bias will inevitably be built into the cognitive systems we interact with and given that the behaviors coming out of the systems are occasionally unpredictable, what kind of trust should we place in AI-based systems and under what circumstances? That depends on:

Underlying those four important conditioning considerations is a fundamental challenge: In many of the outcomes from systems involving machine learning, particularly in the current applications of deep learning, it can be exceedingly difficult for the human decision-maker to analyze how the computer came up with its report, output or recommendation(s).

At this point in the development of the technology, we can't simply ask the system, as we could a human collaborator, how it reached the answer or recommendation it is proposing. Most systems today are not designed to be self-explanatory. Your algorithms may not be forthcoming about what insightful routes through the data they have taken to develop their answer for you. In many applications where solutions are well-contextualized, the system's answers won't raise questions or eyebrows, and you the user will be able to say, for example, "Yes, this is an image of a nascent tumor." But in other applications, the context may be murkier, and the human user will be left to wonder whether to trust the recommendation coming back from the machine. Should I really be shorting Amazon shares at this level? What makes you think so?

Is there some way to design systems so that they become an integral part of our thinking process, including helping us develop better questions, focus our problem statements and reveal how reliable their recommendations are? Can we design systems that are transparent? Can we design systems that help people understand the vagaries of probabilistic output? Will we be able to collaborate with an AI about whether an umbrella or a full foul-weather outfit is the better choice for today's weather circumstances? Insightful application design will remain the key, always taking direction from the context of the use and the intention of the user.


Stripchat's AI-powered anal-ytics helped it reach nearly 1B new users in 2020 – The Next Web

Stripchat today released its end of the year data report for 2020 and good-golly is it packed full of more information than you could ever want to know about the live camming business.

For example, who among us would have guessed that the cam-modelling industry had such a huge userbase? Per the report:

Stripchat reports an overall growth in traffic in 2020 having 906,181,416 in total new users vs 537,938,778 in 2019.

That's nearly a billion new users after a year in which the platform saw more than half a billion. We're talking about an industry that's pandemic-proof. (And for good reason: there's no safer sex than cam sex.)

However, despite Salt-N-Pepa's best attempts, we're not here to talk about sex. We're here to talk about AI and what Stripchat calls the next best use case for AI.


Called Anal-ytics, it's... well, here's what Stripchat says:

Our team has developed a machine-learning algorithm to identify particular sex acts within the live-streaming video. We've called our AI Anal-ytics and it analyzes live video as it happens, allowing us to bring live anal and live blowjob scenes happening at the very moment right to the eyes and hands of our users. In 2020 the total traffic to both categories approximately doubled.

It sounds a lot like Stripchat developed a computer vision model capable of searching through the hundreds of thousands (if not millions) of live streams happening on its platform in real time for specific model interactions. The system, we assume, then surfaces those results to a featured section on the main page or something like that; admittedly, I haven't interacted with the site itself outside of due diligence for this article.

Quick take: This is genius. A decade ago, when the cam industry was really starting to gain steam on the back of quicker home internet connections, the idea of this particular use case would have seemed far-fetched. But AI's come a long way. Modern deep learning techniques make it possible for businesses to search through huge mountains of data that would take a staff of humans thousands of years to parse.
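For a sense of what that kind of pipeline involves, here is a toy sketch, invented for illustration and in no way Stripchat's code: sample a frame from each live stream, run it through an image classifier, and surface the streams whose frames match a target category.

```python
# Toy content-surfacing sketch: classify the latest frame from each stream
# and return the streams matching a wanted category. Fake model and data.
import numpy as np

def classify_frame(frame: np.ndarray) -> str:
    """Stand-in for a trained vision model (e.g. a CNN)."""
    return "category_a" if frame.mean() > 0.5 else "other"

def surface_streams(latest_frames: dict, wanted: str) -> list:
    """IDs of streams whose latest frame matches the wanted category."""
    return [stream_id for stream_id, frame in latest_frames.items()
            if classify_frame(frame) == wanted]

rng = np.random.default_rng(0)
latest_frames = {f"stream_{i}": rng.random((64, 64)) for i in range(3)}
print(surface_streams(latest_frames, "category_a"))
```

The hard part at production scale is throughput, running inference against hundreds of thousands of concurrent video feeds, not the per-frame classification itself.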

In case you're wondering how that surge of users turned out for the models, Stripchat reports the top five earners made between $388 and $965 an hour. The site also said:

Users spent 98,595,157 hours on live cams in 2020, which is 68% more than they did last year (62,190,458 in 2019). Evidently, it coincides with the traffic growth. It's also worth mentioning that in 2020 we have a new single tip record, which is $16,897.

This just goes to show you: nothing's more pandemic-proof than robots and sex.

You can check out the very not-safe-for-work Stripchat website here.



Tech Firm Acquires Boston-based Provider of AI-Based Drone Inspections – Transmission & Distribution World

On February 6, 2020, the American Association of Blacks in Energy (AABE) hosted a panel of utility executives from the Kansas City area at the Burns & McDonnell world headquarters. Panelists included Ray Kowalik, CEO of Burns & McDonnell; John Bridson, VP of generation at Evergy; and Bill Johnson, general manager of Kansas City Board of Public Utilities (BPU). Paula Glover, president and CEO of the AABE, moderated the panel, covering topics such as climate change, customer satisfaction, and diversity and inclusion.

The event was kicked off in typical Burns & McDonnell fashion with a safety moment. Izu Mbata, staff electrical engineer at Burns & McDonnell, who is also the communications chair of the Kansas-Missouri AABE Chapter, stated that the Chapter hopes to make an impact and inspire the communities where they live and work. The other panelists expressed similar sentiments. Bridson said if Evergy is doing its job correctly, it will result in stronger communities for people to live and work.

Panelists each made a presentation on the future of energy. Common industry themes included the decrease in electricity consumption, changes in generation resources, the proliferation of renewable energy, and the need for investments in transmission infrastructure to bring renewable energy to load. As an example, Bridson noted that wind generation in the Southwest Power Pool (SPP) peaked at 71% just last week.

Paula Glover shocked the crowd by naming Kansas City as the number five top city to be affected by climate change, according to the Weather Channel. Ray Kowalik of Burns & McDonnell said that the energy industry is doing its part with respect to carbon reduction, but there must be a balance between reliability and cost. When asked about what they are doing to mitigate climate change, Johnson indicated that the BPU has been aggressive in moving from coal generation to renewables and has decreased carbon dioxide (CO2) emissions by 56% since 2012. Similarly, Bridson stated that by the end of 2020, Evergy will have reduced CO2 emissions from 2005 levels by 40%. Evergy and the BPU both also discussed their respective community solar farm initiatives.

The panelists also agreed that it is always good business to have a diverse workforce. According to Kowalik, "Our industry is woefully underrepresented by women and minorities." The panel discussed diversity initiatives within their respective organizations, including training, recruiting, supplier diversity programs, partnerships with higher education institutions, scholarships, internships, and internal scorecards.

Laron Evans, diverse business manager of the T&D Group at Burns & McDonnell, who is also the president of the Kansas-Missouri AABE Chapter, provided the following concluding remarks: "Studies have shown that diverse and inclusive teams of people make better business decisions. The opportunity to participate in inclusive collaboration helps us to stay on the forefront of innovation while moving our communities forward."


How Microsoft runs its $40M ‘AI for Health’ initiative – TechCrunch

Last week, Microsoft announced the latest news in its ongoing AI for Good program: a $40M effort to apply data science and AI to difficult and comparatively under-studied conditions like tuberculosis, SIDS and leprosy. How does one responsibly parachute into such complex ecosystems as a tech provider, and what is the process for vetting recipients of the company's funds and services?

Tasked with administering this philanthropic endeavor is John Kahan, chief data analytics officer and AI lead in the AI for Good program. I spoke with him shortly after the announcement to better understand his and Microsoft's approach to entering areas where they have never tread as a company, and where the opportunity lies for both them and their new partners.

Kahan, a Microsoft veteran of many years, is helping to define the program as it develops, he explained at the start of our interview.

John Kahan: About a year ago, they announced my role in conjunction with expanding AI for Good from being really a grants-oriented program, where we gave money away, to a program where we use data science to help literally infuse AI and data to drive change around the world. It is 100% philanthropic; we don't do anything that's commercial-related.

TechCrunch: This kind of research is still a very broad field, though. How do you decide what constitutes a worthwhile investment of resources?


Mark Cuban Says AI Will be The Biggest Disruption to Jobs We’ve Seen in 30 Years – Futurism

In Brief: In a recent speech, Mark Cuban argued that AI will replace a significant number of jobs; however, according to previous statements, Cuban does not believe that universal basic income is a solution. His views contrast with those of other technology industry leaders.

Cuban's Warning

Mark Cuban warned against the potential dangers that artificial intelligence (AI) poses to the workforce, asserting during a one-to-one question session at OZY Fest on Sunday that:

"There's going to be a lot of unemployed people replaced with technology and if we don't start dealing with that now, we're going to have some real problems."

Cuban added that he hasn't seen an equal transformation to the workforce in recent memory:

"We're going through a transitional period where we'll see more disruption driven by artificial intelligence than we've seen in the last 30 years."

It is the latest in a series of warnings that the sports tycoon has issued about the 21st century's AI revolution. In February, Cuban tweeted that "Automation is going to cause unemployment and we need to prepare for it" but, unlike others, he disagrees that universal basic income (UBI) is a solution to this, tweeting that it is one of the worst possible responses to the potential crisis.

Cuban joins other industry leaders in warning against AI. Bill Gates told the BBC that the intelligence is strong enough to be a concern.

Stephen Hawking has also weighed in on the debate, apocalyptically telling the Guardian that the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.

However, while leading figures in the technology industry agree that AI will be highly disruptive, they vary on their solutions to the problem. In contrast to Cuban, who tweeted that we should optimize existing support networks by making them more efficient so more money can be distributed with far less overhead, Microsoft founder Bill Gates believes that taxing robots is a temporary solution. Gates believes UBI is a good long-term plan, although society is not ready for it yet.

Mark Zuckerberg, founder and CEO of Facebook, is situated at the pro-UBI end of the spectrum, telling Harvard graduates that "We should explore ideas like universal basic income to make sure that everyone has a cushion to try new ideas."


Did Elon Musk’s AI champ destroy humans at video games? It’s complicated – The Verge

You might not have noticed, but over the weekend a little coup took place. On Friday night, in front of a crowd of thousands, an AI bot beat a professional human player at Dota 2, one of the world's most popular video games. The human champ, the affable Danil "Dendi" Ishutin, threw in the towel after being killed three times, saying he couldn't beat the unstoppable bot. "It feels a little bit like human," said Dendi. "But at the same time, it's something else."

The bot's patron was none other than tech billionaire Elon Musk, who helped found and fund the institution that designed it, OpenAI. Musk wasn't present, but made his feelings known on Twitter, saying: "OpenAI first ever to defeat world's best players in competitive eSports. Vastly more complex than traditional board games like chess & Go." Even more exciting, said OpenAI, was that the AI had taught itself everything it knew. It learned purely by playing successive versions of itself, amassing lifetimes of in-game experience over the course of just two weeks.

But how big a deal is all this? Was Friday night's showdown really more impressive than Google's AI victories at the board game Go? The short answer is probably not, but it still represents a significant step forward, both for the world of e-sports and the world of artificial intelligence.

First, we need to look at Musk's claim that Dota is "vastly more complex than traditional board games like chess & Go." This is completely true. Real-time battle and strategy games like Dota and Starcraft II pose major challenges that computers just can't handle yet. Not only do these games demand long-term strategic thinking, but unlike board games they keep vital information hidden from players. You can see everything that's happening on a chess board, but you can't in a video game. This means you have to predict and preempt what your opponent will do. It takes imagination and intuition.

In Dota, this complexity is increased as human players are asked to work together in teams of five, coordinating strategies that will change on the fly based on which characters players choose. To make things even more complex, there are more than 100 different characters in-game, each with their own unique skill set, and characters can be equipped with a number of unique items, each of which can be game-winning if deployed at the right moment. All this means it's basically impossible to comprehensively program winning strategies into a Dota bot.

But the game that OpenAI's bot played was nowhere near as complex as all this. Instead of 5v5, it took on humans at 1v1; and instead of choosing a character, both human and computer were limited to the same hero, a fellow named the Shadow Fiend, who has a pretty straightforward set of attacks. My colleague Vlad Savov, a confirmed Dota addict who also wrote up his thoughts on Friday's match, said the 1v1 match represents only a fraction of the complexity of the full team contest. So: probably not as complex as Go.

The second major caveat is knowing what advantages OpenAI's agent had over its human opponents. One of the major points of discussion in the AI community was whether or not the bot had access to Dota's bot API, which would let it tap directly into streams of information from the game, like the distances between players. OpenAI's Greg Brockman confirmed to The Verge that the AI did indeed use the API, and that certain techniques were hardcoded in the agent, including the items it should use in the game. It was also taught certain strategies (like one called creep block) using a trial-and-error technique known as reinforcement learning. Basically, it did get a little coaching.

Andreas Theodorou, a games AI researcher at the University of Bath and an experienced Dota player, explains why this makes a difference. "One of the main things in Dota is that you need to calculate distances to know how far some [attacks] travel," he says. "The API allows bots to have specific indications of range. So you can say, 'If someone is in 500 meters range, do that,' but the human player has to calculate it themselves, learning through trial and error. It really gives them an advantage if they have access to information that a human player does not." This is particularly true in a 1v1 setting with a hero like Shadow Fiend, where players have to focus on timing their attacks correctly, rather than overall strategy.
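To see why API access matters, consider a range check. With exact coordinates, the check is one line of arithmetic; a human has to estimate the same distance from pixels on a screen. The function and constant below are illustrative, not the actual Dota 2 bot API.

```python
# Illustrative range check: trivial with exact positions from an API.
import math

ATTACK_RANGE = 500.0  # in-game distance units (hypothetical)

def should_attack(my_pos: tuple, enemy_pos: tuple) -> bool:
    """True if the enemy is inside attack range."""
    dist = math.hypot(my_pos[0] - enemy_pos[0], my_pos[1] - enemy_pos[1])
    return dist <= ATTACK_RANGE

print(should_attack((0.0, 0.0), (300.0, 350.0)))  # True: ~461 units away
```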

Brockman's response is that this sort of skill is trivial for an AI to learn, and was never the focus of OpenAI's research. He says the institute's bot could have done without information from the API, but "you'd just be spending a lot more of your time learning to do vision, which we already know works, so what's the benefit?"

So, knowing all this, should we dismiss OpenAI's victory? Not at all, says Brockman. He points out that, perhaps more important than the bot's victory, was how it taught itself in the first place. While previous AI champions like AlphaGo have learned how to play games by soaking up past matches by human champions, OpenAI's bot taught itself (nearly) everything it knows.

"You have this system that has just played against itself, and it has learned robust enough strategies to beat the top pros. That's not something you should take for granted," says Brockman. "And it's a big question for any machine learning system: how does complexity get into the model? Where does it come from?"

As OpenAI's Dota bot shows, he says, we don't have to teach computers complexity: they can learn it themselves. And although some of the bot's behavior was preprogrammed, it did develop some strategies by itself. For example, it learned how to fake out its opponents by pretending to trigger an attack, only to cancel at the last second, leaving the human player to dodge an attack that never comes, exactly like a feint in boxing.

Others, though, are still a little skeptical. AI researcher Denny Britz, who wrote a popular blog post that put the victory in context, tells The Verge that it's difficult to judge the scale of this achievement without knowing more technical details. (Brockman says these are forthcoming, but couldn't give an exact time frame.) "It's not clear what the technical contribution is at this point before the paper comes out," says Britz.

Theodorou points out that although OpenAI's bot beat Dendi onstage, once players got a good look at its tactics, they were able to outwit it. "If you look at the strategies they used, they played outside the box a bit and they won," he says. The players used offbeat strategies, the sort that wouldn't faze a human opponent, but which the AI had never seen before. "It didn't look like the bot was flexible enough," says Theodorou. (Brockman counters that once the bot learned these strategies, it wouldn't fall for them twice.)

All the experts agree that this was a major achievement, but that the real challenge is yet to come. That will be a 5v5 match, where OpenAI's agents have to manage not just a duel in the middle of the map, but a sprawling, chaotic battlefield, with multiple heroes, dozens of support units, and unexpected twists. Brockman says that OpenAI is currently targeting next year's grand Dota tournament, in 12 months' time, to pull this off. Between now and then, there's much more training to be done.


Scientists think we'll finally solve nuclear fusion thanks to cutting-edge AI – The Next Web

Scientists believe the world will see its first working thermonuclear fusion reactor by the year 2025. That's a tall order in short form, especially when you consider that fusion has been almost here for nearly a century.

Fusion reactors, not to be confused with common fission reactors, are the holiest of Grails when it comes to physics achievements. According to most experts, a successful fusion reactor would function as a near-unlimited source of energy.

In other words, if there's a working demonstration of an actual fusion reactor by 2025, we could see an end to the global energy crisis within a few decades.

TAE, one of the companies working on the fusion problem, says the big difference-maker now is machine learning. According to a report from Forbes, Google's been helping TAE come up with modern solutions to decades-old math problems by using novel AI systems to facilitate the discovery of new fusion techniques.

The CEO of TAE says his company will commercialize fusion technology within the decade. He's joined by executives from several other companies and academic institutions who believe we're finally within a decade or so of debuting the elusive energy technology; MIT researchers say they'll have theirs done before 2028.

But this level of optimism isn't reflected in the general scientific community. The promise of nuclear fusion has eluded the world's top researchers for so long now that, barring a major peer-reviewed eureka moment, most self-respecting physicists are taking these new developments with an industrial-sized grain of salt.

The problem's pretty simple: smash a couple of atoms together and soak up the resultant energy. But fusion is really, really difficult. It occurs naturally in stars such as our sun, but recreating the sun's conditions on Earth is simply not possible with our current technology.

First off, the sun is much more massive than the Earth, and that mass comes with the fusion-friendly benefit of increased gravity.

All that extra gravity smashes the sun's atoms into one another. The combination of pressure and heat (the sun's core rocks out at a spicy 27 million degrees Fahrenheit) forces hydrogen atoms to fuse together, becoming helium atoms. This results in the expulsion of energy.

What makes this type of energy so wonderful is the fact that fusion produces so much more energy than current methods. At least, it should. Unfortunately, all the current terrestrial attempts at fusion have come up short because, though many have been successful at fusing atoms, they always take more energy to produce the temperatures required to fuse atoms on Earth than said atoms produce in the process. This is because, lacking the requisite gravity, our only choice is to turn up the heat. Instead of 27 million degrees, Earth-bound fusion occurs at several hundred million degrees.
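The shortfall can be stated as a single ratio, the fusion gain Q: power out divided by heating power in, with Q greater than 1 being breakeven. The figures below are a commonly cited historical data point (the JET tokamak's 1997 record), used here purely as illustration.

```python
# Back-of-the-envelope fusion gain: Q = power out / heating power in.
def fusion_gain(power_out_mw: float, heating_in_mw: float) -> float:
    return power_out_mw / heating_in_mw

# JET, 1997: about 16 MW of fusion power for roughly 24 MW of input heating
q = fusion_gain(16.0, 24.0)
print(f"Q = {q:.2f} -> {'net gain' if q > 1 else 'net loss'}")  # Q = 0.67
```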

But now we've harnessed the power of the machines, something previous researchers never had at their disposal. So just how, exactly, is AI supposed to be the difference-maker? Mostly in the area of data analysis. Physics experiments aren't exactly simple, and sifting through the figurative tons of data produced by a fusion experiment is an inhuman task best left to the machines.

By giving physicists superhuman analysis abilities, AI lets them turn around experiments faster. This enables quicker iterations and more meaningful results. Whether or not this is the game changer that'll finally put us over the fusion hump remains to be seen, but there's plenty of reason for optimism.



Facebook’s AI accidentally created its own language – TNW

Researchers from the Facebook Artificial Intelligence Research lab (FAIR) recently made an unexpected discovery while trying to improve chatbots. The bots, known as dialog agents, were creating their own language.

Using machine learning algorithms, dialog agents were left to converse freely in an attempt to strengthen their conversational skills. Over time, the bots began to deviate from the scripted norms and, in doing so, started communicating in an entirely new language, one they created without human input.

In an attempt to better converse with humans, chatbots took it a step further and got better at communicating without them.

And it's not the only interesting discovery.

Researchers also found these bots to be incredibly crafty negotiators. After learning to negotiate, the bots relied on machine learning and advanced strategies in an attempt to improve the outcome of these negotiations. Over time, the bots became quite skilled at it and even began feigning interest in one item in order to sacrifice it at a later stage in the negotiation as a faux compromise.

We're not talking singularity-level beings here, but the findings are a huge leap forward for AI.

Deal or No Deal? End-to-End Learning for Negotiation Dialogues on Facebook AI Research & Georgia Institute of Technology



The AI adman – Axios

Marketing and advertising companies are increasingly using AI models to track trends and generate slogans.

The big picture: Marketers and advertisers focus on two things: identifying and predicting trends that indicate what consumers want, and shaping messages that will appeal to them.

By the numbers: The global market for AI in advertising and marketing is valued at more than $12 billion, and is projected to reach $107 billion by 2028, according to a recent report.

How it works: AI models are particularly good at drawing in vast amounts of data and identifying connections and correlations, which makes them excellent at instant trendspotting, says Daniel Anstandig, CEO of the enterprise tech platform Futuri.

Between the lines: AI can increasingly help with generating that content as well.

Details: I used neuroflash's product to try to generate a slogan for Axios Future, my twice-weekly newsletter.

The other side: If this feels a little mechanical, some flesh-and-blood Mad Men agree.

What's next: As AI text and even video generation improves, more and more of the digital ad copy we see will be at least touched by a machine.

The bottom line: And unlike the copywriters in "Mad Men," AI marketers won't raid your liquor cabinet.


Inside the Army’s futuristic test of its battlefield artificial intelligence in the desert – C4ISRNet

YUMA PROVING GROUND, Ariz. After weeks of work in the oppressive Arizona desert heat, the U.S. Army carried out a series of live fire engagements Sept. 23 at Yuma Proving Ground to show how artificial intelligence systems can work together to automatically detect threats, deliver targeting data and recommend weapons responses at blazing speeds.

Set in the year 2035, the engagements were the culmination of Project Convergence 2020, the first in a series of annual demonstrations utilizing next generation AI, network and software capabilities to show how the Army wants to fight in the future.

The Army was able to use a chain of artificial intelligence, software platforms and autonomous systems to take sensor data from all domains, transform it into targeting information, and select the best weapon system to respond to any given threat in just seconds.

Army officials claimed that these AI and autonomous capabilities have shortened the sensor-to-shooter timeline, the time it takes from when sensor data is collected to when a weapon system is ordered to engage, from 20 minutes to 20 seconds, depending on the quality of the network and the number of hops between where the data is collected and its destination.

"We use artificial intelligence and machine learning in several ways out here," Brigadier General Ross Coffman, director of the Army Futures Command's Next Generation Combat Vehicle Cross-Functional Team, told visiting media.

"We used artificial intelligence to autonomously conduct ground reconnaissance, employ sensors and then passed that information back. We used artificial intelligence and aided target recognition and machine learning to train algorithms on identification of various types of enemy forces. So, it was prevalent throughout the last six weeks."

Promethean Fire


The first exercise is illustrative of how the Army stacked AI capabilities together to automate the sensor-to-shooter pipeline. In that example, the Army used space-based sensors operating in low Earth orbit to take images of the battleground. Those images were downlinked to a TITAN ground station surrogate located at Joint Base Lewis-McChord in Washington, where they were processed and fused by a new system called Prometheus.

Currently under development, Prometheus is an AI system that takes the sensor data ingested by TITAN, fuses it, and identifies targets. The Army received its first Prometheus capability in 2019, although its targeting accuracy is still improving, according to one Army official at Project Convergence. In some engagements, operators were able to send in a drone to confirm potential threats identified by Prometheus.

From there, the targeting data was delivered to a Tactical Assault Kit, a software program that gives operators an overhead view of the battlefield populated with both blue and red forces. As new threats are identified by Prometheus or other systems, that data is automatically entered into the program to show users their location. Specific images and live feeds can be pulled up in the environment as needed.

All of that takes place in just seconds.

Once the Army has its target, it needs to determine the best response. Enter the real star of the show: the FIRES Synchronization to Optimize Responses in Multi-Domain Operations, or FIRESTORM.

What is FIRESTORM? "Simply put, it's a computer brain that recommends the best shooter, updates the common operating picture with the current enemy situation and friendly situation, and missions the effectors that we want to eradicate the enemy on the battlefield," said Coffman.

Army leaders were effusive in praising FIRESTORM throughout Project Convergence. The AI system works within the Tactical Assault Kit. Once new threats are entered into the program, FIRESTORM processes the terrain, available weapons, proximity, the number of other threats and more to determine the best firing system to respond to that given threat. Operators can assess and follow through with the system's recommendations with just a few clicks of the mouse, sending orders to soldiers or weapons systems within seconds of identifying a threat.

Just as important, FIRESTORM provides critical target deconfliction, ensuring that multiple weapons systems aren't redundantly firing on the same threat. Right now, that sort of deconfliction would have to take place over a phone call between operators. FIRESTORM speeds up that process and eliminates any potential misunderstandings.
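As a toy illustration of recommendation plus deconfliction, not FIRESTORM's actual algorithm, the logic can be reduced to scoring the available shooters against a threat and marking the chosen one as assigned so nothing else is tasked redundantly. Every name and number below is invented.

```python
# Toy shooter recommendation with deconfliction. Invented data and scoring.
from dataclasses import dataclass

@dataclass
class Shooter:
    name: str
    range_km: float
    position_km: float  # 1-D stand-in for a map position
    assigned: bool = False

def recommend(shooters, threat_km):
    """Pick the closest in-range, unassigned shooter and mark it assigned."""
    candidates = [s for s in shooters if not s.assigned
                  and abs(s.position_km - threat_km) <= s.range_km]
    if not candidates:
        return None
    best = min(candidates, key=lambda s: abs(s.position_km - threat_km))
    best.assigned = True  # deconfliction: one shooter per threat
    return best

arsenal = [Shooter("cannon", 70.0, 0.0), Shooter("armed drone", 200.0, 25.0)]
print(recommend(arsenal, 40.0).name)  # "armed drone": closest in range
print(recommend(arsenal, 40.0).name)  # "cannon": the drone is already tasked
```

A real system would weigh terrain, effects and many more factors, as the article notes, but the assign-and-exclude step is what replaces the phone call between operators.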

In that first engagement, FIRESTORM recommended the use of an Extended-Range Cannon Artillery system. Operators approved the algorithm's choice, and promptly the cannon fired a projectile at the target located 40 kilometers away. The process from identifying the target to sending those orders happened faster than it took the projectile to reach the target.

Perhaps most surprising is how quickly FIRESTORM was integrated into Project Convergence.

"This computer program has been worked on in New Jersey for a couple years. It's not a program of record. This is something that they brought to my attention in July of last year, but it needed a little bit of work. So we put effort, we put scientists and we put some money against it," said Coffman. "The way we used it is, as enemy targets were identified on the battlefield, FIRESTORM quickly paired those targets with the best shooter in position to put effects on it. This is happening faster than any human could execute. It is absolutely an amazing technology."

Dead Center

Prometheus and FIRESTORM weren't the only AI capabilities on display at Project Convergence.

In other scenarios, an MQ-1C Gray Eagle drone was able to identify and target a threat using the on-board Dead Center payload. With Dead Center, the Gray Eagle was able to process the sensor data it was collecting, identifying a threat on its own without having to send the raw data back to a command post for processing and target identification. The drone was also equipped with the Maven Smart System and Algorithmic Inference Platform, a product created by Project Maven, a major Department of Defense effort to use AI for processing full-motion video.

According to one Army officer, the capabilities of the Maven Smart System and Dead Center overlap, but placing both on the modified Gray Eagle at Project Convergence helped them to see how they compared.

With all of the AI engagements, the Army ensured there was a human in the loop to provide oversight of the algorithms' recommendations. When asked how the Army was implementing the Department of Defense's principles of ethical AI use adopted earlier this year, Coffman pointed to the human barrier between AI systems and lethal decisions.

"So obviously the technology exists to remove the human. Right, the technology exists. But the United States Army, an ethically based organization, is not going to remove a human from the loop to make decisions of life or death on the battlefield, right? We understand that," explained Coffman. "The artificial intelligence identified geo-located enemy targets. A human then said, 'Yes, we want to shoot at that target.'"

Here is the original post:

Inside the Army's futuristic test of its battlefield artificial intelligence in the desert - C4ISRNet

Equifax and SAS Leverage AI And Deep Learning To Improve Consumer Access To Credit – Forbes


Artificial intelligence, machine learning, and neural network-based deep learning are concepts that have recently come to dominate venture capital funding, startup formation, promotion and exits, and policy discussions. The highly-publicized triumphs ...

Excerpt from:

Equifax and SAS Leverage AI And Deep Learning To Improve Consumer Access To Credit - Forbes

Google's AI-focused venture fund leads $5.4M investment for Seattle analytics startup Iteratively – GeekWire

Iteratively CEO Patrick Thompson. (Iteratively Photo)

Seattle startup Iteratively raised a $5.4 million round led by Gradient Ventures, Google's AI-focused venture fund.

Founded in 2019 by veterans of Atlassian and Microsoft, Iteratively sells software that helps data and product teams track customer analytics. The idea is to prevent data quality problems at the point of entry and keep standardized customer data in one place. Iteratively integrates with third-party data analytics tools such as Amplitude, Mixpanel, Segment, dbt, and more.
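
Iteratively's actual SDKs are generated from a team's tracking plan, but the core idea of catching data-quality problems at the point of entry can be sketched in a few lines. Everything below, from the event name to the schema layout, is a hypothetical illustration rather than Iteratively's API.

```python
# Hypothetical sketch of tracking-plan validation; Iteratively's real SDK
# differs, and this event schema is invented for illustration.
TRACKING_PLAN = {
    "Order Completed": {
        "required": {"order_id": str, "revenue": float},
        "optional": {"coupon": str},
    },
}

def validate_event(name, properties):
    """Reject events that drift from the agreed schema before they reach
    downstream tools such as Amplitude, Mixpanel, or Segment."""
    schema = TRACKING_PLAN.get(name)
    if schema is None:
        raise ValueError(f"unknown event {name!r}: not in the tracking plan")
    for prop, expected_type in schema["required"].items():
        if prop not in properties:
            raise ValueError(f"{name}: missing required property {prop!r}")
        if not isinstance(properties[prop], expected_type):
            raise TypeError(f"{name}: {prop!r} should be {expected_type.__name__}")
    allowed = set(schema["required"]) | set(schema["optional"])
    unplanned = set(properties) - allowed
    if unplanned:
        raise ValueError(f"{name}: unplanned properties {sorted(unplanned)}")

# Passes cleanly; a misspelled or wrongly typed property would raise instead.
validate_event("Order Completed", {"order_id": "A-1001", "revenue": 42.0})
```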

Box, Beekeeper, thredUP, Dribbble, and others are clients.

The 10-person company is led by CEO Patrick Thompson, who co-founded Iteratively with Ondrej Hrebicek. They previously worked together at Syncplicity, a file sharing startup co-founded by Hrebicek that was acquired by EMC in 2016.

"We kept hearing the same thing from data and product teams: that they have lost confidence in their analytics," Thompson said in a statement. "We built a tool that helps them rebuild trust in their data and empowers them to collaborate on analytics. We believe data is a team sport, and collaboration is key for cross-functional teams to succeed."

Fika Ventures and PSL Ventures also participated in the round. PSL Ventures led a previous round in 2019. Zach Bratun-Glennon, partner at Gradient Ventures, joined the board.

"The Iteratively team possesses a relentless focus on finally creating the source of truth for analytics data," PSL wrote in a blog post. "This trustworthy foundation unlocks countless new data use cases, from personalization and recommendation engines to drive growth, to churn prediction and prevention to improve retention, to new 1-1 marketing scenarios."

Google launched Gradient Ventures in 2017 as part of Alphabet's continued investment in AI. Gradient portfolio companies get access to AI training from Google and help from Google engineers.

See the original post:

Google's AI-focused venture fund leads $5.4M investment for Seattle analytics startup Iteratively - GeekWire

Beyond Limits Brings Space-Tested AI to Earth’s Harshest Terrains – PCMag

(Photo by Greg Rakozy on Unsplash)

When sending rovers and robots to traverse rough terrain, from the surface of Mars to the bottom of the ocean, communicating with these devices from afar can be a dicey affair. One false move and million-dollar gadgets turn into bricks and valuable data is toast.

Glendale, California-based Beyond Limits wants to make sure that never happens. Its software adds "human-like reasoning" to technology that solves complex problems in high-risk environments. As CEO AJ Abdallat explains, these "bio-inspired algorithms...imitate the functions of a human brain."

After one year of exclusivity with BP, which also invested $20 million in the company, Beyond Limits has signed a $25 million contract with Xcell to build the world's first power plant guided by cognitive AI in West Africa. PCMag spoke with Abdallat ahead of his keynote at the Phi Science Institute AI Summit in Jordan. Here are edited and condensed excerpts of our conversation.

PCMag: Beyond Limits is deploying many technologies developed by your co-founder and CTO, Dr. Mark James, a research scientist who worked at NASA-JPL for over 20 years. How did you two meet?

AJ Abdallat: We met in '98 at Caltech, which manages the [NASA] Jet Propulsion Laboratory. I was working with the Caltech president and the Technology Transfer Program to commercialize technologies that were developed for the space program and make them available on Earth.

Give us the backstory on Dr. James' work at JPL.

Mark designed and wrote NASA's first AI system, the Spacecraft Health Automated Reasoning Program [SHARP], which was used for the Voyager mission. It monitored all the system and performance data from the Deep Space Network. Since Neptune is 2.7 billion miles away from Earth, it takes three huge DSN antenna arrays in North America, Spain, and Australia to communicate with the spacecraft.

As Voyager 2 headed toward Neptune, Mark's AI system predicted the imminent failure of a key communications transponder (at mission control) that would have caused a catastrophic break in comms. The spacecraft could have burned up in the Neptune atmosphere and the mission would have terminated. Instead, engineers were able to replace the transponder just in time and the mission continues to this day, more than 10.3 billion miles from Earth.

An artist's concept of NASA's Voyager spacecraft. (Credit: NASA)

Tell us about the Autonomous AI for the Mars Opportunity Rover.

Solar energy is the lifeblood of spacecraft like the Mars Opportunity Rover, but the conditions up there are harsh, unknown, and unpredictable. So the management of energy is mission-critical. A key component of Beyond Limits' AI solutions is a technology called the Hypothetical Scenario Generator (HSG), a revolutionary way of reasoning in the presence of missing and misleading information developed by JPL for NASA.

This advanced software system analyzes data inputs, generates hypothetical situations, and reasons optimal behaviors and results. Early Mars missions suffered from a lack of information about conditions on the surface. Human expertise and limited geographical data were loaded into the HSG. But when bad Mars weather threatened the mission, HSG did not have access to historical weather data. There was no historical data, as it was a first-of-its-kind mission.

But HSG is capable of learning autonomously.

Exactly. When the Rover was having trouble charging its batteries, it had detected clouds and wind, and HSG associated the clouds with particulates, which no one had ever encountered on Mars. HSG reasoned that a cloud might deposit particulates on the solar panels and conducted an autonomous experiment by rotating the Rover's solar wings upside down to shake off the dust. It worked, and the Rover's health was assured for years to come. JPL scientists on Earth noticed that HSG had taught itself to correlate hypotheses that had been proven correct with sensor data from the Rover.

(Photo by NASA on Unsplash)

That's amazing.

HSG learned on its own to optimize the Rover's behavior to conserve power, deploy solar cells safely, and keep the system charged, even during harsh Mars sand and wind storms. HSG had induced new weather models from scratch. The results kept the mission going far beyond its expected lifespan.
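
The HSG itself is proprietary, JPL-derived technology, so the sketch below is only a toy that mirrors the shape of the generate-and-test reasoning Abdallat describes. The observations, candidate hypotheses, and experiment are all invented for illustration.

```python
# Toy generate-and-test loop in the spirit of the HSG anecdote above;
# every rule and observation here is invented, not JPL code.
def generate_hypotheses(observations):
    """Propose explanations for an anomaly from partial, possibly misleading data."""
    hypotheses = []
    if {"clouds", "low_solar_charge"} <= observations:
        hypotheses.append("dust_on_panels")   # clouds may deposit particulates
    if "low_solar_charge" in observations:
        hypotheses.append("panel_fault")      # or the panel itself failed
    return hypotheses

def shake_panels_experiment(hypothesis):
    # Stand-in for the rover rotating its solar wings; a real system would
    # compare charge rates before and after the maneuver.
    return hypothesis == "dust_on_panels"

observations = {"clouds", "wind", "low_solar_charge"}
confirmed = [h for h in generate_hypotheses(observations)
             if shake_panels_experiment(h)]
print(confirmed)  # ['dust_on_panels'] -- correlate the confirmed fix with sensor data
```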

How do Beyond Limits' AI solutions differ from others in the market today?

We specialize in solving complex problems in high-risk environments. Unlike the conventional machine learning, neural networks, and deep learning techniques that are gaining traction today, we take a different approach by adding a symbolic reasoning layer to produce cognitive, human-like reasoning. Beyond Limits has deep roots in what we call bio-inspired algorithms that imitate the functions of a human brain. That allows us to do things like deductive, inductive, and abductive human-like reasoning.

Your AI isn't a black box, then.

No. Unlike conventional AI approaches, Beyond Limits' AI systems are explainable. The results our systems produce have transparent and detailed audit trails, which are interpretable by both humans and machines. This sets our systems apart from "black box" conventional AI systems that cannot explain how they arrived at a recommendation. Our systems provide an audit trail that explains the rationale and evidence for the answer in natural language. In high-value industries, establishing trust is important, and you need explainability to do this.
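
This is not Beyond Limits' implementation, but the general pattern of a recommendation that carries its own audit trail is easy to sketch. The rule names, thresholds, and sensor readings below are invented for illustration.

```python
# Hypothetical sketch of an explainable recommendation with an audit trail;
# the rules, thresholds, and readings are invented examples.
def evaluate(readings, rules):
    """Return a recommendation along with every rule checked and the evidence used."""
    audit_trail = []
    recommendation = None
    for name, condition, action in rules:
        fired = condition(readings)
        audit_trail.append({"rule": name, "fired": fired, "evidence": dict(readings)})
        if fired and recommendation is None:
            recommendation = action
    return {"recommendation": recommendation, "audit_trail": audit_trail}

rules = [
    ("pressure-spike", lambda r: r["wellhead_psi"] > 9000,
     "Throttle back and inspect the blowout preventer"),
    ("temp-drift", lambda r: r["temp_c"] > 120,
     "Schedule a cooling-system check"),
]
result = evaluate({"wellhead_psi": 9500, "temp_c": 95}, rules)
print(result["recommendation"])        # the answer...
for entry in result["audit_trail"]:
    print(entry)                       # ...and the rationale, replayable by humans or machines
```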

Do you provide updates on the core technology back to JPL as part of your licensing agreement?

Yes. We enhance some of the IP blocks we've licensed from Caltech/JPL and contribute them back to the core. Our people frequently work with Caltech/JPL people, and I'm on the board of advisors for Caltech's CAST lab. We do not supply software to NASA currently. Space is our origin, but our mission today is to solve problems here on Earth.

(Photo by WORKSITE Ltd. on Unsplash)

In fact, your original client and investor was BP, which called on your expertise after the Deepwater Horizon disaster. How did you beat out GE, IBM, and other incumbents in that space?

GE and IBM are really good in their fields of conventional AI, but what BP was looking for was a cognitive AI approach. Conventional AI is a great way to analyze a lot of data and tell you the what, but you need cognitive AI to explain the answer and tell you the why. Cognitive AI is needed for true explainability, and conventional black-box approaches simply cannot explain their answers, which means engineers cannot fully trust the system or apply it to high-value assets. As AI systems are rolled out at BP, they will increase efficiency, generate revenue, and diagnose problems and predict remedies, all of which could help prevent disasters like Deepwater Horizon from happening again.

BP's exclusivity with Beyond Limits recently ended. What's next for you?

The natural next step for us was to expand into natural resources and power management. We recently announced a $25 million project with Xcell for the world's first cognitive AI power plant. We are also working with a car company to monitor driver health while in the car.

Process manufacturing is also going to be a focus for us. These are very complex factories that run 24/7, 365 days a year. Cognitive AI can make these factories run more efficiently, with less risk and downtime, while maximizing profits. One of the big highlights is that we've proved our cognitive approach works for a very tough commercial audience. We are working in high-value, high-risk industries.

Finally, I have to ask, sticking with the space origin story: have you built a [benign] HAL 9000?

We are not comfortable with the sci-fi cliches about deadly robots, killer cyborgs, and so on. Artificial General Intelligence [AGI], as a concept, is one that's as smart as a human. This is science fiction. The compute required for such a super-powered AI system would fill a football arena and require a huge power plant. Our systems accommodate humans in the loop. The role of our AI systems at Beyond Limits is to advise humans and help with decision-making. Additionally, in many cases, our technology can be embedded in the sensors themselves.

You've built an AI that is more of an IA [Intelligent Augmentation] for us bio-beings, then?

Yes, humans make the final decisions with our systems.

AJ Abdallat will be giving the keynote at the Phi Science Institute AI Summit in Jordan on Oct. 29.

Read this article:

Beyond Limits Brings Space-Tested AI to Earth's Harshest Terrains - PCMag

2020 Is The Year AI Goes Mainstream In Marketing – Forbes

Getty

These and many other insights are from Blueshift's study published in collaboration with Kelton Global, titled "Marketer Martech in 2020: An annual look at tech stack weak spots and wins." You can download a copy of the study here (13 pp., PDF, opt-in). The methodology is based on interviews with 514 B2C marketers distributed across several industries, including e-commerce and retail, digital media & publishing, consumer finance, career development & education, hospitality & travel, arts & entertainment, and real estate. 76% of the respondents hold senior-level positions at B2C marketing companies.

What's noteworthy about the study is the finding that marketers are relying on more integrated tech stacks combined with AI applications to increase real-time responsiveness with customers across multiple channels.

Key insights from the study were presented in a series of charts, each captioned "Marketer Martech in 2020: An annual look at tech stack weak spots and wins."

Link:

2020 Is The Year AI Goes Mainstream In Marketing - Forbes

Choosing Between Rule-Based Bots And AI Bots – Forbes

Shutterstock

Until a decade ago, the only option people had to reach out to a company was to call or email its customer service team. Now, companies offer a chat team to provide better round-the-clock customer service. According to a Facebook-commissioned study by Nielsen, 56% of people would prefer to message rather than call customer service, and that's where bots come into play.

Bots are revolutionizing the way companies interact with their customers. A decade ago, bots were considered a passing tech fad. That debate has been put to rest now that major companies like Amazon, Microsoft and Facebook have started deploying bots in almost every area of their business. The new debate brewing in the bot community is the choice between rule-based bots and AI bots. Which one to choose? Which one is better? These are the questions on the minds of business leaders intending to utilize bots in their organizations. Many factors contribute to the efficiency of bots for different applications, and understanding these factors can help businesses make informed decisions when choosing between rule-based and AI bots.

Building and deploying bots is now on most companies' to-do lists, if bots aren't already deployed. Nevertheless, most are confused about whether they should go with rule-based bots or AI bots. Let's evaluate the pros and cons of each.

Rule-based bots answer questions based on a predefined set of rules that are embedded into them, and those rules can vary greatly in complexity. Building rule-based bots is much simpler than building AI bots: because the bots are built on a conditional if/then basis, they are faster to train, which in turn reduces implementation cost. Rule-based bots are also highly accountable and secure. They cannot learn on their own and will provide only the answers that companies want them to provide, which ensures consistent customer service. If a customer asks something that is absent from the database, a rule-based bot can professionally hand the conversation over to a human agent, ensuring that no unnecessary information is conveyed to the customer.

A rule-based approach also enables faster implementation. Unlike AI bots, rule-based bots do not need to wait years to gather data that algorithms can analyze to understand customer problems and provide solutions. They can be implemented simply by embedding known scenarios and their outputs, and then extended with more data as new conversational patterns emerge from customer interactions. Still, the limitations of rule-based bots cannot be overlooked.

The problem with predefined rule-based bots is that they need to be embedded with rules for performing every task, small or complex. If anything outside the database comes their way, rule-based bots hand the conversation over to humans. That means rule-based bots cannot operate on a standalone basis; they need human intervention at some point.

Another limitation of rule-based bots is personalized communication. Chatbots may serve different people speaking different languages, and beyond language, the way of communicating varies from person to person. For instance, to book a flight to Paris, one person may say, "I want to book a flight to Paris," and another may say, "I need a ticket to Paris." Both statements mean the same thing, yet if the rule-based bot is unable to understand that, it will pass the conversation to a human, which may frustrate the customer.
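
A minimal sketch of that failure mode, with invented patterns and replies: the first phrasing matches a rule, while the semantically identical second one falls through to a human handoff.

```python
# Minimal rule-based matcher; the patterns and replies are invented examples.
RULES = {
    "i want to book a flight to": "Sure, let's book that flight. Which date?",
    "what is the status of my order": "Please share your order number.",
}

def rule_based_reply(message):
    text = message.lower()
    for pattern, reply in RULES.items():
        if pattern in text:
            return reply
    return "HANDOFF_TO_HUMAN"  # anything off-script escalates to an agent

print(rule_based_reply("I want to book a flight to Paris"))  # matches a rule
print(rule_based_reply("I need a ticket to Paris"))          # same intent -> HANDOFF_TO_HUMAN
```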

Rule-based bots can be embedded with information from new conversational patterns as time passes. Nevertheless, embedding every possible scenario becomes a challenge for developers. Although rule-based bots can be implemented quickly, they become hard to maintain over time.

AI bots are self-learning bots built with Natural Language Processing (NLP) and machine learning. Training and building an AI bot takes a long time initially, but AI bots can save a lot of time and money in the long run. AI bots work well for companies that have a lot of data, as the bots can learn from that data on their own. This self-learning ability saves money because, unlike rule-based bots, AI bots do not need to be manually updated at regular intervals. AI bots can be programmed to understand different languages and can address the personalized-communication challenges faced by rule-based bots.
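
As a rough illustration of how an NLP-based bot generalizes across phrasings, the sketch below trains a tiny intent classifier with scikit-learn. The six training phrases and two intents are invented, and production bots use far richer models and far more data.

```python
# Tiny intent classifier; illustrative only, with an invented training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "i want to book a flight to paris",
    "i need a ticket to paris",
    "book me a flight",
    "what is the status of my order",
    "where is my package",
    "has my order shipped",
]
intents = ["book_flight", "book_flight", "book_flight",
           "order_status", "order_status", "order_status"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(training_phrases, intents)

# Both phrasings map to one intent, where the rule-based bot saw two separate cases:
print(model.predict(["i need a ticket to paris"]))  # ['book_flight']
```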

With deep learning, AI bots can learn to read a customer's emotions and interact with the customer based on their mood. For instance, Emotibot, a China-based startup, is helping to develop chatbots that can detect a customer's current mood and respond accordingly. With constant learning, AI bots can provide personalized customer service to enhance customer engagement. And since AI bots can handle customer queries end to end without human intervention, they can be deployed for round-the-clock customer service.

AI can make chatbots smart, but it cannot make them understand the context of human interactions. For example, humans change their way of communicating depending on whom they are communicating with: with small children they use simpler words and shorter sentences, and with clients they use a more formal tone. Since bots cannot understand this human context, they communicate with everyone in the same way, irrespective of age or gender. The self-learning ability of AI bots might seem helpful to businesses, but it can cause trouble. AI bots lack sound judgment about what they should learn, and thus can learn things they are not supposed to. For instance, Microsoft's chatbot Tay was manipulated through social engineering on Twitter and started posting offensive phrases like "Hitler was right" in a canned "repeat after me" series of tweets.

These advantages and disadvantages can help companies decide whether to use rule-based bots or AI bots, but only to a certain extent. There are many other factors enterprises should consider before implementing chatbots: whether the bots will serve B2B or B2C customers, in what areas the bots will be deployed, and how the bots will be maintained. Rule-based bots and AI bots each have their own benefits and drawbacks, and both can be useful in their own ways. Enhanced customer service is king when it comes to the growth of a business, and understanding how different bots will improve customer service ultimately helps companies choose the bot best suited to their business.

Read this article:

Choosing Between Rule-Based Bots And AI Bots - Forbes