DigestAI's 19-year-old founder wants to make education addictive – TechCrunch

When Quddus Pativada was 14, he wished that he had an app that could summarize his textbooks for him. Just five years later, Pativada has been there and done that: earlier this year, he launched the AI-based app Kado, which turns photos, documents or PDFs into flash cards. Now, as the 19-year-old founder takes the stage for Startup Battlefield, he's looking to take his company, DigestAI, beyond flashcards to create an AI dialogue assistant that we can all carry around on our phones.

"If we make learning truly easy and accessible, it's something you could do as soon as you open your phone," Pativada told TechCrunch. "We want to put a teacher in every single person's phone for every topic in the world."

Quddus Pativada, founder at DigestAI, pitches as part of TechCrunch Startup Battlefield at TechCrunch Disrupt in San Francisco on October 18, 2022. Image Credits: Haje Kamps / TechCrunch

The company's AI is trained on data from the internet, but the algorithm is fine-tuned to recall specific use cases to make sure that its responses are accurate and not too thrown off by online chaos.

"We train it on everything, but the actual use cases are called within silos. We're calling it federated learning, where it's sort of siloed in and language models are operating on a use-case basis," Pativada said. "This is good because it avoids malicious use."

Pativada said that this kind of product would be different from smart assistants like Apple's Siri or Amazon's Alexa because the information it provides would be more personalized and detailed. So, for certain use cases, like asking for sources to use in an essay, the AI will pull from academic journals to make sure that the information is accurate and appropriate for a classroom.

Despite running an educational AI startup, Pativada isn't currently in school. He took a gap year before going to college to work on his startup, but as DigestAI took off, he decided to keep building instead of going back to school. Growing up, he taught himself to code because he loved video games and wanted to make his own; by age 10, he had published a Flappy Bird clone on the App Store. Naturally, his technological ambitions matured a bit over time. Before founding DigestAI, Pativada built a COVID-19 contact tracing platform. At first, he just made the app as a tool for his classmates, but his work ended up being honored by the United Arab Emirates government.

Image Credits: DigestAI

So far, the outlook is good for the Dubai-based company. Pativada, who says he feels skittish about the CEO label and prefers to think of himself as just a founder, has raised $600,000 from angel investors like Mark Cuban and Shaan Patel, who struck a deal on Shark Tank for his SAT prep company, Prep Expert.

How does a 19-year-old in Dubai capture the attention of one of the most well-known startup investors? A cold email. (Mark, we apologize if this admission makes your inbox even more nightmarish.)

"I was watching a GQ video of Mark Cuban's daily routine," Pativada said. "He said he reads his emails every morning at 9 AM, and I looked at the time in Dallas, and it was about 9 AM. So I was like, maybe I should just shoot him an email and see what happens." While he was at it, he reached out to Patel, whose educational startup has done over $20 million in sales. Patel hopped on a video call with the teenage founder, and by the next week, he and Cuban both offered to invest in DigestAI.

"We raised our entire round through cold emails and Zoom," Pativada told TechCrunch. "It sort of helped because no one can see how young I look in person."

Before he decided to eschew college altogether, Pativada applied to Stanford and interviewed with an alumnus, as is standard in the admissions process. He didn't end up getting into the competitive Palo Alto university, but his interviewer, who works at Stanford, did end up investing in his company. Go figure.

"Our goal is to work with universities like Stanford," Pativada said. The company is also targeting enterprise clients. Currently, DigestAI works with some U.S.-based universities, Bocconi University in Italy, a European law firm and other clients. At the law firm, DigestAI is testing a tool that allows associates to text a WhatsApp number to quickly brush up on legal terms.

In the long term, DigestAI wants to create an SMS system where people can text the AI asking for help learning something; he wants information to be so accessible that it's addictive.

"That is what AI is: it's almost the best version of a human being," Pativada said.


This little USB stick is designed to make AI plug-and-play – The Verge

Step by step, artificial intelligence is moving down from the cloud and into the device in your hand. The latest sign? This unassuming little thumb drive from chipmaker Movidius, which packs one of the company's machine vision processors (the same chip used by DJI for its autonomous drones) into a plug-and-play USB stick. If manufacturers want to beef up the AI capabilities of their new product, all they need to do is plug in one of these.

The Movidius Neural Compute Stick was actually announced last April as a prototype device called the Fathom. But then Intel came a-calling, and bought Movidius in September that year for an undisclosed amount. In all the work and confusion that comes with any sale like that, the Fathom got put on hold. Now, though, it's back.

From a technical point of view, the new Compute Stick is the same as the old one. At its heart is a Myriad 2 Vision Processing Unit, or VPU: a low-power processor (it consumes just a single watt) that uses twelve parallel cores to run vision algorithms like object detection and facial recognition. Movidius says it delivers more than 100 gigaflops of performance, and can natively run neural networks built using the Caffe framework. (Caffe is one of the better-known neural network libraries around, but it's not clear if the Compute Stick will also work with Google's popular TensorFlow framework.) For more details, you can check out the full spec sheet for the Myriad 2.

The main changes in this new version are that it's made out of aluminum instead of plastic, and the price has been cut from a putative $99 for the original to $79. Movidius says Intel's involvement helped push this price down.

But who will use the Neural Compute Stick? Well, it'll come in handy for a few different groups. AI researchers will be able to use the stick as an accelerator, plugging it into their computers to get a little more local power when training and designing new neural nets. (Movidius notes that you can also chain multiple sticks together, boosting the performance linearly with each one you add.) Companies looking to put AI powers in a physical product will also benefit, with the USB-compatible stick giving them an easy and fast way to execute neural networks locally.
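Movidius's claim that chaining sticks boosts performance linearly can be pictured with a short, purely illustrative Python sketch. The throughput figure and the even-split scheduling below are assumptions for illustration, not measured Myriad 2 behavior or the stick's real SDK:

```python
# Hypothetical model of dividing a batch of inference jobs across N
# identical accelerator sticks. Each stick works independently, so total
# time is set by the busiest stick. The rate of 10 inferences per second
# is an arbitrary illustrative number, not a Myriad 2 benchmark.
def batch_seconds(num_jobs, num_sticks, per_stick_rate=10.0):
    jobs_per_stick = -(-num_jobs // num_sticks)  # ceiling division
    return jobs_per_stick / per_stick_rate

print(batch_seconds(100, 1))  # 10.0 -- one stick
print(batch_seconds(100, 4))  # 2.5  -- four sticks, a 4x speedup
```

As long as the jobs divide evenly, doubling the sticks halves the time, which is all "linear scaling" means here.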

But of course, a device like this certainly has its limitations. For a company building, say, an AI-powered security camera, there will be more efficient ways to incorporate specialized vision processors in their product, especially if they're manufacturing at scale. And for a researcher training new neural nets, buying the latest graphics cards or renting processing power in the cloud will offer quicker results. It'll just be more expensive, too.

What a device like the Neural Compute Stick does well is fill a gap in the market. And, in doing so, it makes artificial intelligence that little bit more accessible.


5 Ways IBM Predicts AI and Ad Tech Will Evolve in 2021 – Adweek

With tech giants set to crack down on cookies and third-party trackers in the coming months, the ad-tech industry is in for some major changes.

IBM Watson Advertising has bet that artificial intelligence and anonymized behavioral insights will play a central role in that post-cookie future. The company has rolled out a series of product releases this year that aimed to lessen marketers' reliance on personal data.

In a new report this week, Sheri Bachstein, global head of IBM Watson Advertising and The Weather Company, laid out some predictions for how those changes may take shape in the year to come, from a ramping up of discussions around consumer privacy to what a post-Covid-19 "new normal" might look like.

Bachstein expects pushes for more data privacy policies, like the European Union's General Data Protection Regulation and California's Prop 24, which modifies the state's existing Consumer Privacy Act, to intensify in the coming year. These efforts could create a patchwork of state-by-state regulations that might make it difficult for some companies to scale.

To avoid that situation, IBM is calling on the industry to join in advocating for federal legislation that would standardize rules across the board. "This effort should be collaborative and include viewpoints from a variety of industry partners, councils and big technology brands to ensure legislation works across the entire ecosystem," Bachstein said.

Partly as a result of legislative pushes, consumers will likely have more transparency into what data is being collected on them and how it's being used, Bachstein predicts. But consumers will also continue to expect personalized experiences, meaning that there will still be a market for targeting.

Marketers are operating under conditions that are unique to the current state of the pandemic, and Bachstein expects many of those changes to revert next year as the world eventually begins to reopen. While some virtual formats, like video conferencing platforms and augmented reality, will likely see lasting effects, other trends, like a growth in desktop performance over mobile, will return to the overarching trajectories of the years before the pandemic.

"There are going to be some user behaviors that may stick around," Bachstein told Adweek. "But as people start going back to work, some of the digital behaviors that we're seeing will likely return to normal."

Meanwhile, industries will likely recover from the economic turmoil at different paces. Industries such as travel, publishing and advertising, for instance, may be slower to bounce back from the devastation.

The Trade Desk recently struck a series of major partnerships in the ad-tech industry for its Unified ID 2.0 initiative, which seeks to use encryption to create a standardized replacement for third-party cookies. IBM believes that collaborative efforts like these are a step in the right direction, but ultimately won't make up for the capabilities that will be lost with the end of third-party tracking.

Bachstein maintains that the shift to reliance on AI-gleaned consumer insights will ultimately be as transformative for the ad-tech industry as the transition to programmatic was a decade ago. But the company stresses that adoption will take time, and that consumers and business clients still don't fully understand the ins and outs of what the technology can do.

"When programmatic came on the scene 10 years ago, it took a while for everyone to really adopt it. And AI is probably going to be similar in that some people will be early adopters of it," Bachstein said. "But it is going to take education. We've got to take AI and make it not a buzzword anymore, but put it into practice to get results."


Quick-Thinking AI Camera Mimics the Human Brain – Scientific American

Researchers in Europe are developing a camera that will literally have a mind of its own, with brainlike algorithms that process images and light sensors that mimic the human retina. Its makers hope it will prove that artificial intelligence, which today requires large, sophisticated computers, can soon be packed into small consumer electronics. But as much as an AI camera would make a nifty smartphone feature, the technology's biggest impact may actually be speeding up the way self-driving cars and autonomous flying drones sense and react to their surroundings.

The conventional digital cameras used in self-driving and computer-assisted cars and drones, as well as in surveillance devices, capture a lot of extraneous information that eats up precious memory space and battery life. Much of that data is repetitive because the scene the camera is watching does not change much from frame to frame. The new AI camera, called an ultralow-power event-based camera, or ULPEC, will have pixel sensors that come to life only when the camera is ready to record a new image or event. That memory- and power-saving feature will not slow performance: the camera will also have new electrical components that allow it to react to changing light or movement in a scene within microseconds (millionths of a second), compared with milliseconds (thousandths) in today's digital cameras, says Ryad Benosman, a professor at Pierre and Marie Curie University who leads the Vision and Natural Computation group at the Paris-based Vision Institute. It records only when the light striking the pixel sensors crosses a preset threshold amount, says Benosman, whose team is developing the learning algorithms for an artificial neural network that serves as the camera's "brain."

An artificial neural network is a group of interconnected computers configured to work like a system of flesh-and-blood neurons in the human brain. The interconnections among the computers enable the network to find patterns in data fed into the system, and to filter out extraneous information via a process called machine learning. Such a network does away with not only acquiring but also processing irrelevant information, thus making the camera faster and requiring less power for computation, Benosman says.
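The threshold behavior Benosman describes can be sketched in a few lines of Python. This is a toy model for illustration only; the logarithmic intensity measure and the threshold value are common conventions in event-camera research, not ULPEC specifications:

```python
import math

def pixel_events(intensities, threshold=0.2):
    """Emit an event only when the log-intensity change since the last
    event crosses the threshold, as in an event-based pixel."""
    events = []
    last_log = math.log(intensities[0])
    for t, value in enumerate(intensities[1:], start=1):
        log_i = math.log(value)
        if abs(log_i - last_log) >= threshold:
            # +1 for brightening, -1 for darkening
            events.append((t, 1 if log_i > last_log else -1))
            last_log = log_i
    return events

# A mostly static scene produces almost no data...
print(pixel_events([100, 101, 100, 100, 101]))  # []
# ...while a sudden brightening produces a single event.
print(pixel_events([100, 100, 180, 180]))       # [(2, 1)]
```

A static scene generates nothing at all, which is exactly the memory- and power-saving property the article describes.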

The AI camera's photo sensors, its "eyes," will consist of tiny pieces of semiconductors and circuitry on silicon, which turn changes in light into electrical signals sent to the neural network. Integrated circuits and a new type of electronic component called a memory resistor, or memristor, acting as the equivalent of synaptic connections, will process the information in those signals, says Sören Boyn, a researcher at the Zurich-based Swiss Federal Institute of Technology who worked with the CNRS-Thales joint research unit that is now working with Benosman's team. One of the biggest challenges to that approach is that memristor technology, first theorized in 1971 by University of California, Berkeley, professor emeritus Leon Chua and later mathematically modeled by Hewlett-Packard Labs researchers in 2008, is still largely in the development stage, which would explain why the ULPEC project is not expected to have a working device until 2020.

The AI camera's memristors will consist of a thin layer of a ferroelectric material, bismuth ferrite, sandwiched between two electrodes, says Vincent Garcia, a research scientist at French scientific research agency CNRS/Thales, which is developing the ULPEC memristor. Ferroelectric materials have positive and negative sides, but applying voltage reverses those charges. "Thus, the resistance of memristors can be tuned using voltage," Garcia explains. "Similar to our brain's learning ability that is dependent on the stimulation of synapses, which serve as connections between our neurons, this tunable resistance helps in making the network learn. The more the synapse is stimulated, the more the connection is reinforced and learning improved."
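Garcia's description of a voltage-tuned synapse can be caricatured in a short Python sketch. The update rule, learning rate, and conductance bounds here are arbitrary illustrative choices, not the ferroelectric device physics:

```python
class MemristiveSynapse:
    """Toy synapse whose conductance ("weight") is nudged by voltage
    pulses, mimicking how repeated stimulation reinforces a connection.
    The rate and the [0, 1] bounds are illustrative, not measured."""
    def __init__(self, weight=0.1, rate=0.05):
        self.weight = weight
        self.rate = rate

    def pulse(self, voltage):
        # Positive voltage strengthens the connection, negative weakens
        # it; conductance stays clamped within the bounds.
        self.weight = min(1.0, max(0.0, self.weight + self.rate * voltage))
        return self.weight

s = MemristiveSynapse()
for _ in range(5):         # repeated stimulation...
    s.pulse(+1.0)
print(round(s.weight, 2))  # 0.35 -- ...reinforces the synapse
```

The point of the analogy is simply that the stored resistance is both the memory and the "learned" connection strength, so no separate weight storage is needed.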

The combination of bio-inspired optical sensors and neural networks will make the camera an especially good fit for self-driving cars and autonomous drones, says Christoph Posch, chief technology officer of the Paris-based start-up Chronocam, which is designing the camera's optical sensors. In self-driving cars the onboard computer must react to changes very quickly while navigating through traffic or determining the movement of pedestrians, Posch explains. The ULPEC can detect and process these changes rapidly. German automotive equipment manufacturer Bosch, also involved in the project, will investigate how the camera might be used as part of its autonomous and computer-aided driving technology.

The researchers plan to place 20,000 memristors on the AI camera's microchip, says Sylvain Saïghi, an associate professor of electronics at the University of Bordeaux and head of the $5.57-million ULPEC project.

Getting all of the components of a memristor neural network onto a single microchip would be a big step, says Yoeri van de Burgt, an assistant professor of microsystems at Eindhoven University of Technology in the Netherlands, whose research includes building artificial synapses. "Since it is performing the computation locally, it will be more secure and can be dedicated for specific tasks like cameras in drones and self-driving cars," adds van de Burgt, who was not involved in the ULPEC project.

Assuming the researchers can pull it off, such a chip would be useful well beyond smart cameras because it would be able to perform a variety of complicated computations itself, rather than off-loading that work to a supercomputer via the cloud. In this way, Posch says, the camera is an important step toward determining whether the underlying memristors and other technology will work, and how they might be integrated into future consumer devices. The camera, with its innovative sensors and memristor neural network, could demonstrate that AI can be built into a device in order to make it both smart and more energy efficient.


China and the US are battling to become the world’s first AI superpower – The Verge

In October 1957, the Soviet Union launched the Earth's first artificial satellite, Sputnik 1. The craft was no bigger than a beach ball, but it spurred the US into a frenzy of research and investment that would eventually put humans on the Moon. Sixty years later, the world might have had its second Sputnik moment. But this time, it's not the US receiving the wake-up call, but China; and the goal is not the exploration of space, but the creation of artificial intelligence.

The second Sputnik arrived in the form of AlphaGo, the AI system developed by Google-owned DeepMind. In 2016, AlphaGo beat South Korean master Lee Se-dol at the ancient Chinese board game Go, and in May this year, it toppled the Chinese world champion, Ke Jie. Two professors who consult with the Chinese government on AI policy told The New York Times that these games galvanized the country's politicians to invest in the technology. And the report the pair helped shape, published last month, makes China's ambitions in this area clear: the country says it will become the world's leader in AI by 2030.

"It's a very realistic ambition," Anthony Mullen, a director of research at analyst firm Gartner, tells The Verge. "Right now, AI is a two-horse race between China and the US." And, says Mullen, China has all the ingredients it needs to move into first. These include government funding, a massive population, a lively research community, and a society that seems primed for technological change. And it all invites the trillion-dollar question: in the coming AI race, can China really beat the US?

To build great AI, you need data, and nothing produces data quite like humans. This means China's massive 1.4 billion population (including some 730 million internet users) might be its biggest advantage. These citizens produce reams of useful information that can be mined by the country's tech giants, and China is also significantly more permissive when it comes to users' privacy. For the purposes of building AI, this compares favorably with European countries and their citizen-centric legislation, says Mullen. Companies like Apple and Google are designing workarounds for this privacy problem, but it's simpler not to bother in the first place.

China's 1.4 billion population is a data gold mine for building AI

In China, this also means that AI is being deployed in ways that might not be acceptable in the West. For example, facial recognition technology is used for everything from identifying jaywalkers to dispensing toilet paper. These implementations seem trivial, but as any researcher will tell you, there's no substitute for deploying tech in the wild for testing and developing. "I don't think China will have the same level of existential crisis about the development of AI that the West will have," says Mullen.

The adventures of Microsoft chatbots in China and the US make for a good comparison. In China, the company's Xiaoice bot, which is downloadable as an app, has more than 40 million users, with regulars talking to it every night. It even published a book of poetry under a pseudonym, sparking a debate in the country about artificial creativity. By comparison, the American version of the bot, named Tay, was famously shut down in a matter of days after Twitter users taught it to be racist.

Matt Scott, CTO of Shenzhen machine vision startup Malong Technologies, says China's attitude toward new technology can be risk-taking in a bracing way. "For AI you have to be at the cutting edge," he says. "If you're using technology that's one year old, you're outdated. And I definitely find that in China at least, my community in China is very adept at taking on these risks."

The output of China's AI research community is, in some ways, easy to gauge. A report from the White House in October 2016 noted that China now publishes more journal articles on deep learning than the US, while AI-related patent submissions from Chinese researchers have increased 200 percent in recent years. The clout of the Chinese AI community is such that at the beginning of the year, the Association for the Advancement of Artificial Intelligence rescheduled the date of its annual meeting; the original had fallen on Chinese New Year.

What's trickier, though, is knowing how these numbers translate to scientific achievement. Paul Scharre, a researcher at the think tank Center for a New American Security, is skeptical about statistics. "You can count the number of papers, but that's sort of the worst possible metric, because it doesn't tell you anything about quality," he says. At the moment, the real cutting-edge research is still being done by institutions like Google Brain, OpenAI, and DeepMind.

In China, though, there is more collaboration between firms like these and universities and government, something that could be beneficial in the long term. Scott's Malong Technologies runs a joint research lab with Tsinghua University, and there are much bigger partnerships, like the national laboratory for deep learning run by Baidu and the Chinese government's National Development and Reform agency.

Other aspects of research seem influential, but are difficult to gauge. Scott, who started working in machine learning 10 years ago with Microsoft, suggests that China has a particularly open AI community. "I think there is a bit more emphasis on [personal] relationships," he says, adding that China's ubiquitous messaging app WeChat is a rich resource, with chat groups centered around universities and companies sharing and discussing new research. "The AI communities are very, very alive," he says. "I would say that WeChat as a vehicle for spreading information is highly effective."

What most worries Scharre is the US government's current plans to retreat from basic science. The Trump administration's proposed budget would slash funding for research, taking money away from a number of agencies whose work could involve AI. "Clearly [Washington doesn't] have any strategic plan to revitalize American investment in science and technology," Scharre tells The Verge. "I am deeply troubled by the range of cuts that the Trump administration is planning. I think they're alarming and counterproductive."

Trump's administration could never be called science-friendly

The previous administration was aware of the dangers and potential of artificial intelligence. Two reports published by the Obama White House late last year spelled out the need to invest in AI, as well as touching on topics like regulation and the labor market. AI holds the potential to be a major driver of economic growth and social progress, said the October report, noting that public- and private-sector investments in basic and applied R&D on AI have already begun reaping major benefits.

In some ways, China's July policy paper on AI mirrors this one, but China didn't just go through a dramatic political upheaval that threatens to change its course. The Chinese policy paper says that by 2020 it wants to be on par with the world's finest; by 2025 AI should be the primary driver for Chinese industry; and by 2030, it should occupy the commanding heights of AI technology. According to a recent report from The Economist, having the high ground will pay off, with consultancy firm PwC predicting that AI-related growth will lift the global economy by $16 trillion by 2030, with half of that benefit landing in China.

For Scharre, who recently wrote a report on the threat AI poses to national security, the US government is laboring under a delusion. "A lot of people take it for granted that the US builds the best tech in the world, and I think that's a dangerous assumption to make," he says, suggesting that a wake-up call is due. China may have had the Sputnik moment it needed to back AI, but has the US?

Others question whether this is necessary. Mullen says that while the momentum to be the world leader in AI currently lies with China, the US is still marginally ahead, thanks to the work of Silicon Valley. Scharre agrees, and says that government funding isn't that big of an issue while US tech giants are able to redirect just a little of their ad money to AI. "Money you get from somewhere like DARPA is just a drop in the ocean compared to what you can get from the likes of Google and Facebook," he says.

These companies also provide a counterpoint to the argument that China's demographics give it an unmatchable advantage. It's certainly good to have a huge number of users in one country, but it's probably better to have that same number of users spread across the world. Both Facebook and Google have more than 2 billion people hooked on to their primary platforms (Facebook itself and Android), as well as a half-dozen other services with a billion-plus users. It's arguable that this sort of reach is more useful, as it provides an abundance of data, as well as diversity. China's tech companies may be formidable, but they lack this international reach.

Scharre suggests this is important, because when it comes to measuring progress in AI, on-the-ground implementations are worth more than research. What counts, he says, is the ability of nations and organizations to effectively implement AI technologies. "Look at things like using AI in healthcare diagnoses, in self-driving cars, in finance." It's fine to be, say, 12 months behind in research terms, as long as you can still get ahold of the technology and use it effectively.

In that sense, the AI race doesn't have to be zero-sum. Right now, cutting-edge research is developed in secret, but shared openly across borders. Scott, who has worked in the field in both the US and China, says the countries have more in common than they think. "People are afraid that this is something happening in some basement lab somewhere, but it's not true," he says. "The most advanced technology in AI is published, and countries are actively collaborating. AI doesn't work in a vacuum: you need to be collaborative."

In some ways, this is similar to the situation in 1957. When news of Sputnik's launch first broke, there was an air of scientific respect, despite the geopolitical rivalry between the US and USSR. A contemporary report said that America's top scientists showed no rancor at being beaten into space by the Soviet engineers, and, as one of them put it, "We are all elated that it is up there."

Throughout the '60s and early '70s, America and Russia jockeyed back and forth to be first in the space race. But in the end, the benefits of this competition (new scientific knowledge, technology, and culture) didn't just go to the winner. They were shared more evenly than that. By this metric, a Sputnik moment doesn't have to be cause for alarm, and the race to build better AI could still benefit us all.


AI: Boon to business, bane to low-skilled workers – Inquirer.net

VIDEO CONFERENCE ON TECH VISION. JP Palpallatoc, Accenture PH digital lead, discusses his company's technology vision for 2017 in a video conference with Cebu's business reporters held at Accenture's office in I.T. Park, Barangay Apas, Cebu City. (CDN PHOTO/JUNJIE MENDOZA)

As artificial intelligence (AI) moves to the forefront of business operations in the Philippines, the workforce needs to learn more complex skills to cushion the risk of losing jobs to automation.

JP Palpallatoc, Accenture Philippines digital lead, said that with the rise of AI comes the risk of employment reduction, especially among those with lower-level skills.

"We need to move those with lower-level skills up the value chain through education and helping them learn more complex skills," he said in a video conference with Cebu-based reporters on Wednesday.

While AI has the potential to support humans in terms of business, through digital means of transacting and interacting with customers, Palpallatoc also recognized that this technology runs the risk of displacing human workers.

With the many improvements to AI technology today, more companies have opted not to hire additional employees to handle transactions that can easily be automated such as responding to common customer queries.

The Philippine Information Technology-Business Process Management (IT-BPM) Roadmap 2022 has projected a decline in demand for low-level skilled workers in the coming years, but also sees a rise in need for workers with mid- to high-complexity skills.

Palpallatoc said the roadmap also focuses on education and getting the workforce ready once this time comes, putting an emphasis on developing graduates in Science, Technology, Engineering, and Math (STEM).

The industry roadmap targets directly employing 1.8 million IT-BPM workers by 2022, 73 percent of whom would hold mid- to high-value jobs.

But Palpallatoc said the opportunities in taking advantage of AI are greater than its risks, adding that the technology is among the five trends seen to drive the transformation of businesses in the next three years.

Citing Accenture's Technology Vision 2017, Palpallatoc said people used to be the ones adapting to technology but are now starting to make technology adapt to them and their needs.

The report identified five emerging technology trends that are essential to business success in today's digital economy, based on insight from more than 5,400 business and IT executives surveyed worldwide.

Among these is AI becoming the new user interface (UI), underpinning the way transactions and interactions are done with systems.

According to the report, 79 percent of executives agreed that AI will revolutionize the way they gain information from and interact with customers.

Meanwhile, 85 percent reported that they will invest intensively in AI-related technologies over the next three years.

Another trend was "design for humans," where technology decisions are being made by humans, for humans.

Technology now adapts to how people behave, which many executives believe should be used to guide a business's desired outcomes.

The report also saw a surge in demand for labor platforms and online work-management solutions, resulting in companies dissolving traditional hierarchies and replacing them with talent marketplaces.

Case in point, 85 percent of executives surveyed said they plan to increase their organizations use of independent freelance workers over the next year.

Another trend seen by Accenture was ecosystems as macrocosms, where platform companies that provide a single point of access to multiple services have completely broken the rules of how companies compete.

Companies are now integrating their core business functionalities with third parties and leveraging these relationships to build their roles in new digital ecosystems.

One example is car manufacturer General Motors (GM) investing $500 million in ride-hailing app Lyft and launching a program that allows car-less Lyft drivers to rent vehicles made by GM, opening up an entirely new line of business.

Accenture's annual report also stated that to succeed in today's ecosystem-driven digital economy, businesses must delve into uncharted territory. Beyond introducing new products and services, firms should also seize opportunities to establish rules and standards for entirely new industries. /with USJ-R Intern Vanisa Soriano

Read more from the original source:

AI: Boon to business, bane to low-skilled workers - Inquirer.net

The AI revolution in science – Science Magazine

Just what do people mean by artificial intelligence (AI)? The term has never had clear boundaries. When it was introduced at a seminal 1956 workshop at Dartmouth College, it was taken broadly to mean making a machine behave in ways that would be called intelligent if seen in a human. An important recent advance in AI has been machine learning, which shows up in technologies from spellcheck to self-driving cars and is often carried out by computer systems called neural networks. Any discussion of AI is likely to include other terms as well.

ALGORITHM A set of step-by-step instructions. Computer algorithms can be simple (if it's 3 p.m., send a reminder) or complex (identify pedestrians).

BACKPROPAGATION The way many neural nets learn. They find the difference between their output and the desired output, then adjust the calculations in reverse order of execution.
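A toy illustration (not from the glossary) may help: for a single linear unit, backpropagation reduces to applying the chain rule to the difference between output and desired output, then stepping the weight against the gradient.

```python
def train_step(w, x, target, lr=0.1):
    prediction = w * x            # forward pass
    error = prediction - target   # difference between output and desired output
    grad_w = 2 * error * x        # chain rule: d(error**2)/dw, computed "backward"
    return w - lr * grad_w        # adjust the weight against the gradient

w = 0.0
for _ in range(50):
    w = train_step(w, x=2.0, target=6.0)   # learn w such that w * 2 ≈ 6

print(round(w, 3))  # converges toward 3.0
```

Real networks repeat this update for millions of weights, layer by layer in reverse order of execution.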

BLACK BOX A description of some deep learning systems. They take an input and provide an output, but the calculations that occur in between are not easy for humans to interpret.

DEEP LEARNING How a neural network with multiple layers becomes sensitive to progressively more abstract patterns. In parsing a photo, layers might respond first to edges, then paws, then dogs.

EXPERT SYSTEM A form of AI that attempts to replicate a humans expertise in an area, such as medical diagnosis. It combines a knowledge base with a set of hand-coded rules for applying that knowledge. Machine-learning techniques are increasingly replacing hand coding.

GENERATIVE ADVERSARIAL NETWORKS A pair of jointly trained neural networks that generates realistic new data and improves through competition. One net creates new examples (fake Picassos, say) as the other tries to detect the fakes.

MACHINE LEARNING The use of algorithms that find patterns in data without explicit instruction. A system might learn how to associate features of inputs such as images with outputs such as labels.

NATURAL LANGUAGE PROCESSING A computer's attempt to understand spoken or written language. It must parse vocabulary, grammar, and intent, and allow for variation in language use. The process often involves machine learning.

NEURAL NETWORK A highly abstracted and simplified model of the human brain used in machine learning. A set of units receives pieces of an input (pixels in a photo, say), performs simple computations on them, and passes them on to the next layer of units. The final layer represents the answer.
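That definition can be sketched in a few lines of plain Python; the weights and inputs here are invented for illustration.

```python
import math

def sigmoid(z):
    # Simple nonlinearity each unit applies to its weighted sum.
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, hidden_weights, output_weights):
    # Hidden layer: every unit computes a weighted sum of the inputs.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in hidden_weights]
    # Output layer: a final unit reads the hidden activations; it represents the answer.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

x = [1.0, 0.5]                               # e.g. two pixel intensities
hidden_weights = [[0.4, -0.6], [0.3, 0.8]]   # two hidden units
output_weights = [1.5, -1.0]

output = forward(x, hidden_weights, output_weights)
print(round(output, 3))
```

Training (see backpropagation above) consists of adjusting those weight lists until the final output matches the desired answers.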

NEUROMORPHIC CHIP A computer chip designed to act as a neural network. It can be analog, digital, or a combination.

PERCEPTRON An early type of neural network, developed in the 1950s. It received great hype but was then shown to have limitations, suppressing interest in neural nets for years.

REINFORCEMENT LEARNING A type of machine learning in which the algorithm learns by acting toward an abstract goal, such as earn a high video game score or manage a factory efficiently. During training, each effort is evaluated based on its contribution toward the goal.
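In its simplest tabular form (Q-learning), this looks like the sketch below; the five-cell environment and all numbers are invented for illustration. An agent learns to walk right toward a reward in the last cell, with each action's value updated by its contribution toward the goal.

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(300):                    # training episodes
    s = 0
    while s != GOAL:
        if random.random() < 0.3:       # occasional exploration
            a = random.choice(ACTIONS)
        else:                           # otherwise act greedily on current values
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        future = 0.0 if s2 == GOAL else max(q[(s2, act)] for act in ACTIONS)
        # Each action is evaluated by its contribution toward the goal.
        q[(s, a)] += 0.5 * (reward + 0.9 * future - q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)
```

After training, the greedy policy steps right from every cell, even though the reward was only ever given at the goal.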

STRONG AI AI that is as smart and well-rounded as a human. Some say it's impossible. Current AI is weak, or narrow. It can play chess or drive but not both, and lacks common sense.

SUPERVISED LEARNING A type of machine learning in which the algorithm compares its outputs with the correct outputs during training. In unsupervised learning, the algorithm merely looks for patterns in a set of data.
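A minimal, made-up example of the supervised setting: a 1-nearest-neighbor classifier trained on labeled numbers, with its outputs compared against the correct labels.

```python
# Labeled training data: each example pairs an input with the correct output.
train = [(1.0, "small"), (1.2, "small"), (8.0, "large"), (9.1, "large")]

def predict(x):
    # 1-nearest-neighbor: answer with the label of the closest training example.
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# During evaluation the algorithm's outputs are compared with the correct outputs.
test_set = [(0.9, "small"), (8.5, "large")]
accuracy = sum(predict(x) == label for x, label in test_set) / len(test_set)
print(accuracy)
```

An unsupervised method would receive the same numbers without the "small"/"large" labels and merely group them by similarity.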

TENSORFLOW A collection of software tools developed by Google for use in deep learning. It is open source, meaning anyone can use or improve it. Similar projects include Torch and Theano.

TRANSFER LEARNING A technique in machine learning in which an algorithm learns to perform one task, such as recognizing cars, and builds on that knowledge when learning a different but related task, such as recognizing cats.

TURING TEST A test of AI's ability to pass as human. In Alan Turing's original conception, an AI would be judged by its ability to converse through written text.

Read the original post:

The AI revolution in science - Science Magazine

Microsoft makes major cuts to MSN editorial team amid AI shift and broader fiscal year-end layoffs – GeekWire


Microsoft is further cutting back its MSN team amid a controversial shift to automation and AI for content decisions previously left to human editors.

After previously eliminating dozens of MSN contract positions, the company is now laying off an unspecified number of direct employees from MSN, including some senior leaders on the Microsoft News editorial team, according to people familiar with the situation.

Microsoft made cuts across the company as part of its annual fiscal year-end business review, one of these people said. Its fiscal year ends June 30, and it's common for Microsoft to restructure some of its operations in conjunction with the annual milestone. Overall, the cutbacks this year appear much smaller than the thousands of employees laid off by the company in some years past.

The company isn't commenting publicly on the cuts at MSN or other groups.

Last week, in an article for Motherboard, former MSN Money editor Bryan Joiner detailed his experience being replaced by an algorithm. Joiner was a contractor for MSN who, like dozens of full-time journalists, lost his job in June. Microsoft replaced the team tasked with curating and editing MSN news content with AI software.

After news of the earlier layoffs surfaced, Microsoft's software misidentified a member of the British pop group Little Mix. The mistake trended vigorously because it came so soon after MSN let its human editors go, Joiner wrote.

Based on how far they've come down this road, the algorithm will sink or swim on its own, which is to say it'll probably sink and take down the whole of MSN with it, he wrote. Maybe that's overstating things, but MSN is low enough in the Microsoft hierarchy that its existence has felt like it was on the chopping block for years.

Monica Nickelsburg contributed to this report.

Here is the original post:

Microsoft makes major cuts to MSN editorial team amid AI shift and broader fiscal year-end layoffs - GeekWire

What CIOs need to know about adding AI to their processes – TechRepublic

AI can help many types of businesses get more from their data, and one expert believes adoption of AI will take leaps forward in 2021.

TechRepublic's Karen Roby spoke with Ira Cohen, chief data scientist with Anodot, about the tools CIOs need to implement artificial intelligence (AI) at their companies. The following is an edited transcript of their conversation.

Karen Roby: As we're heading into 2021, CIOs need to have a checklist of some things to keep in mind when making decisions for this coming year, whether that be about hiring or projects to consider. Let's start with the talent that's needed at companies now, to pull off some of these AI projects. What do you think CIOs need to keep in mind?

SEE: Natural language processing: A cheat sheet (TechRepublic)

Ira Cohen: As you said, 2020 was really special in all this disruption to so many businesses. And AI, actually, is now becoming even more important. The projects that maybe people talked about before have been accelerated now because the speed of movement to new paradigms, that has to be much faster. If you're talking about, for example, commerce, supply chains, need to move much faster. A lot of different projects that maybe before were slowly moving towards more e-commerce, and more shipment. I mean, you're getting your Amazon, but now, so many companies are sending what they're selling out, that you have to have a lot more automation and be a lot more mindful of the data, and be a lot more reactive to how things change constantly. Things are changing much faster, and AI is the perfect thing to manage all of that, if we talk about AI in a very, very global sense, because it has a capability of processing data very fast, giving you insights very fast of very high volumes of data, which is what's happening now, but that's what's needed.

What do you need to actually have in your company in order to actually be able to achieve these goals of these projects? The first order of business, and this is something that people and companies have been doing in the last few years is, put all your data together. Create these data lakes. Data lakes have been very popular, and growing at companies like Snowflake, and other types of companies that have grown tremendously in the last few years, because that's what they offer. But, now, to leverage those data lakes, you need data engineers that know how to pull data quickly out of them, and serve them to the data science team that can actually transform them with algorithms into meaningful insights.

Data engineers is something that is going to be required a lot more in the next year or so, because without those data pipelines, laying of data pipelines that will feed all these AI algorithms and projects, there is nothing. The AI doesn't work without data, at least the AI that we have today. And then comes the machine learning engineers. Today, data science has been something that has grown in the last few years. The data scientists are the ones that are developing the AI required for all these projects. But data scientists, a lot of what was hired was basically people that do analysis. They do kind of one-off projects.

And, now, because these things are starting to be more and more automated, you don't need just a data scientist who knows how to do a project well, and prototype something, but you need the engineers that will make it into products, even if they're internal products. It's not a project anymore, it's internal products that have to constantly work for the company to deliver the rate that they need to deliver. These two areas, the data engineers, and the machine learning engineers, and not just the scientist, these are probably the areas where we need to ... I believe, CIOs need to invest most in their companies.

SEE: Is AI just a fairy tale? Not in these successful use cases (TechRepublic)

Karen Roby: When you consider the talent pool, Ira, how much are we talking about here, as far as supply and demand, when it comes to these more specialized areas with AI and machine learning? I mean, do we have the talent to fill the positions that we're going to need?

Ira Cohen: No. I think there's still a big gap, but what's happening in the market, in general, is that the whole field of AI or machine learning is being democratized by all sorts of tools that are either being wrapped into loose products, or open source completely, either from Google, or from Facebook, from companies that are actually invested a lot in developing the, let's say, the foundations that you would need. And then, the talent pool that needs to use it, they don't have to know as deeply, they don't have to have the knowledge as deeply as the people who developed all these tools. So, there is hope of getting a lot more talent into the area without the need for them to get Ph.Ds, in order to be able to do this. And that is happening in parallel.

With good education, with good courses, you can actually get junior machine learning engineers that can start bringing value. Where the gap is, is in the more senior ones, the ones that do have experience, because you can't hire just junior people. They won't have a clue what to do. You do need some sense of the field. The gap is in the machine learning engineers that are kind of, I would say, the mid-tier, and the experts, of course, that will always be a gap. But, the mid-tier that can teach the juniors how to work, that's where most of the gap is today, I believe.

Karen Roby: There's no question that AI has been fast-tracked for many companies that may not have even been considering moving in that direction yet, until their digital transformation plans were really put on fast-forward as well, from March 2020. Is there any particular industry you're really seeing where it's being embraced even more?

SEE: 3D scanning, lidar, and drones: Big data is helping law enforcement solve crimes (TechRepublic)

Ira Cohen: We're seeing it in all sorts of commerce, where even if it was half brick-and-mortar, half online, now, this has pushed them quite significantly. Supply chains and deliveries are definitely a big push in those types of companies. And, telcos, we've also seen in telcos that very big push towards AI, and it's driven by two things that happen now in parallel. One is the virus, right? The whole pandemic, which actually put a lot more pressure on networks, and made them even more important, and actually brought some of the telcos to ... Basically, that provide all the foundations for our communications, brought them to the front, and center.

5G, is the second one that's happening in parallel. So, 5G, changing, coming to play, creating a lot more data, a lot more complexities in the networks, is also pushing them to implement AI, to actually being able to manage all that complex infrastructure, which is becoming even more complex, and even more critical.

Karen Roby: When you look to say, nine months to a year from now, how do you see AI playing a role, even versus now? And, again, how is that going to change things overall for businesses, from small businesses to huge enterprise companies?

SEE: Healthcare is adopting AI much faster since the pandemic began (TechRepublic)

Ira Cohen: I think small businesses will leverage AI for particular tasks, small tasks, and probably, the adoption there will be less, because AI, at the end of the day, is fueled by data. And if you don't have a lot of data, you can make your own decisions fairly quickly anyway. But, for larger companies, the ones that do not embrace it, and do not start using it heavily to make better decisions, to forecast the future, they'll be left behind, because they are not going to benefit from the improved, either margins, by being more efficient, or improve the ability to sell more, because of what those tools will give them, they will start losing out.

There's definitely a race for them to actually do this, tools to embrace it quickly. For the small businesses, I think it will be slower to embrace, unless it's for very particular tasks that before, they could not do, because they could not hire the people to do it. But, now, they'll get the tool that already does it for a small fraction of that price that would be if they had to develop it themselves, and then they can run away with it.

I mean, even looking at just simple e-commerce sites, right? You're trying to sell something, and you want to have a recommendation engine, like Amazon has a recommendation engine on its website, which does improve how much you're selling. Today, a small website, or as a small seller, cannot develop it themselves. It's too expensive. But with it becoming available as a service from companies, they can actually start using it for a fraction of the price, and get the benefit of it even for themselves. For recruiting tools, it will give them a benefit. They'll probably want to buy it rather than trying to develop it themselves.



Original post:

What CIOs need to know about adding AI to their processes - TechRepublic

China is betting big on AI – and here’s why it’s going to pay off – South China Morning Post

China will see the greatest economic gains from artificial intelligence (AI) by 2030 as the technology accelerates global GDP growth by increasing productivity and boosting consumption, says PwC in a new research report released Tuesday.

Dubbed the fourth industrial revolution, AI technologies are expected to boost global GDP by a further 14 per cent by 2030, the equivalent of an additional US$15.7 trillion, and China, as the world's second-largest economy, will see an estimated 26 per cent boost to GDP by that time, the PwC report said.
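As a back-of-the-envelope check on those figures (our arithmetic, not from the PwC report), a 14 per cent boost worth US$15.7 trillion implies a baseline 2030 global GDP of roughly US$112 trillion.

```python
# If an extra US$15.7 trillion equals a 14 per cent boost,
# the implied baseline 2030 global GDP is additional / boost.
additional_gdp = 15.7e12      # US dollars
boost = 0.14
baseline_2030 = additional_gdp / boost
print(round(baseline_2030 / 1e12, 1))   # in trillions of US dollars
```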

Launched at the World Economic Forum's annual June meeting in northeast China's Dalian city, often known as the Summer Davos, the report said labour productivity improvements would account for over half of the US$15.7 trillion in economic gains from AI between 2016 and 2030, more than the current output of China and India combined, while increased consumer demand resulting from AI-enabled product enhancements will account for the rest.

The analysis demonstrates how big a game changer AI is likely to be, transforming our lives as individuals, enterprises and as a society, said Anand Rao, global leader of artificial intelligence at PwC.

The future is here: China sounds a clarion call on AI funding, policies to surpass US

The technology behind an array of advanced applications, from facial recognition to self-driving vehicles, is the centre of attention for almost every tech company in China as they bet big on AI to gain a competitive edge before it begins to have a more profound impact on people's lives.

Since the start of this year, Chinese internet heavyweights Baidu, Tencent Holdings and Alibaba Group have been competing harder than ever to lure top AI talent from Silicon Valley in order to accelerate their own AI development. Alibaba owns the South China Morning Post.

PwC predicts that North America will experience productivity gains earlier than China due to its first-mover advantage in AI, but China is expected to pull ahead of the United States in terms of AI productivity gains within 10 years after it catches up to the technology.

According to the PwC research, AI is projected to boost Chinas GDP by 26 per cent by 2030, while for North America the number is 14.5 per cent. For developing countries in Latin America and Africa, the expected GDP gain will only be about 6 per cent due to the much lower rates of AI technology adoption.

China has already made great leaps in the development of AI and our research shows that [AI] has the potential to be a powerful remedy for slowing growth, said Chuan Neo Chong, chairwoman of Greater China operations for global consultancy Accenture.

Artificial intelligence could put as many as 50m Asian jobs at risk over next 15-20 years: UBS study

In separate research done by Accenture, AI is expected to accelerate China's annual growth rate from 6.3 per cent to 7.9 per cent by 2035. The Accenture research, published on Monday, shows that AI could boost China's gross value added (GVA) by US$7.11 trillion by 2035 and has the potential to boost China's labour productivity by 27 per cent by the same year.

Minimising the economic imbalances brought about by AI will be an important challenge, said Lee Kai-fu, the former Greater China president of Google and founder of venture capital firm Sinovation Ventures.

Those developing countries which will experience rapid population growth in coming decades are expected to be hardest hit by AI in terms of job losses, he added.

Most of the wealth created by AI will go into the US and China because of their big pool of talent and [high levels of data generation], as well as the size of their markets, said Lee, who is one of the most prominent advocates of AI in China.

Here is the original post:

China is betting big on AI - and here's why it's going to pay off - South China Morning Post

Guavus Unwraps New AI-based Analytics and Automation Products for CSPs – GlobeNewswire


SAN JOSE, Calif., July 16, 2020 (GLOBE NEWSWIRE) -- Guavus, a pioneer in AI-based analytics for communications service providers (CSPs), today announced the launch of Guavus-IQ -- a comprehensive product portfolio that provides a unique multi-perspective analytics experience for CSPs.

Guavus-IQ delivers highly instrumented analytics insights to CSPs on how each subscriber is experiencing their network and services (bringing the outside perspective in) and how their network is impacting their subscribers (understanding how their internal operations are impacting their customers). This single, real-time outside-in/inside-out perspective helps operators identify subscriber behavioral patterns and better understand their operational environments. This enables them to increase revenue opportunities through data monetization and improved customer experience (CX), as well as reduce costs through automated, closed-loop actions.

In addition, Guavus-IQ has been designed to be operator-friendly for CSPs -- it doesn't require the operator to be a data science specialist or expert. It combines network and data science and leverages explainable AI to deliver easy-to-understand analytics insights to CSP users across the business at a significantly reduced cost.

The new Guavus-IQ products build on Guavus' ten-plus years of experience providing innovative analytics solutions focused exclusively on the needs of CSPs. The products are currently deployed in 8 of the top CSPs in Europe, Latin America, Asia-Pac and North America.

Big Data Doesn't Need to Come at a Big Cost

Guavus-IQ consists of two main product categories: Service-IQ and Ops-IQ.

Just because data is big doesn't mean it can't be resource-efficient. The Guavus-IQ products require approximately 50% of the compute/processing-related hardware used by traditional analytics solutions, through their use of advanced big data collection capabilities and real-time, in-memory stream processing edge analytics. This results in more powerful data collection from over 200 sources at half the cost.

Ops-IQ provides additional operational efficiencies through a combination of anomaly detection, fault correlation, and root cause analysis -- which not only lower OPEX but elevate CX. Ops-IQ fault analytics suppress more than 99.5% of alarms not associated with network incidents, and predict incident-causing alarms with 93.9% accuracy. This significantly improves the Mean-Time-To-Response (MTTR) in a CSP Network Operations Center (NOC), currently saving one large service provider customer more than $10 million a year in OPEX costs.
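Guavus does not publish its algorithms, but the general anomaly-detection idea behind alarm suppression can be sketched with a rolling z-score filter: alarm only when a metric deviates far from its recent baseline, suppressing routine fluctuations that would otherwise flood a NOC. All numbers below are invented for illustration.

```python
import statistics

def detect_anomalies(series, window=10, z_threshold=3.0):
    alarms = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]          # recent history only
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9
        if abs(series[i] - mean) / stdev > z_threshold:
            alarms.append(i)                     # only genuinely unusual points alarm
    return alarms

# Steady traffic with routine wobble, plus one incident-like spike at index 25.
traffic = [100 + (i % 3) for i in range(30)]
traffic[25] = 500
print(detect_anomalies(traffic))
```

Production systems layer correlation and root-cause analysis on top of detectors like this, but the core principle of comparing each point against a learned baseline is the same.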

Service-IQ also plays a significant role in positively impacting CX and reducing costs. Service-IQ allows for flexible data reuse: when it ingests new data, it ingests it once and then enables the reuse of that same data for additional use cases across both Service-IQ and Ops-IQ. This new level of efficiency saves operators time with ingest, a costly and complex part of the analytics process.

Because the data pipeline of previously ingested data can be automatically re-instantiated for use within Service-IQ or Ops-IQ, CSPs don't need to become big data experts in order to leverage the power and value of the data they've collected. Instead, the Guavus-IQ products apply proven data science methods inside the integrated solutions to do the heavy lifting for the operator. This also allows analytics projects to be streamlined and shortened by more than 40-50%, as many organizations struggle not only with managing and deploying the infrastructure but also with gaining value in the early stage of analytics and AI experimentation.

Supporting Quotes:

In the world of 5G, IoT and now a global pandemic, we're seeing an even greater need for operators to take advantage of AI and analytics to deal with increased network complexity, operational costs and subscriber demands for improved experience. To address these challenges, operators need to better understand network and subscriber behavior and be able to do so in real time.

These challenges can be tackled by utilizing big data collection, in-memory stream processing and AI-based analytics capabilities to ingest, correlate and analyze data (on premise and in the cloud) in real time from operators' multivendor infrastructure. Insights generated can then be used to better serve operators' needs across network, service, and marketing operations.

Adaora Okeleke, Principal Analyst, Service Provider Operations and IT, Omdia

We've seen a lot of excitement from the top CSPs worldwide in Guavus-IQ. Our customers plan to leverage the products for root cause analysis, subscriber behavior analysis, new personalized products, and IoT services, among other use cases. They like the fact that Guavus-IQ is easy to operate and that it's highly instrumented specifically for operators and their multivendor infrastructures, versus traditional general-purpose enterprise platforms or homogeneous network-equipment-oriented solutions.

Alexander Shevchenko, CEO of Guavus, a Thales company


About Guavus (a Thales company)

Guavus is at the forefront of AI-based big data analytics and machine learning innovation, driving digital transformation at 6 of the world's 7 largest telecommunications providers. Using the Guavus-IQ analytics solutions, customers are able to analyze big data in real time and take decisive actions to lower costs, increase efficiencies, and dramatically improve the end-to-end customer experience, all with the scale and security required by next-gen 5G and IoT networks.

Guavus enables service providers to leverage applications for advanced network planning and operations, mobile traffic analytics, marketing, customer care, security and IoT. Discover more at http://www.guavus.com and follow us on Twitter and LinkedIn.

Media Contact: Laura Stiff, Guavus PR & Analyst Relations, +1-408-827-1242, laura.stiff@external.thalesgroup.com

Go here to see the original:

Guavus Unwraps New AI-based Analytics and Automation Products for CSPs - GlobeNewswire

Microsoft made its AI work on a $10 Raspberry Pi – Engadget – Engadget

The idea came about from Microsoft Labs teams in Redmond and Bangalore, India. Ofer Dekel, who manages an AI optimization group at the Redmond Lab, was trying to figure out a way to stop squirrels from eating flower bulbs and seeds from his bird feeder. As one does, he trained a computer vision system to spot squirrels, and installed the code on a $35 Raspberry Pi 3. Now, it triggers the sprinkler system whenever the rodents pop up, chasing them away.

"Every hobbyist who owns a Raspberry Pi should be able to do that," Dekel said in Microsoft's blog. "Today, very few of them can." The problems is that it's too expensive and impractical to install high-powered chips or connected cloud-computing devices on things like squirrel sensors. However, it's feasible to equip sensors and other devices with a $10 Raspberry Zero or the pepper-flake-sized Cortex M0 chip pictured above.

To make it work on systems that often have just a few kilobytes of RAM, the team compressed neural network parameters down to just a few bits instead of the usual 32. Another technique is "sparsification" of algorithms, a way of pruning them down to remove redundancies. By doing that, they were able to make an image detection system run about 20 times faster on a Raspberry Pi 3 without any loss of accuracy.
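Microsoft has not published the exact methods here, but the two compression ideas the article describes, quantizing 32-bit weights down to a few discrete levels and sparsifying by pruning near-zero weights, can be sketched roughly (all weights below are invented):

```python
def quantize(weights, bits=2):
    # Map each 32-bit float weight to the nearest of 2**bits evenly spaced
    # levels between the smallest and largest weight (assumes a nonzero range).
    levels = 2 ** bits
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / (levels - 1)
    return [lo + round((w - lo) / step) * step for w in weights]

def sparsify(weights, threshold=0.05):
    # Prune redundant near-zero weights to exact zero, so they can be
    # skipped entirely at inference time.
    return [0.0 if abs(w) < threshold else w for w in weights]

weights = [0.81, -0.02, 0.40, 0.01, -0.77, 0.03]
quantized = quantize(weights)   # at most 4 distinct values, storable in 2 bits each
sparse = sparsify(weights)      # small weights dropped
print(quantized)
print(sparse)
```

Each 2-bit weight needs a sixteenth of the memory of a 32-bit float, and zeroed weights cost nothing to multiply, which is how such tricks fit models into a few kilobytes of RAM.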

However, taking it to the next level won't be quite as easy. "There is just no way to take a deep neural network, have it stay as accurate as it is today, and consume 10,000 times less resources. You can't do it," said Dekel. For that, they'll need to invent new types of AI tech tailored for low-powered devices, and that's tricky, considering researchers still don't know exactly how deep learning tools work.

Microsoft's researchers are working on a few projects for folks with impairments, like a walking stick that can detect falls and issue a call for help, and "smart gloves" that can interpret sign language. To get some new ideas and help, they've made some of their early training tools and algorithms available to Raspberry Pi hobbyists and other researchers on GitHub. "Giving these powerful machine-learning tools to everyday people is the democratization of AI," says researcher Saleema Amershi.

View post:

Microsoft made its AI work on a $10 Raspberry Pi - Engadget - Engadget

AI will be a big part of the DoDs big data effort – Federal News Network

Best listening experience is on Chrome, Firefox or Safari. Subscribe to Federal Drive's daily audio interviews on Apple Podcasts or PodcastOne.

The Defense Department's data strategy, released just a few weeks ago, says improving data management will help it fight and win wars. It says artificial intelligence will become an important component of data-fueled digital modernization. For an assessment, the CEO of data analysis company Govini, Tara Murphy Dougherty, joined Federal Drive with Tom Temin.

Tom Temin: Tara, good to have you back.

Tara Murphy Dougherty: Thanks, Tom. It's great to be here.

Tom Temin: So give us from your standpoint, as someone who analyzes DoD data itself, what does this data strategy really say? What are they actually trying to do here?

Tara Murphy Dougherty: Sure. So as a company that's purpose-built for national security problems related to data and leveraging the use of AI and machine learning to solve these problems, we do track defense data issues very closely. So I was thrilled to see the data strategy be released in early October. That alone is a significant achievement. But overall, it's really strong on content, too, which is not something you can take for granted with these government documents sometimes.

Tom Temin: Well, these types of government documents seem to fall into two categories, I call them 16 pages and under, 200 pages and greater, and this one falls into the smaller category. So it's kind of a high-level view of what they plan to do. There's a few fresh acronyms. But is there more beyond that, that we're just not seeing in terms of detail of execution?

Tara Murphy Dougherty: There is, Tom; there's a lot packed into this relatively short document. And the most important takeaway from the DoD data strategy is that it establishes data as a strategic resource for the department. And that was exactly the right place to start. The second key takeaway is that for the first time, the department is looking holistically at the use of data across the operational battlefield applications of data and use of data in support of joint all-domain operations, as well as senior leader decision support and business analytics, laying that groundwork and taking that holistic approach. Those are really important elements of the first-ever data strategy. Like I said, it was the right place to start. And now the question will be can it execute the blueprint that it provides alongside those core elements, which is fairly ambitious.

Tom Temin: Yes, under Section 4 there are seven goals and enabling objectives: Making data visible, making it accessible, understandable, linked, trustworthy, interoperable, and secure. They even have an acronym, VAULTIS, for all of that. And that's a big bite that they've taken here. And it seems like the only sensible way to try to carry it out is application by application; otherwise they're, to use the old term, boiling the ocean.

Tara Murphy Dougherty: Boiling the ocean is exactly what I thought as well. But as a framework, the comprehensive nature of it works really nicely, and I believe that was the goal. Also, it wouldn't be DoD if they didn't put an acronym on it, so there's that. The key to achieving these goals is not only going to be execution and how this strategy turns into plans; it's going to be the answer to the question of whether the department and the leadership are prepared to resource these activities. We may be facing budget cuts in upcoming years, we may not; that's still an open question. But as budgets start to constrict, if they head in that direction, this will be a tough area to argue for additional resources and growing investments for modernization. Hopefully, the strategy is effective in cementing the need for those resources and making the case for the importance of this for the department, which it very, very much is. And the key to that importance is really how the strategy talks about the use of data and the role of data in artificial intelligence, which we know is going to play a significant role in the future of warfare. And data is core to effective AI.

Tom Temin: We're speaking with Tara Murphy Dougherty; she is the CEO of Govini. And the DoD does have a dedicated channel of spending on artificial intelligence; there's a program office for this. And then you have each of the armed services, which have their own artificial intelligence requirements they can see clearly, and perhaps program offices working there, plus all of the other components. So integration, governance, all of those pieces would seem to be equally important here.

Tara Murphy Dougherty: Exactly. And one of the additional recent changes from DoD is the establishment of the first-ever chief data officer, which happened a few years ago. And there's a new chief data officer in place, David Spirk, coming from [U.S. Special Operations Command], a very accomplished defense professional. He has really taken the perspective that DoD needs to think about data not just in a strategic way, but in a streamlined way, where he, as the person accountable and responsible for data operations within DoD, can have visibility into the full spectrum of not just data activities and data systems, but also the personnel who are working on data, whether that's as data scientists or people supporting the elements of the data enterprise in DoD. That is an ambitious undertaking, but if he can pull it off, it will be really effective in giving a sense of where those resources need to go, and where the department is getting the best return on its investment in data capabilities.

Tom Temin: Is it necessary for them to inventory the data that the department has? I think the reference is to the larger, massive flows of data that are generated. Or is it better to approach it from the application standpoint, by saying: here's what we want to do; now let's go find the data that's out there that we need for this, and maybe back into some kind of a catalog?

Tara Murphy Dougherty: It's probably best to start with how data is being used. This is an area where the Joint Artificial Intelligence Center for DoD has done a really good job of highlighting the most important areas where DoD needs to be using its data, and starting there in terms of setting standards, improving data hygiene, and the not always glamorous but actually very important aspects of data work that make it effective. You mentioned earlier the dedicated funding line for DoD, and there's a lot of funding in the department that goes into AI efforts. They're not always centralized. They're not always coordinated. This was part of the original premise of creating the JAIC, as it's known. And yet we see, on one hand, the department saying that artificial intelligence is a key priority for DoD, and an absolutely important aspect of being an effective competitor in the great power competition that we're facing, primarily with China. And then on the other hand, you look at the budget numbers, and funding for the Joint Artificial Intelligence Center is flat over the next five years. So we are going to have to get a handle on not just what the strategy is, not just what the plans are to execute it, but ensure that what goes out the door from a funding perspective actually lines up with that strategy.

Tom Temin: Well, just to make an absurd analogy, which maybe shows that this needs to be very diffuse: you could say that, yes, artificial intelligence is important to competitive advantage. So is the ability to shoot straight and hit the target. But there's not one department of shooting straight; it's something that is distributed over every member that's on the tooth end of the military. So maybe they have to diffuse this artificial intelligence skill and data knowledge to the edges?

Tara Murphy Dougherty: That's exactly right. And another achievement of the DoD data strategy is the fact that it really took on, and took on purposefully, the cultural and workforce aspects of this. I was surprised, frankly, that there was so much attention given to the talent and workforce side of bringing data into DoD and making the department an increasingly data-driven organization. And yet it was exactly right to do so. So then the question will be: how much does the department decide it needs to internalize and resource internally, in terms of developing technical skills for its workforce, and how much does it want to rely on the private sector? And that's an area where the department had a really solid model for private sector collaboration in the Cold War. And we certainly, over the past 10 to 15 years, have seen significant growth in efforts from the Department of Defense to work with innovative new technology companies, and to reach what we're calling the national security innovation base, rather than the traditional defense industrial base. But there's still a long way to go there. And I'm not convinced that the department has yet really figured out what its new model is, particularly in the data, software, and other technology sectors.

Tom Temin: Sounds like they need a couple of good wins they can point to, to kind of give everyone an example of what's possible.

Tara Murphy Dougherty: That would go a long way, particularly on the heels of this data strategy. As I mentioned, just getting it out the door is a big win for DoD. Now it will need to take steps to start implementing it, and that will cement these principles, these goals, these essential capabilities, and indeed the way ahead.

Tom Temin: Tara Murphy Dougherty is the CEO of Govini. Thanks so much.

Tara Murphy Dougherty: Thank you, Tom.

Tom Temin: We'll post this interview, along with a link to the DoD data strategy, at FederalNewsNetwork.com/FederalDrive. Subscribe to the Federal Drive at PodcastOne or wherever you get your podcasts.

View post:

AI will be a big part of the DoDs big data effort - Federal News Network

AI to Ensure Fewer UFOs – IEEE Spectrum

Photo: Black Sage Technologies. Searching the Skies: Black Sage Technologies' artificial-intelligence system spots flying objects and determines whether they're a threat.

Is it a bird? A plane? Or is it a remotely operated quadrotor conducting surveillance or preparing to drop a deadly payload? Human observers won't have to guess, or keep their eyes glued to computer monitors, now that there's superhuman artificial intelligence capable of distinguishing drones from those other flying objects. Automated watchfulness, thanks to machine learning, has given police and other agencies tasked with maintaining security an important countermeasure to help them keep pace with swarms of new drones taking to the skies.

The security challenge has only grown over the past few years: Millions of people have bought consumer drones and sometimes flown them into off-limits areas where they pose a hazard to crowds on the ground or larger aircraft in the sky. Off-the-shelf drones have also become affordable and dangerous weapons for the Islamic State and other militant groups in war-torn regions such as Iraq and Syria.

The need to track and possibly take down these flying intruders has spawned an antidrone market projected to be worth close to US $2 billion by the mid-2020s. The lion's share of that haul will likely go to companies that can best leverage the power of machine-learning AI based on neural networks.

But much of the antidrone industry still lags behind the rest of the tech sector in making effective use of machine-learning AI, says David Romero, founder and managing partner of Black Sage Technologies, based in Boise, Idaho. "With machine learning, 90 percent of the work is figuring out how to make it so simple that the customer doesn't have to know how machine learning works," says Romero. "Many companies do that well, but not in the defense community."

He and Ross Lam, his Black Sage cofounder, are poised to take advantage of this opening for the upstarts looking to take on the defense industrys giants. They initially collaborated on a project that trained machine-learning algorithms to automatically detect deer on highways based on radar and infrared camera data. Eventually, they realized that the same approach could help spot drones and other unidentified flying objects.

Since the self-funded startup's launch in 2015, it has won multiple contracts from the United States government (including for U.S. military forces deployed in Iraq and Afghanistan) and from U.S. allies.

Romero says it's fairly straightforward to apply machine learning to the task of automatically detecting and classifying flying objects. But because the stakes are high (mistakenly shooting down a small passenger plane or failing to take out an explosives-laden drone intruder could be equally disastrous), Black Sage puts its system through a rigorous training phase when it's installed at a new site. The system's radar and infrared cameras capture information about each unidentified flying object's velocity, size, altitude, and so forth. Then a human operator helps train the machine-learning algorithms by positively identifying certain classes of drones (rotor or fixed-wing) as well as other objects such as birds or manned aircraft. For proof that it has learned its lessons well, the AI is tested against 20 percent of the positively identified data set, the part reserved specifically for cross-validation.
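The hold-out scheme described here is standard practice: shuffle the labeled data, train on 80 percent, and validate on the remaining 20 percent. A minimal sketch in Python with synthetic stand-in data (the feature columns and label set are illustrative assumptions, not Black Sage's actual schema):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for positively identified tracks: velocity, size, altitude
# per object, labeled e.g. 0=rotor drone, 1=fixed-wing, 2=bird, 3=aircraft.
features = rng.normal(size=(1000, 3))
labels = rng.integers(0, 4, size=1000)

# Shuffle, then reserve 20% of the labeled data for validation.
idx = rng.permutation(len(features))
cut = int(0.8 * len(features))
train_idx, val_idx = idx[:cut], idx[cut:]

X_train, y_train = features[train_idx], labels[train_idx]
X_val, y_val = features[val_idx], labels[val_idx]
print(len(X_train), len(X_val))  # 800 200
```

The shuffle before the split matters: without it, any ordering in how the operator labeled objects (say, all birds first) would leak into a skewed validation set.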

Another company, called Dedrone (originally based in Kassel, Germany, but currently headquartered in San Francisco), is taking a similar approach. When a Dedrone system is being installed at a new site, humans label unfamiliar objects as part of the training process, which also updates the company's proprietary DroneDNA library. Since its launch in 2014, Dedrone's machine-learning software has helped safeguard events and locations such as a Clinton-Trump presidential debate, the World Economic Forum, and Citi Field, home of the New York Mets baseball team.

"Each time we update DroneDNA, we process over 250 million different images of drones, aircraft, birds, and other objects," says Michael Dyballa, Dedrone's director of engineering. "In the past eight months, we've annotated 3 million drone images."

Though Black Sage's and Dedrone's automated detection systems are said to be capable of running without human assistance after their respective training phases, the companies' clients may choose to put humans in the loop for engaging active defenses, such as jammers or lasers, to take down flying intruders. Such caution is critical at sites like airports, where drone detection accuracy greater than 90 percent still means the occasional false alarm or case of mistaken identity. Even so, a human's interpretive ability can only supplement the ceaseless vigilance that AI systems will need to provide as the number of drones continues to rise.

Link:

AI to Ensure Fewer UFOs - IEEE Spectrum

AI is not optional for retail – VentureBeat

Most people don't realize that they're likely exposed to AI each and every time they shop online, whether it's on eBay, Nordstrom.com, Warby Parker, or any other retailer. When you are searching for an item and a merchandising strip appears saying something like "similar items," that's AI in its simplest terms. It's what gives retailers the ability to automatically make informed recommendations.

AI has been around for many years, but recent advancements have moved AI out of the realm of science fiction and made it a business imperative. The game changers: powerful new GPUs, dedicated hardware, new algorithms, and platforms for deep learning. These enable massive data inputs to be calculated quickly and made actionable, as technology powers new algorithms that dramatically increase the speed and depth of learning. In mere seconds, deep learning can reach across billions of data points with thousands of signals and dozens of layers.

We all aspire to a grand vision of AI's role in commerce, and recent developments are creating a fertile environment for new forms of personalization to occur between brands and consumers. Make no mistake about it, the implications of AI will be profound. This is the new frontier of commerce.

As an industry, we are just beginning to scratch the surface of AI. In the next few years, we will see AI-powered shopping assistants embedded across a wide variety of devices and platforms. Shopping occasions will take advantage of camera, voice interfaces, and text.

We are already witnessing the early success of voice-activated assistants like Google Home, Siri, and Cortana. It won't be long before we see virtual and augmented reality platforms commercialized, as well. We see a future rich with voice-activated and social media assistants on platforms such as Messenger, WeChat, WhatsApp, and Instagram. Personal assistants will be everywhere and are already being woven into the fabric of everyday life. This means commerce will become present wherever and whenever the user is engaged on the social, messaging, camera, or voice-activated platforms of their choice.

AI by itself is simply a catalyst for achieving greater levels of personalization with shoppers. Customer data and human intelligence are the critical ingredients needed to run a personal AI engine. As we continue to launch more sophisticated applications, technologists should continue to focus on how to make greater use of our treasure trove of customer data. Looking ahead, the industry will evolve to combine customer data and human expertise into a deep knowledge graph. This will establish a knowledge base to create highly personal and contextual experiences for consumers. For the commerce industry, this will allow us to get a clearer understanding of shoppers' intent and to serve them in a more personalized way.

Keyword search for shopping is not enough anymore. The ability to use text, voice, and photos is becoming the new norm because these avenues provide users with a much richer and more efficient way to express their initial shopping intent. We call this multimodal shopping. And these new types of consumer interactions yield a tremendous amount of user data that can be poured right back into AI algorithms to improve contextual understanding, predictive modeling, and deep learning.

Across the three spectrums of multimodal AI, we're starting to get much better at understanding our customers and the way they like to interact with us. A few good examples of this have to do with how our personal shopping assistant, eBay ShopBot on Facebook Messenger, remembers you. It can keep track of your shirt size or the brands you like, so it won't keep suggesting Nike when you prefer Adidas. The assistant also uses computer vision: it can find similar products it knows you like based on a similar image or an exact photo match.
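At its simplest, an assistant that "remembers you" is filtering candidate results against a stored profile. A hypothetical sketch (the field names and tiny catalog here are invented for illustration, not eBay's actual data model):

```python
# Remembered preferences for one shopper.
profile = {"size": "M", "preferred": {"Adidas"}, "disliked": {"Nike"}}

catalog = [
    {"name": "Running shoe", "brand": "Nike",   "size": "M"},
    {"name": "Track jacket", "brand": "Adidas", "size": "M"},
    {"name": "Track jacket", "brand": "Adidas", "size": "S"},
]

def recommend(items, profile):
    """Drop disliked brands and wrong sizes; rank preferred brands first."""
    matches = [item for item in items
               if item["brand"] not in profile["disliked"]
               and item["size"] == profile["size"]]
    return sorted(matches, key=lambda i: i["brand"] not in profile["preferred"])

print(recommend(catalog, profile))  # only the size-M Adidas jacket survives
```

A production system would learn these preferences from behavior rather than store them explicitly, but the effect on the result list is the same: the Nike suggestion never reappears.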

Innovating on a canvas of AI provides many new opportunities to create highly contextual and personalized shopping experiences. From our perspective, every company should be investing heavily in AI, and it shouldn't just be about using cognitive services. Companies should actually be developing their own models that keep them on the cutting edge of technology. While there is still a lot of work to be done in this area, one thing is clear: the companies that chart the right course in this exciting endeavor will prosper. The ones that don't will face extinction.

Japjit Tulsi is the VP of Engineering at eBay.

Read more from the original source:

AI is not optional for retail - VentureBeat

Google creates AI that can make its own plans and envisage consequences of its actions – The Independent


See the rest here:

Google creates AI that can make its own plans and envisage consequences of its actions - The Independent

Samsung Galaxy S8’s Bixby AI could beat Google Assistant on this front – CNET

My AI is smarter than your AI.

That's the taunt that Samsung Galaxy S8 owners may be able to lob at Google Pixel users if the S8's rumored Bixby Assistant launches with seven or eight languages, as reported by SamMobile.

Samsung's Bixby AI will go after Google Assistant, Apple's Siri and Amazon Alexa for phones

In the Google Pixel, Assistant currently supports two languages, according to Google's website: English and German. The Google Allo app, which also uses Google Assistant and works on more phones, supports five languages: English, German, Hindi, Japanese and Portuguese. (You can still use Google's voice search/Google Now with many more languages on the Pixel phones, but the Google Assistant launch gesture turns off when you switch your primary language to, say, Spanish.)

Launching its own smart AI assistant is an important move for Samsung and its future Galaxy and Note phones. The company, which strives to dominate the smartphone world against Apple's iPhone, stands to win fans if its Bixby assistant can outperform Google's Assistant, Apple's Siri and Amazon's Alexa, which will land on its first phone later this month.

This isn't the first time that Samsung has tried to out-Google Google either. The company hoped to supplant Google's voice search tool with Samsung's branded S Voice app, and introduced other software services of its own. The company has largely pulled back on preloaded apps and shuttered some of the services, so it'll be interesting to see how well Bixby AI will be able to compete with more established assistants, especially in these early days of AI on phones.

The Samsung Galaxy S8 is expected to launch March 29 and sell in mid-April.

Samsung did not immediately respond to CNET's request for comment.

See the original post here:

Samsung Galaxy S8's Bixby AI could beat Google Assistant on this front - CNET

The Next Generation Of Artificial Intelligence (Part 2) – Forbes

Deep learning pioneer Yoshua Bengio has provocative ideas about the future of AI.

For the first part of this article series, see here.

The field of artificial intelligence moves fast. It has only been 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless.

If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today. Methods that are currently considered cutting-edge will have become outdated; methods that today are nascent or on the fringes will be mainstream.

What will the next generation of artificial intelligence look like? Which novel AI approaches will unlock currently unimaginable possibilities in technology and business?

My previous column covered three emerging areas within AI that are poised to redefine the field, and society, in the years ahead. This article will cover three more.

AI is moving to the edge.

There are tremendous advantages to being able to run AI algorithms directly on devices at the edge (e.g., phones, smart speakers, cameras, vehicles) without sending data back and forth from the cloud.

Perhaps most importantly, edge AI enhances data privacy because data need not be moved from its source to a remote server. Edge AI is also lower latency since all processing happens locally; this makes a critical difference for time-sensitive applications like autonomous vehicles or voice assistants. It is more energy- and cost-efficient, an increasingly important consideration as the computational and economic costs of machine learning balloon. And it enables AI algorithms to run autonomously without the need for an Internet connection.

Nvidia CEO Jensen Huang, one of the titans of the AI business world, sees edge AI as the future of computing: "AI is moving from the cloud to the edge, where smart sensors connected to AI computers can speed checkouts, direct forklifts, orchestrate traffic, save power. In time, there will be trillions of these small autonomous computers, powered by AI."

But in order for this lofty vision of ubiquitous intelligence at the edge to become a reality, a key technology breakthrough is required: AI models need to get smaller. A lot smaller. Developing and commercializing techniques to shrink neural networks without compromising their performance has thus become one of the most important pursuits in the field of AI.

The typical deep learning model today is massive, requiring significant computational and storage resources in order to run. OpenAI's new language model GPT-3, which made headlines this summer, has a whopping 175 billion model parameters, requiring more than 350 GB just to store the model. Even models that don't approach GPT-3 in size are still extremely computationally intensive: ResNet-50, a widely used computer vision model developed a few years ago, uses about 3.8 billion floating-point operations to process a single image.
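The 350 GB figure follows directly from the parameter count if each weight is stored as a 16-bit float (an assumption; at full 32-bit precision the model would need twice as much):

```python
params = 175e9           # GPT-3 parameter count
bytes_per_param = 2      # fp16: 2 bytes per weight (assumed)

storage_gb = params * bytes_per_param / 1e9
print(storage_gb)        # 350.0 GB, before optimizer state or activations
```

Running the model requires far more than this, since activations and (during training) optimizer state add multiples of the weight storage on top.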

These models cannot run at the edge. The hardware processors in edge devices (think of the chips in your phone, your Fitbit, or your Roomba) are simply not powerful enough to support them.

Developing methods to make deep learning models more lightweight therefore represents a critical unlock: it will unleash a wave of product and business opportunities built around decentralized artificial intelligence.

How would such model compression work?

Researchers and entrepreneurs have made tremendous strides in this field in recent years, developing a series of techniques to miniaturize neural networks. These techniques can be grouped into five major categories: pruning, quantization, low-rank factorization, compact convolutional filters, and knowledge distillation.

Pruning entails identifying and eliminating the redundant or unimportant connections in a neural network in order to slim it down. Quantization compresses models by using fewer bits to represent values. In low-rank factorization, a model's tensors are decomposed in order to construct sparser versions that approximate the original tensors. Compact convolutional filters are specially designed filters that reduce the number of parameters required to carry out convolution. Finally, knowledge distillation involves using the full-sized version of a model to teach a smaller model to mimic its outputs.
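Of the five, post-training quantization is the easiest to illustrate concretely. A minimal NumPy sketch of a symmetric 8-bit scheme: one scale factor maps a 32-bit float weight matrix onto int8, shrinking storage 4x while bounding the per-weight reconstruction error by half the quantization step:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)

# One scale factor maps the float range onto the int8 range [-127, 127].
scale = np.abs(weights).max() / 127.0
quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to check fidelity; storage drops from 4 bytes to 1 per weight.
restored = quantized.astype(np.float32) * scale
max_error = np.abs(weights - restored).max()
print(f"{weights.nbytes // quantized.nbytes}x smaller, "
      f"max error {max_error:.5f} (bound: {scale / 2:.5f})")
```

Real deployments typically quantize per-channel rather than per-tensor, and may quantize activations as well, but the core trade (fewer bits per value in exchange for a bounded approximation error) is exactly this.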

These techniques are mostly independent from one another, meaning they can be deployed in tandem for improved results. Some of them (pruning, quantization) can be applied after the fact to models that already exist, while others (compact filters, knowledge distillation) require developing models from scratch.

A handful of startups has emerged to bring neural network compression technology from research to market. Among the more promising are Pilot AI, Latent AI, Edge Impulse and Deeplite. As one example, Deeplite claims that its technology can make neural networks 100x smaller, 10x faster, and 20x more power efficient without sacrificing performance.

"The number of devices in the world that have some computational capability has skyrocketed in the last decade," explained Pilot AI CEO Jon Su. "Pilot AI's core IP enables a significant reduction in the size of the AI models used for tasks like object detection and tracking, making it possible for AI/ML workloads to be run directly on edge IoT devices. This will enable device manufacturers to transform the billions of sensors sold every year (things like push-button doorbells, thermostats, or garage door openers) into rich tools that will power the next generation of IoT applications."

Large technology companies are actively acquiring startups in this category, underscoring the technology's long-term strategic importance. Earlier this year Apple acquired Seattle-based Xnor.ai for a reported $200 million; Xnor's technology will help Apple deploy edge AI capabilities on its iPhones and other devices. In 2019 Tesla snapped up DeepScale, one of the early pioneers in this field, to support inference on its vehicles.

And one of the most important technology deals in years, Nvidia's pending $40 billion acquisition of Arm, announced last month, was motivated in large part by the accelerating shift to efficient computing as AI moves to the edge.

Emphasizing this point, Nvidia CEO Jensen Huang said of the deal: "Energy efficiency is the single most important thing when it comes to computing going forward....together, Nvidia and Arm are going to create the world's premier computing company for the age of AI."

In the years ahead, artificial intelligence will become untethered, decentralized and ambient, operating on trillions of devices at the edge. Model compression is an essential enabling technology that will help make this vision a reality.

Today's machine learning models mostly interpret and classify existing data: for instance, recognizing faces or identifying fraud. Generative AI is a fast-growing new field that focuses instead on building AI that can generate its own novel content. To put it simply, generative AI takes artificial intelligence beyond perceiving to creating.

Two key technologies are at the heart of generative AI: generative adversarial networks (GANs) and variational autoencoders (VAEs).

The more attention-grabbing of the two methods, GANs were invented by Ian Goodfellow in 2014 while he was pursuing his PhD at the University of Montreal under AI pioneer Yoshua Bengio.

Goodfellow's conceptual breakthrough was to architect GANs with two separate neural networks, and then pit them against one another.

Starting with a given dataset (say, a collection of photos of human faces), the first neural network (called the generator) begins generating new images that, in terms of pixels, are mathematically similar to the existing images. Meanwhile, the second neural network (the discriminator) is fed photos without being told whether they are from the original dataset or from the generator's output; its task is to identify which photos have been synthetically generated.

As the two networks iteratively work against one another (the generator trying to fool the discriminator, the discriminator trying to suss out the generator's creations), they hone one another's capabilities. Eventually the discriminator's classification success rate falls to 50%, no better than random guessing, meaning that the synthetically generated photos have become indistinguishable from the originals.
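
This adversarial process can be written compactly as a two-player minimax game. The standard objective from Goodfellow's 2014 paper is:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] +
  \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

Here D(x) is the discriminator's estimated probability that a sample x is real, and G(z) maps random noise z to a synthetic sample. At the equilibrium described above, D outputs 1/2 everywhere, because real and generated samples have become statistically indistinguishable.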

In 2016, AI great Yann LeCun called GANs "the most interesting idea in the last ten years in machine learning."

VAEs, introduced around the same time as GANs, are a conceptually similar technique that can be used as an alternative to GANs.

Like GANs, VAEs consist of two neural networks that work in tandem to produce an output. The first network (the encoder) takes a piece of input data and compresses it into a lower-dimensional representation. The second network (the decoder) takes this compressed representation and, based on a probability distribution of the original data's attributes and a randomness function, generates novel outputs that riff on the original input.
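
The shape of that encode-sample-decode round trip can be sketched in a few lines of Python. This is purely structural: the "encoder" and "decoder" here are fixed toy functions standing in for trained neural networks, compressing the input to a mean and spread and then sampling novel outputs from that distribution.

```python
import random
random.seed(0)

def encode(x):
    """Toy encoder: compress input to a low-dimensional (mean, spread) code."""
    mu = sum(x) / len(x)
    sigma = (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5
    return mu, sigma

def decode(mu, sigma, n):
    """Toy decoder: sample novel outputs that riff on the compressed code."""
    return [random.gauss(mu, sigma) for _ in range(n)]

original = [2.0, 2.5, 1.5, 2.2, 1.8]
mu, sigma = encode(original)      # compressed representation
novel = decode(mu, sigma, n=5)    # new data drawn from its distribution
print(round(mu, 2), round(sigma, 3))
print([round(v, 2) for v in novel])
```

The randomness in the decoder is what makes each output novel rather than a reconstruction of the input.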

In general, GANs generate higher-quality output than do VAEs but are more difficult and more expensive to build.

Like artificial intelligence more broadly, generative AI has inspired both widely beneficial and frighteningly dangerous real-world applications. Only time will tell which will predominate.

On the positive side, one of the most promising use cases for generative AI is synthetic data. Synthetic data is a potentially game-changing technology that enables practitioners to digitally fabricate the exact datasets they need to train AI models.

Getting access to the right data is both the most important and the most challenging part of AI today. Generally, in order to train a deep learning model, researchers must collect thousands or millions of data points from the real world. They must then have labels attached to each data point before the model can learn from the data. This is at best an expensive and time-consuming process; at worst, the data one needs is simply impossible to get one's hands on.

Synthetic data upends this paradigm by enabling practitioners to artificially create high-fidelity datasets on demand, tailored to their precise needs. For instance, using synthetic data methods, autonomous vehicle companies can generate billions of different driving scenes for their vehicles to learn from without needing to actually encounter each of these scenes on real-world streets.
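
The "billions of scenes" idea can be illustrated with a toy generator. The parameter names below are hypothetical; real synthetic-data pipelines render photorealistic imagery rather than dictionaries, but the principle is the same: every combination of scene parameters becomes a labeled training example, on demand.

```python
import itertools

# Hypothetical scene parameters for a toy driving-scene generator.
weather = ["clear", "rain", "fog", "snow"]
time_of_day = ["dawn", "noon", "dusk", "night"]
hazard = ["none", "pedestrian", "cyclist", "stalled_car"]

# Every parameter combination becomes a labeled scene, with no
# real-world data collection or manual labeling required.
scenes = [
    {"weather": w, "time": t, "hazard": h, "label": h != "none"}
    for w, t, h in itertools.product(weather, time_of_day, hazard)
]
print(len(scenes))  # 64 distinct labeled scenes from 12 parameter values
```

Because the generator knows what it put in each scene, the labels come for free, which is exactly what makes synthetic data so much cheaper than hand-labeled real-world collection.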

As synthetic data approaches real-world data in accuracy, it will democratize AI, undercutting the competitive advantage of proprietary data assets. In a world in which data can be inexpensively generated on demand, the competitive dynamics across industries will be upended.

A crop of promising startups has emerged to pursue this opportunity, including Applied Intuition, Parallel Domain, AI.Reverie, Synthesis AI and Unlearn.AI. Large technology companies, among them Nvidia, Google and Amazon, are also investing heavily in synthetic data. The first major commercial use case for synthetic data was autonomous vehicles, but the technology is quickly spreading across industries, from healthcare to retail and beyond.

Counterbalancing the enormous positive potential of synthetic data, a different generative AI application threatens to have a widely destructive impact on society: deepfakes.

We covered deepfakes in detail in this column earlier this year. In essence, deepfake technology enables anyone with a computer and an Internet connection to create realistic-looking photos and videos of people saying and doing things that they did not actually say or do.

The first use case to which deepfake technology has been widely applied is pornography. According to a July 2019 report from startup Sensity, 96% of deepfake videos online are pornographic. Deepfake pornography is almost always non-consensual, involving the artificial synthesis of explicit videos that feature famous celebrities or personal contacts.

From these dark corners of the Internet, the use of deepfakes has begun to spread to the political sphere, where the potential for harm is even greater. Recent deepfake-related political incidents in Gabon, Malaysia and Brazil may be early examples of what is to come.

In a recent report, The Brookings Institution grimly summed up the range of political and social dangers that deepfakes pose: "distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office."

The core technologies underlying synthetic data and deepfakes are the same. Yet the use cases and potential real-world impacts are diametrically opposed.

It is a great truth in technology that any given innovation can either confer tremendous benefits or inflict grave harm on society, depending on how humans choose to employ it. It is true of nuclear energy; it is true of the Internet. It is no less true of artificial intelligence. Generative AI is a powerful case in point.

In his landmark book Thinking, Fast and Slow, Nobel-winning psychologist Daniel Kahneman popularized the concepts of System 1 thinking and System 2 thinking.

System 1 thinking is intuitive, fast, effortless and automatic. Examples of System 1 activities include recognizing a friend's face, reading the words on a passing billboard, or completing the phrase "War and _______." System 1 requires little conscious processing.

System 2 thinking is slower, more analytical and more deliberative. Humans use System 2 thinking when effortful reasoning is required to solve abstract problems or handle novel situations. Examples of System 2 activities include solving a complex brain teaser or determining the appropriateness of a particular behavior in a social setting.

Though the System 1/System 2 framework was developed to analyze human cognition, it maps remarkably well to the world of artificial intelligence today. In short, today's cutting-edge AI systems excel at System 1 tasks but struggle mightily with System 2 tasks.

AI leader Andrew Ng summarized this well: "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future."

Yoshua Bengio's 2019 keynote address at NeurIPS explored this exact theme. In his talk, Bengio called on the AI community to pursue new methods to enable AI systems to go beyond System 1 tasks to System 2 capabilities like planning, abstract reasoning, causal understanding, and open-ended generalization.

"We want to have machines that understand the world, that build good world models, that understand cause and effect, and can act in the world to acquire knowledge," Bengio said.

There are many different ways to frame the AI discipline's agenda, trajectory and aspirations. But perhaps the most powerful and compact way is this: in order to progress, AI needs to get better at System 2 thinking.

No one yet knows with certainty the best way to move toward System 2 AI. The debate over how to do so has coursed through the field in recent years, often contentiously. It is a debate that evokes basic philosophical questions about the concept of intelligence.

Bengio is convinced that System 2 reasoning can be achieved within the current deep learning paradigm, albeit with further innovations to today's neural networks.

"Some people think we need to invent something completely new to face these challenges, and maybe go back to classical AI to deal with things like high-level cognition," Bengio said in his NeurIPS keynote. "[But] there is a path from where we are now, extending the abilities of deep learning, to approach these kinds of high-level questions of cognitive system 2."

Bengio pointed to attention mechanisms, continuous learning and meta-learning as existing techniques within deep learning that hold particular promise for the pursuit of System 2 AI.

Others, though, believe that the field of AI needs a more fundamental reset.

Professor and entrepreneur Gary Marcus has been a particularly vocal advocate of non-deep-learning approaches to System 2 intelligence. Marcus has called for a hybrid solution that combines neural networks with symbolic methods, which were popular in the earliest years of AI research but have fallen out of favor more recently.

"Deep learning is only part of the larger challenge of building intelligent machines," Marcus wrote in the New Yorker in 2012, at the dawn of the modern deep learning era. "Such techniques lack ways of representing causal relationships and are likely to face challenges in acquiring abstract ideas....They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used."

Marcus co-founded robotics startup Robust.AI to pursue this alternative path toward AI that can reason. Just yesterday, Robust announced its $15 million Series A fundraise.

Computer scientist Judea Pearl is another leading thinker who believes the road to System 2 reasoning lies beyond deep learning. Pearl has for years championed causal inference (the ability to understand cause and effect, not just statistical association) as the key to building truly intelligent machines. As Pearl put it recently: "All the impressive achievements of deep learning amount to just curve fitting."
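
Pearl's "curve fitting" complaint can be seen in a toy example: two variables driven by a hidden common cause look strongly associated, yet neither causes the other, and accounting for the cause makes the association vanish. A minimal sketch, illustrative only and far simpler than Pearl's formal machinery:

```python
import random
import statistics
random.seed(1)

def corr(xs, ys):
    """Pearson correlation, computed by hand."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs) *
           sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den

# z is a hidden common cause; x and y both follow z,
# but neither causes the other.
z = [random.gauss(0, 1) for _ in range(5000)]
x = [v + random.gauss(0, 0.3) for v in z]
y = [v + random.gauss(0, 0.3) for v in z]

print(round(corr(x, y), 2))   # strong association: curve fitting alone

# Accounting for the common cause: the residuals are uncorrelated.
rx = [a - b for a, b in zip(x, z)]
ry = [a - b for a, b in zip(y, z)]
print(round(corr(rx, ry), 2))  # near zero
```

A pure pattern-matcher sees only the first number; a system with a causal model of the data can tell that intervening on x would do nothing to y.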

Of the six AI areas explored in this article series, this final one is, purposely, the most open-ended and abstract. There are many potential paths to System 2 AI; the road ahead remains shrouded. It is likely to be a circuitous and perplexing journey. But within our lifetimes, it will transform the economy and the world.

See the article here:

The Next Generation Of Artificial Intelligence (Part 2) - Forbes

AI hiring tools aim to automate every step of recruiting – Quartz

The firms that sell AI tools to automate recruiting have started to work the pandemic into their pitches to prospective clients: As the economy tanks and the hiring process moves almost entirely online, AI recruiting tools offer a chance to save some money and make use of new troves of digital data on prospective candidates.

In fact, the field is expected to expand during the crisis and has been attracting new investment. It's not just automated resume-sifting: There are firms competing to automate every stage of the hiring process. And while the machines seldom make hiring decisions on their own, critics say their use can perpetuate discrimination and inequality.

AI firm Textio claims it can optimize every word of a job posting, using a machine learning model that correlates certain turns of phrase with better hiring outcomes. Companies hiring in California, for example, are advised to describe things as "awesome" to appeal to local job seekers, while New York employers are counseled to avoid the adjective.

Big name firms like LinkedIn and ZipRecruiter use matchmaking algorithms to comb through hundreds of millions of job postings to connect candidates with compatible companies. Smaller competitors, like GoArya, seek to differentiate themselves by scraping data from the internet, including social media profiles, to inform recruiting decisions.

Firms like Mya promise to automate the task of reaching out to candidates via email, text, WhatsApp, or Facebook Messenger, using natural language processing to have "open-ended, natural, and dynamic conversations." The company's chatbots even conduct basic screening interviews, filtering out early-stage applicants who don't meet the employer's qualifications. Other companies, like XOR and Paradox, sell chatbots designed to schedule interviews and field applicants' questions.

Some AI vendors, including Ideal, CVViZ, Skillate, and SniperAI, promise to cut the drudgery of hiring by automatically comparing applicants' resumes with those of current employees. Tools like these have faced criticism for recreating existing inequalities: Even if the algorithms are programmed to ignore traits like race or gender, they might learn from past hiring data to pick up on proxies for these traits: for example, prioritizing candidates who played lacrosse or are named Jared. Amazon developed its own screener and quickly scrapped it in 2018 after finding it was biased against women.
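
The proxy problem is easy to reproduce with toy data. In this hypothetical sketch (invented numbers, not drawn from any real screener), the model never sees the group label, yet a correlated hobby feature carries the historical bias through anyway:

```python
import random
random.seed(0)

# Hypothetical toy data: each past candidate has a hobby feature and a
# hidden group label the screener never sees. Historical hiring favored
# group A, and group A members play lacrosse far more often.
def make_candidate():
    group = random.choice(["A", "B"])
    lacrosse = random.random() < (0.8 if group == "A" else 0.1)  # proxy
    hired = random.random() < (0.7 if group == "A" else 0.3)     # biased history
    return group, lacrosse, hired

past = [make_candidate() for _ in range(10_000)]

def hire_rate(candidates):
    """Fraction of candidates who were hired."""
    return sum(1 for c in candidates if c[2]) / max(len(candidates), 1)

with_lacrosse = [c for c in past if c[1]]
without = [c for c in past if not c[1]]

# A naive screener trained to mimic past hires learns P(hired | lacrosse).
# Even with group excluded from the inputs, the proxy carries the bias:
print(round(hire_rate(with_lacrosse), 2))  # noticeably higher
print(round(hire_rate(without), 2))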

Recruiting firm HireVue, which boasts 700 corporate clients including Hilton and Goldman Sachs, sells an AI tool that analyzes interviewees' facial movements, word choice, and speaking voices to assign them an employability score. The platform is so ubiquitous in industries like finance and hospitality that some colleges have taken to coaching interviewees on how to speak and move to appeal to the platform's algorithms.

AI firm Humantic offers to "understand every individual without spending your time or theirs" by using AI to create psychological profiles of applicants based on the words they use in resumes, cover letters, LinkedIn profiles, and any other piece of text they submit.

Meanwhile, Pymetrics puts current and prospective employees through a series of 12 games to glean data about their personalities. Its algorithms use the data to find applicants that fit company culture. In a 2017 presentation, a Pymetrics representative demonstrated a game that required users to react when a red circle appears, but do nothing when they see a green circle. "That game was actually looking at your levels of impulsivity, it was looking at your attention span, and it was looking at how you learn from your mistakes," she told the crowd. Critics suggest the games might just measure which candidates are good at puzzles.

Read the original here:

AI hiring tools aim to automate every step of recruiting - Quartz

Gartner vision quest sees Microsoft, Google and IBM nipping at Amazon Web Services’ heels in cloud AI – The Register

Gartner analysts have exhaled a "Magic Quadrant" report on Cloud AI developer services, concluding that while AWS is fractionally ahead, rivals Microsoft and Google are close behind, and that IBM is the only other company deserving a place in the "Leaders" section of the chart.

Gartner's team of five mystics reckon that this is a significant topic. "By 2023, 40 per cent of development teams will be using automated machine learning services to build models that add AI capabilities to their applications, up from less than 2 per cent in 2019," they predicted. The analysts also said that 50 per cent of "data scientist activities" will be automated by 2025, alleviating the current shortage of skilled humans.

The companies studied were Aible, AWS, Google, H2O.ai, IBM, Microsoft, Prevision.io, Salesforce, SAP and Tencent. Alibaba and Baidu were excluded because of a requirement that products span "at least two major regions".

Gartner's Magic Quadrant for Cloud AI developer services

AWS was praised for its wide range of services, including SageMaker AutoPilot, announced late last year, which automatically generates machine-learning models. However, some shortcomings in SageMaker were addressed during the course of the research, said the analysts. It is a complex portfolio, though, and can be confusing. In addition: "When users move from development to production environments, the cost of execution may be higher than they anticipated." Gartner suggested developers attempt to model production costs early on, and even that they plan to move compute-intensive workloads on-premises as this may be more cost-effective.
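
Gartner's advice to model production costs early can start as a back-of-the-envelope break-even calculation. All figures below are hypothetical placeholders, not actual cloud or hardware pricing:

```python
# Hypothetical break-even sketch: cloud GPU rate vs. amortized on-prem server.
cloud_rate = 3.00                 # assumed $/hour for a cloud GPU instance
server_cost = 25_000              # assumed up-front cost of an on-prem server
server_life_hours = 3 * 365 * 24  # amortize over three years of 24/7 use

onprem_rate = server_cost / server_life_hours   # effective $/hour on-prem
break_even_hours = server_cost / cloud_rate     # cloud hours matching the server

print(round(onprem_rate, 2))
print(round(break_even_hours))
```

Under these assumed numbers, a workload that runs for more hours than the break-even point is cheaper on-premises, which is the scenario Gartner flags for compute-intensive production inference.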

Google was ranked just ahead of Microsoft on "completeness of vision" but fractionally behind on "ability to execute". Gartner's analysts were impressed with its strong language services, as well as its "what-if" tool, which lets you inspect ML models to assist explainability, the art of determining why an AI system delivers the results it does. Another plus was that Google's image recognition service can be deployed in a container on-premises. Snags? The report identified a lack of maturity in Google's cloud platform: "The organization is still undergoing substantial change, the full impact of which will not be apparent for some time."

Microsoft won plaudits for the deployment flexibility of its AI services, on Azure or on-premises, as well as its wide selection of supported languages and its high level of investment in AI. A weakness, said the analysts, was lack of NLG (Natural Language Generation) services, though these are on the roadmap. The report also noted: "Microsoft can be challenging to engage with, due to a confusing branding strategy that spans multiple business units and includes Azure cognitive services and Cortana services. This overlap often confuses customers and can frustrate them." In addition, "it can be difficult to know which part of Microsoft to contact."

IBM is placed a little behind the other three, but still identified as having a "robust set of AI ML services". Further, "according to its users, developing conversational agents on IBM's Watson Assistant platform is a relatively painless experience." That said, like Microsoft, IBM can be difficult to work with, having "different products, from different divisions, being handled by various development teams and having various pricing schemes," said the analysts.

All four contenders can maybe take some comfort from Gartner's report, which places the three leaders close together and IBM, with its smaller cloud product overall, not that far behind. Other considerations, such as existing business relationships, or points of detail in the AI services you want to use, could shift any one of them into the top spot for a specific project.

One of the points the researchers highlighted is that it can be cheaper to run compute-intensive workloads on-premises. Using standard tools gives the most flexibility, and in this respect Google's recent announcement of Kubeflow 1.0, which lets devs run ML workflows on Kubernetes (K8s), is of interest. A developer can use Kubeflow on any K8s cluster including OpenShift. Google said it will support running ML workloads on-premises using Anthos in an upcoming release.


Here is the original post:

Gartner vision quest sees Microsoft, Google and IBM nipping at Amazon Web Services' heels in cloud AI - The Register