VC Investments In Enterprise Tech And AI – Forbes

According to Toptal, the venture capital sector has grown by 12.1% annually since the financial crisis. The same source tells us that the amount of capital raised per year has grown by 100% over the decade.

Hundreds of venture capitalists back startups and entrepreneurs with billions of dollars each year. Many businesses rely on these VC investments, and entire economies depend on them.

When choosing which projects to back, most investors look for innovation, expertise, and profitable opportunities. I'm going to take a look at some of the top-tier venture capitalists and their investments in the field of enterprise tech and AI.

Companies with a focus on AI raised over $9.3 billion in the US during 2018. The number of venture capital investments keeps growing on a global scale, opening up new opportunities for startups and entrepreneurs who are looking for their golden ticket to the enterprise tech and AI space.

As stated on Kurtosys, venture capital deals in the US ranged between $10 million and $25 million ten years ago. Today, $50-million-plus deals are taking a greater share of total investment.

Top-tier macro venture capitalists in the startup ecosystem include Benchmark, Index Ventures, Felicis Ventures, and Union Square Ventures.

Even micro and local venture capital firms such as Northstar Ventures and Base Ventures are hitting these large numbers. One example on the local micro-VC side is Aybuben Ventures, the first pan-Armenian venture capital fund focused on Armenian tech entrepreneurs.

With a fund of over $50 million, Aybuben Ventures is not limited to people in Armenia only. On the contrary, the fund is open to Armenians all over the world who are engaged in enterprise tech business and development. "Armenians live all over the world, and they are proud of their culture and don't want to lose their identity. Potentially this creates a huge global pool of entrepreneurs, professionals, capital, companies and knowledge which can be leveraged and scaled in any of the world's economies. That said, we welcome interest in our foundation from any organization and without regard to nationality," said Alexander Smbatyan, one of the founding partners of Aybuben Ventures.

Overall, the venture capital space keeps growing, providing technology startups with sufficient funding for growth and expansion. "There is an innate disposition to develop companies that make extensive use of technologies such as artificial intelligence, machine learning, biotechnology and more," Smbatyan added as one of the reasons why it is worth investing in the enterprise tech and AI space.

Read the original post:

VC Investments In Enterprise Tech And AI - Forbes

How Apple reinvigorated its AI aspirations in under a year – Engadget

Well, technically, it's been three years of R&D, but Apple had a bit of trouble getting out of its own way for the first two. See, back in 2011, when Apple released the first version of Siri, the tech world promptly lost its mind. "Siri is as revolutionary as the Mac," the Harvard Business Review crowed, though CNN found that many people feared the company had unwittingly invented Skynet v1.0. But for as revolutionary as Siri appeared to be at first, its luster quickly wore off once the general public got ahold of it and recognized the system's numerous shortcomings.

Fast forward to 2014. Apple is at the end of its rope with Siri's listening and comprehension issues. The company realizes that minor tweaks to Siri's processes can't fix its underlying problems and a full reboot is required. So that's exactly what they did. The original Siri relied on hidden Markov models -- a statistical tool used to model time series data (essentially reconstructing the sequence of states in a system based only on the output data) -- to recognize temporal patterns in handwriting and speech recognition.

The company replaced and supplemented these models with a variety of machine learning techniques, including deep neural networks and "long short-term memory networks" (LSTMNs). These neural networks are effectively more generalized versions of the Markov model. However, because they possess memory and can track context -- as opposed to simply learning patterns, as Markov models do -- they're better equipped to understand nuances like grammar and punctuation and to return a result closer to what the user really intended.
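
To make that difference concrete, here is a minimal sketch, in PyTorch rather than anything Apple has published, of the property that matters: an LSTM's output at each step depends on the entire sequence so far, not just the current state.

```python
# Minimal sketch (PyTorch, not Apple's tooling): an LSTM carries a hidden
# state across the whole sequence, so earlier words can influence how later
# ones are interpreted -- unlike a Markov model, which conditions only on
# the current state.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 32, 64
embed = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

tokens = torch.tensor([[4, 27, 311, 9, 58, 2]])  # a toy utterance, batch of 1
outputs, (h_n, c_n) = lstm(embed(tokens))

# outputs[:, t, :] depends on tokens 0..t, i.e. on long-range context.
print(outputs.shape)  # torch.Size([1, 6, 64])
```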

The new system quickly spread beyond Siri. As Steven Levy points out, "You see it when the phone identifies a caller who isn't in your contact list (but who did email you recently). Or when you swipe on your screen to get a shortlist of the apps that you are most likely to open next. Or when you get a reminder of an appointment that you never got around to putting into your calendar."

By the WWDC 2016 keynote, Apple had made some solid advancements in its AI research. "We can tell the difference between the Orioles who are playing in the playoffs and the children who are playing in the park, automatically," Apple senior vice president Craig Federighi told the assembled crowd.

Also at WWDC 2016, the company released its neural network API, Basic Neural Network Subroutines (BNNS), an array of functions enabling third-party developers to construct neural networks for use on devices across the Apple ecosystem.

However, Apple had yet to catch up with the likes of Google and Amazon, both of which had either already released an AI-powered smart home companion (looking at you, Alexa) or were just about to (Home would be released that November). This was due in part to the fact that Apple faced severe difficulties recruiting and retaining top AI engineering talent, because it steadfastly refused to allow its researchers to publish their findings. That's not so surprising coming from a company so famous for its tight-lipped R&D efforts that it once sued a news outlet because a drunk engineer left a prototype phone in a Palo Alto bar.

"Apple is off the scale in terms of secrecy," Richard Zemel, a professor in the computer science department at the University of Toronto, told Bloomberg in 2015. "They're completely out of the loop." The level of secrecy was so severe that new hires to the AI teams were reportedly directed not to announce their new positions on social media.

"There's no way they can just observe and not be part of the community and take advantage of what is going on," Yoshua Bengio, a professor of computer science at the University of Montreal, told Bloomberg. "I believe if they don't change their attitude, they will stay behind."

Luckily for Apple, those attitudes did change, and quickly. After buying Seattle-based machine learning startup Turi for around $200 million in August 2016, Apple hired AI expert Russ Salakhutdinov away from Carnegie Mellon University that October. It was his influence that finally pushed Apple's AI out of the shadows and into the light of peer review.

In December 2016, while speaking at the Neural Information Processing Systems conference in Barcelona, Salakhutdinov stunned his audience when he announced that Apple would begin publishing its work, going so far as to display an overhead slide reading, "Can we publish? Yes. Do we engage with academia? Yes."

Later that month Apple made good on Salakhutdinov's promise, publishing "Learning from Simulated and Unsupervised Images through Adversarial Training". The paper looked at the shortcomings of using simulated objects to train machine vision systems. It showed that while simulated images are easier to train on than photographs, the results don't work particularly well in the real world. Apple's solution employed a deep-learning system known as a Generative Adversarial Network (GAN), which pitted a pair of neural networks against one another in a race to generate images close enough to photo-realistic to fool a third "discriminator" network. This way, researchers can exploit the ease of training networks on simulated images without the drop in performance once those systems are out of the lab.
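
The paper's exact architecture isn't reproduced here, but the adversarial structure it describes can be sketched roughly as follows; the layer sizes, loss weighting and toy tensors are all placeholder assumptions, not values from the paper.

```python
# Skeletal adversarial loop in the spirit of the paper (not Apple's code):
# a "refiner" makes simulated images look more realistic, while a
# discriminator learns to tell refined images from real photos.
import torch
import torch.nn as nn

refiner = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))
discriminator = nn.Sequential(nn.Flatten(), nn.Linear(35 * 55, 1))

opt_r = torch.optim.Adam(refiner.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    simulated = torch.rand(8, 1, 35, 55)   # stand-in for renderer output
    real = torch.rand(8, 1, 35, 55)        # stand-in for real photographs
    refined = refiner(simulated)

    # Discriminator: label real photos 1, refined images 0.
    d_loss = bce(discriminator(real), torch.ones(8, 1)) + \
             bce(discriminator(refined.detach()), torch.zeros(8, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Refiner: fool the discriminator while staying close to the input
    # (an L1 penalty standing in for the paper's self-regularization term).
    g_loss = bce(discriminator(refined), torch.ones(8, 1)) + \
             0.1 * (refined - simulated).abs().mean()
    opt_r.zero_grad(); g_loss.backward(); opt_r.step()
```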

In January 2017, Apple further signaled its seriousness by joining Amazon, Facebook, Google, IBM and Microsoft in the Partnership on AI. This industry group seeks to establish ethical, transparency and privacy guidelines in the field of AI research while promoting research and cooperation between its members. The following month, Apple drastically expanded its Seattle AI offices, renting a full two floors at Two Union Square and hiring more staff.

"We're trying to find the best people who are excited about AI and machine learning excited about research and thinking long term but also bringing those ideas into products that impact and delight our customers," Apple's director of machine learning Carlos Guestrin told GeekWire.

By March 2017, Apple had hit its stride. Speaking at the EmTech Digital conference in San Francisco, Salakhutdinov laid out the state of AI research, discussing topics ranging from using "attention mechanisms" to better describe the content of photographs to combining curated knowledge sources like Freebase and WordNet with deep-learning algorithms to make AI smarter and more efficient. "How can we incorporate all that prior knowledge into deep-learning?" Salakhutdinov said. "That's a big challenge."

That challenge could soon be a bit easier once Apple finishes developing the Neural Engine chip that it announced this May. Unlike Google devices, which shunt the heavy computational lifting required by AI processes up to the cloud where it is processed on the company's Tensor Processing Units, Apple devices have traditionally split that load between the onboard CPU and GPU.

This Neural Engine will instead handle AI processes as a dedicated standalone component, freeing up valuable processing power for the other two chips. This would not only save battery life by diverting load from the power-hungry GPU, it would also boost the device's onboard AR capabilities and help further advance Siri's intelligence -- potentially exceeding the capabilities of Google's Assistant and Amazon's Alexa.

But even without the added power that a dedicated AI chip can provide, Apple's recent advancements in the field have been impressive to say the least. In the span between two WWDCs, the company managed to release a neural network API, drastically expand its research efforts, poach one of the country's top minds in AI from one of the nation's foremost universities, reverse two years of backwards policy, join the industry's working group as a charter member and finally -- finally -- deliver a Siri assistant that's smarter than a box of rocks. Next year's WWDC is sure to be even more wild.

Image: AFP/Getty (Federighi on stage / network of photos)

Continue reading here:

How Apple reinvigorated its AI aspirations in under a year - Engadget

Descript's new podcast editor includes an AI voice double for dubbing over mistakes – The Verge

Multimedia editing and transcription provider Descript is today announcing a redesigned version of its audio editing software that's geared toward podcast producers. The product, officially called Descript Podcast Studio, features a lot of the forward-thinking approaches to audio editing that the company, created by Groupon founder and former CEO Andrew Mason, was founded on.

Most prominently, that includes the ability to easily edit an artificial intelligence-generated transcription of your audio file as if you were editing a text document. Essentially, Descript turns your audio into text, broken up by who's speaking, and it then lets you manipulate those audio files by editing a text version of the script in a word processor. Delete a sentence or two, and Descript will automatically shorten the audio to make the recording sound smooth and natural.
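
Descript hasn't published its internals, but the core idea can be sketched: if every transcript word carries start and end timestamps, deleting words in the text maps directly to splicing the audio. A minimal illustration, assuming word-level alignment and using the pydub library:

```python
# Hypothetical sketch of the core idea (not Descript's actual code): a
# word-aligned transcript lets a text deletion become an audio cut.
from pydub import AudioSegment

# Word-aligned transcript: (word, start_ms, end_ms).
transcript = [("thanks", 0, 420), ("um", 420, 700),
              ("for", 700, 900), ("listening", 900, 1600)]

def delete_words(audio, transcript, words_to_cut):
    """Remove flagged words from both the transcript and the audio."""
    kept, cursor = AudioSegment.empty(), 0
    new_transcript = []
    for word, start, end in transcript:
        if word in words_to_cut:
            kept += audio[cursor:start]  # keep audio up to the cut
            cursor = end                 # skip the word's span
        else:
            new_transcript.append((word, start, end))
    kept += audio[cursor:]
    # Note: timestamps in new_transcript still refer to the original audio.
    return kept, new_transcript

audio = AudioSegment.from_file("take1.wav")
clean_audio, clean_text = delete_words(audio, transcript, {"um"})
clean_audio.export("take1_clean.wav", format="wav")
```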

The service has been available for roughly two years in a beta-like state, since Mason spun it out of his walking tour app Detour, which he created after being unceremoniously shown the door as Groupon's chief executive. Since then, Descript has worked with professional audio editors at NPR and other outlets to improve the design and feature set ahead of today's official 1.0 release.

With Descript Podcast Studio, the company's software now supports simultaneous and collaborative multitrack editing in the style of Google Docs, with changes synced in real time to the cloud. Descript can also be used simply as a transcription service, with the company providing pro-grade transcription that includes both AI and human-aided audio-to-text services at 15 cents a minute for free users and 7 cents a minute for those who subscribe to its $10-a-month plan.
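
As a quick sanity check on that pricing, the subscription pays for itself once monthly transcription volume passes a simple break-even point (this arithmetic is mine, not Descript's):

```python
# Back-of-envelope check of the quoted pricing: 15 cents/min free vs.
# 7 cents/min on the $10/month plan.
free_rate, sub_rate, sub_fee = 0.15, 0.07, 10.00
break_even_minutes = sub_fee / (free_rate - sub_rate)
print(break_even_minutes)  # 125.0 minutes of transcription per month
```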

But Descript's new podcast product will also come with an all-new AI tool that Mason says can completely overhaul the editing process. It's called Overdub, and it will allow you to create what Descript is calling an AI voice double that can be used to overdub flubbed words or phrases, and can even generate entirely new sentences in your voice all on its own. It relies on technology developed by a Montreal-based AI startup called Lyrebird, which Descript says it has acquired and transformed into its AI research division.

Lyrebird was founded by a trio of PhD students at the MILA research institute in Canada, and the company's technology can create a convincing replica of your voice by training a series of machine learning algorithms on organic voice data. Descript has turned Lyrebird's tech into Overdub, which will create your AI voice double by asking you to read a series of randomly generated sentences out loud. Mason says this can only be done for your own voice, and only after going through the live data-gathering process. That way, it can't be used to create convincing audio deepfakes of other people.

"We're very lucky. In the same way Apple has a business model that allows them to take a customer-friendly stance on privacy, our business model is such that we can take a socially friendly, civically friendly approach to the issue of deepfakes," Mason tells me. "The reason we wanted to build it was to solve our own problem, or the problem anyone has experienced when they want to record something, which is that the process of getting it right is incredibly tedious. Wouldn't it be great if making editorial corrections to the audio content you've recorded were as easy as it is with text?"

Mason says Overdub will only be usable if you're editing audio of your own voice or you have the permission of the voice's owner to make those kinds of edits to the recording. The goal is to make the process of fixing a brief but noticeable stumble, or correcting an obvious error, much less time-intensive. Mason says the point of Overdub is saving you a trip back to the recording booth.

"Audio is the easiest medium of content to create but one of the hardest to edit. Going back in there and recording a new take and splicing it back in so it sounds good is a time-consuming process," he says. To make its approach to this type of controversial AI-based technology clear, Descript has published an ethics policy on its website outlining how Overdub works and the limitations Mason says will prevent it from being abused.

Overdub is just one standout feature of Descript's Podcast Studio software. The company's entire approach, treating audio editing as if it were as easy as editing a text document, means the Descript app is stuffed full of interesting tricks for fast and efficient audio manipulation.

In true Google Docs style, the collaborative editing tools include commenting and annotation, and for audio nerds, Mason says Descript Podcast Studio will come with a head-spinning number of the editing features you get with pricier software like Adobe Audition and Pro Tools. That includes non-destructive editing, crossfading, volume automation, loudness normalization, and track groupings, to name a few. Descript also supports exporting to programs like Audition, Final Cut Pro, and Pro Tools for those who rely on that software in their professional workflow.

Descript is available now for Mac and Windows, and Mason hopes its unique approach to audio editing, combined with the truly next-generation Overdub feature, will ease the editing pain for the scores of new podcast makers entering the scene. "We've seen podcasting entering a golden age and more and more people and companies... are creating audio content," Mason says. "But they still need to get an audio engineer involved to do anything good. This makes it so much more accessible."

Excerpt from:

Descript's new podcast editor includes an AI voice double for dubbing over mistakes - The Verge

GANs and NFTs: AI Artists in the Crypto Space ARTnews.com – ARTnews

When Christie's hosted its first Art + Tech Summit in 2018, the topic was the blockchain. The second edition, in June 2019, focused on artificial intelligence. Blockchain and AI are two big, buzzy topics, and they have intersected in unexpected ways, especially during this year's crypto art boom. Artists whose work uses generative adversarial networks (GANs), algorithms that pit computers against each other to produce original machine-made output approximating the human-made training data, have turned to crypto platforms not only to sell their work, but also to explore ways of critically and creatively engaging the blockchain.

People who make creative work with AI tend to be self-taught, as artists or engineers or both. They're drawn to new technologies and ideas taking shape at the margins of culture. There's a provocative friction between the figure of the tinkering outsider and the reputations of AI and blockchains in the popular imagination as rapidly growing forms of technological infrastructure with massive resources invested in them, behemoths that are transforming the shape of everyday life by digitizing more and more of it. Artists who sell their work as NFTs have been criticized for contributing to an ecologically destructive, toxically libertarian culture; artists who make work with AI have drawn fire for normalizing the technologies that enable corporate surveillance and predictive policing. The artists who take up these tools despite the problems associated with them aren't utopians. Rather, they see firsthand that new technologies are not monoliths but evolving systems, rife with flaws and potentials.

View post:

GANs and NFTs: AI Artists in the Crypto Space ARTnews.com - ARTnews

Hellobike unveils trifecta of innovative shared mobility AI technologies at WAIC2020 – Yahoo Finance

SHANGHAI, July 10, 2020 /PRNewswire/ -- Hellobike, China's two-wheel transport industry leader, has unveiled three revolutionary shared mobility AI technologies at the 2020 World Artificial Intelligence Conference (WAIC2020), taking place virtually between 9 and 11 July. In line with the conference theme, 'Intelligent Connectivity, Indivisible Community', Hellobike showcased its independent research and development into solutions that enable cities to create convenient, greener urban transportation ecosystems.

Hellobike's non-motorized vehicle safety management system

During its presentation on 10 July, Hellobike unveiled three innovative technologies that leverage AI, big data, cloud infrastructure and the IoT: the Hermes road safety system, non-motorized vehicle safety management system, and fixed-point return. Hellobike's participation in WAIC2020 follows its highly successful debut at the conference last year, where the company unveiled exciting AI projects including the Hello Brain smart transportation OS and the Argus visual interaction system.

Hellobike's new model A40

"We are honored to take part in WAIC2020 for the second year running. As the shared bike industry leader, WAIC2020 is the ultimate platform for us to demonstrate how we harness AI technology and work hand-in-hand with the state to build the city of the future," said Li Kaizhu, President of Hellobike.

Hellobike's latest technologies usher in the 3.0 era of China's bike-sharing industry: a new model that sees shared bicycles organically integrated into the urban public transportation ecosystem. Through strengthened cooperation between transport providers and municipal governments, the 3.0 era provides a systematic mechanism to help Chinese cities tackle unique operational challenges, address parking management, and streamline shared bike deployment and distribution.

Hellobike's Hermes road safety system integrates AI algorithms to provide users with a better, safer shared transport experience. Built as a scenario-based solution, Hermes automatically performs failsafe tests on both user behavior and the bike at the beginning, middle and end of their riding journey. If the system detects technical issues, dangerous operation or user violations, Hellobike delivers a risk warning to the user through the bike's built-in speaker.

Based on insights gathered from mining big data, Hellobike also found that the use of non-motorized vehicles can lead to chaotic, unsafe road conditions. To address this, Hellobike has partnered with local governments to develop non-motorized vehicle safety management systems tailored to each city's unique traffic conditions. Using video AI technology for data collection and situation analysis, as well as spatial data, Hellobike helps cities establish new vehicle management systems built upon data visualization, intelligent data processing and smart decision-making applications.


Furthermore, Hellobike has cooperated with city officials to promote improved traffic safety, simplified parking and enhanced city appearance through a shared bike management operation plan. Hellobike has established a number of convenient fixed-point return locations using electronic fencing, Bluetooth road studs, AI and the IoT. Fixed-point return encourages users to park at designated locations, while making it easier for staff to locate and redistribute vehicles across the city.
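
Hellobike hasn't detailed how the return check works, but the underlying idea reduces to a geofence test: is the bike's GPS fix inside a designated parking polygon? A minimal sketch with made-up coordinates, using the shapely library:

```python
# Illustrative sketch (not Hellobike's system): a fixed-point return check
# asks whether the bike's GPS fix falls inside a designated parking
# polygon -- the "electronic fence."
from shapely.geometry import Point, Polygon

# Hypothetical fence corners as (longitude, latitude) pairs.
parking_zone = Polygon([(121.4735, 31.2300), (121.4745, 31.2300),
                        (121.4745, 31.2310), (121.4735, 31.2310)])

def may_end_ride(lon, lat):
    """Allow lock-up only inside a designated return location."""
    return parking_zone.contains(Point(lon, lat))

print(may_end_ride(121.4740, 31.2305))  # True: inside the fence
print(may_end_ride(121.4800, 31.2305))  # False: the ride continues
```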

Hellobike President Li Kaizhu and Chief Scientist Liu Xingliang will also take part in WAIC2020's AI TALK and big data forum alongside entrepreneurs from leading local and global tech companies to discuss the applications of AI technology. In addition, Hellobike plans to host its first Technology Open Day on 31 July at its Shanghai headquarters, where users can tour the space, test new vehicles, and discover the technological innovations behind Hellobike.

About Hellobike

Hellobike has continuously built user-friendly and sustainable transport services in sectors such as shared bicycles, shared e-bikes and car-pooling. As a business leader in two-wheeled transport, Hellobike has provided more than 12 billion trips over the past three years, and it now operates in more than 360 Chinese cities.

Photo - https://photos.prnasia.com/prnh/20200710/2854753-1-a Photo - https://photos.prnasia.com/prnh/20200710/2854753-1-b

SOURCE Hellobike

See the rest here:

Hellobike unveils trifecta of innovative shared mobility AI technologies at WAIC2020 - Yahoo Finance

Google TV is getting parent-controlled watchlists and AI-powered suggestions for kids – TechCrunch

Google is bringing a set of new kids-focused features such as parent-controlled watchlists and AI-powered suggestions to Google TV, the latest in a series of efforts from the Android-maker as it attempts to broaden the offerings of its TV operating system for family consumption.

The company said it is adding these features to kids profiles to improve content recommendation and exploration. Parents can directly push titles to their kids' watchlists from their own profiles (by just tapping the watchlist button on titles they come across and pressing add), the company explained in a blog post.

Image Credits: Google

The company is also introducing AI-powered recommendations for kids because Google loves AI. Children will now see popular shows and movies on their Google TV home screen, selected based on installed apps and parent-defined rating levels. If they don't like a title that has been recommended to them and don't wish to see it again, they can press and hold the select button and then tap hide to remove the suggestion from the list.

Image Credits: Google

The new additions are part of Google's ongoing efforts to make its services more appropriate for kids. Google introduced supervised accounts for YouTube last year, which help children migrate from YouTube Kids to the main YouTube app in a safe manner.

Parents can additionally set restrictions on content exploration. Supervised accounts allow guardians to define three levels of access: Explore, for content suitable for viewers 9 and above; Explore more, for viewers 13 and above; and Most of YouTube, which enables access to all videos except age-restricted content.
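
Translated into code, the three tiers amount to a simple content filter. The sketch below is illustrative only; the rating thresholds and field names are assumptions, not Google's actual rules.

```python
# Illustrative sketch of the three supervision tiers described above;
# the threshold values are assumptions, not Google's implementation.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    min_age: int          # the youngest audience the video is rated for
    age_restricted: bool

LEVELS = {
    "explore": 9,             # content suitable for viewers 9 and above
    "explore_more": 13,       # viewers 13 and above
    "most_of_youtube": None,  # everything except age-restricted videos
}

def allowed(video: Video, level: str) -> bool:
    if video.age_restricted:
        return False  # excluded at every supervised level
    threshold = LEVELS[level]
    return threshold is None or video.min_age <= threshold

catalog = [Video("science show", 9, False), Video("teen drama", 13, False),
           Video("mature film", 18, True)]
print([v.title for v in catalog if allowed(v, "explore")])  # ['science show']
```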

Image Credits: YouTube

The search giant said it is also bringing this supervised experience to Google TV so kids can access the main YouTube app with appropriate content restrictions. Notably, when parents set up these supervised accounts, they provide consent for the collection and use of data from kids' profiles under COPPA, a U.S. privacy law that sets limits for websites providing services to children.

The company first introduced kids profiles on Google TV last year; these profiles allow parents to set limits on app access and screen time.

Google said these features are rolling out starting today on the Chromecast with Google TV (both 4K and HD variants) and on other Google TV devices from manufacturers like Hisense, Philips, Sony and TCL.

The rest is here:

Google TV is getting parent-controlled watchlists and AI-powered suggestions for kids - TechCrunch

What Does the Next Wave of AI Innovation Look Like? – TechDay News

Technology may shape the way people live, but sometimes the inverse is also true. In many ways, the issues of 2020 will drive the innovations of 2021, especially in AI. You can get a sense of what AI research and development will look like by considering where it needs to go from here.

AI expert Mark Gorenberg predicts that the pandemic will spur an AI revolution, just as the Great Recession did for big data. As the outbreak and subsequent recession reveal the world's shortcomings, AI will rise to fix them. The next generation of AI will be one that addresses the problems of today.

Medical Research

If the pandemic has highlighted one area of need, it's the health and medicine sector. Healthcare systems need better tools to be able to predict and respond to any outbreaks in the future. The predictive power of AI offers a solution.

Researchers are already using AI to find drugs that could potentially fight COVID-19. A National Science Foundation-funded supercomputer program is using machine learning to run simulations of how the virus interacts with different compounds. Using the results from these simulations, scientists could find potential vaccines to start testing.
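
At its core, this kind of screening is a ranking loop: score a large library of compounds against the virus, then pass the best candidates to the lab. A purely illustrative sketch, with a placeholder scoring function standing in for the actual simulations:

```python
# Purely illustrative virtual-screening loop (not the NSF program's code):
# score many compounds against a viral target and keep the most promising.
import random

def score_binding(compound: str) -> float:
    """Placeholder for an ML- or simulation-based binding score."""
    return random.random()

library = [f"compound_{i}" for i in range(10_000)]
scores = {c: score_binding(c) for c in library}

# Hand the best-scoring candidates to scientists for lab testing.
top_candidates = sorted(scores, key=scores.get, reverse=True)[:10]
print(top_candidates)
```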

As people recognize the value of systems like these, it will lead to further research and development. In the coming years, you'll see a broader emphasis on healthcare technologies in the AI industry. AI could help scientists predict and prevent outbreaks or treat them faster if they do occur.

Navigating New Data Regulations

As big data becomes more of a standard practice, data governance is a more pressing concern. People are becoming more aware of how companies gather their information, which will likely lead to more data regulations. In response, companies will turn to AI to ensure they don't violate any privacy laws with their data use.

AI solutions can help institutions balance convenience for their customers with privacy and security. With the help of AI, organizations like banks can manage data across multiple platforms, keeping customer information safe while still making it accessible. Handling these things manually could make it more challenging to stay within a growing body of guidelines.

Automating data management will become increasingly critical to businesses as both data and regulations grow. AI that can understand and follow restrictions like the GDPR will become a necessity.

National Security

The past couple of years have also brought new emphasis to the importance of cybersecurity. AI in cybersecurity is nothing new, and many organizations employ it already, but you don't see it on a national level. As cyber-risks have become more prominent, though, AI will play a more significant role in national security.

Government adoption of technology is typically slower than what you see in the private sector. AI has already established itself in the commercial world, so the logical next step is the government. For national security agencies to implement these technologies, though, AI will have to prove its reliability and security.

Governments in the U.S. and Europe experienced several cyberattacks from threat actors like North Korea this year. To combat these rising threats, agencies will have to turn to AI. As a result, cybersecurity AI will evolve rapidly over the next few years.

AI in the IoT

The convergence of separate technologies is a natural step in development. One of the most noteworthy convergences you'll see in the future is the marriage of AI and the IoT. As the IoT grows, so will AI functionality in these devices.

The IoT provides AI with the landscape necessary for wider adoption and implementation. AI technologies like self-driving cars will need to take advantage of edge computing, which requires the IoT. AI development in the coming years will shift toward IoT platforms.

AI-enabled IoT devices will also make smart cities a possibility. In the face of growing environmental and sociological concerns, that's a needed improvement. You can already see this trend starting to take place, and it will only increase from here.

An AI Revolution Is Coming Soon

The world stands on the cusp of a technological revolution. AI may not be a new technology, but it's still growing, changing, and driving innovation. In the next few years, AI will see much wider adoption and an unprecedented period of advancement.

Technological shifts typically follow significant cultural or societal events. The tumultuous period that has been 2020 will spur the next wave of AI technologies.

Original post:

What Does the Next Wave of AI Innovation Look Like? - TechDay News

Surprise: Samsung is building a Bixby-powered AI speaker – Engadget

Very little beyond the Vega name is known about the development -- according to the WSJ, its features and specs are yet to be decided, and there's no sign of a release date. To stand out in an already squeezed market, it'll certainly need to boast something impressive.

In a blog post from March, Samsung executive vice president Injong Lee laid out the company's aspirations for Bixby, and it seems likely Vega will be incorporated into these ambitious plans.

"Starting with our smartphones, Bixby will be gradually applied to all our appliances. In the future, you would be able to control your air conditioner or TV through Bixby. Since Bixby will be implemented in the cloud, as long as a device has an internet connection and simple circuitry to receive voice inputs, it will be able to connect with Bixby," Lee wrote.

However, Bixby has proven something of a headache for Samsung so far. Despite centering a lot of its marketing around the feature during the launch of the Galaxy S8, Samsung eventually released the phone without Bixby in the US -- it's only gradually being rolled out now, and is unlikely to be completed before the second half of July, according to WSJ sources.

If Samsung can get on top of its existing Bixby issues and offer something unique with its debut smart speaker, then Vega could be a hit. The market is crowded, but there is demand. The number of Americans using voice-activated speakers will reach about 36 million this year, according to eMarketer -- over double last year's figure.

Go here to read the rest:

Surprise: Samsung is building a Bixby-powered AI speaker - Engadget

Watch AI help basketball coaches outmaneuver the opposing team – Science Magazine

By Edd Gent, Sep. 27, 2019, 8:00 AM

When it comes to teaching basketball players how to execute a winning drive to the hoop, a tactic board can be a coach's best friend. But this top-down view of the court has a major limitation: It doesn't reveal how the opposing team will respond. A new program powered by artificial intelligence (AI) could change that.

Here's how the technology works. A coach sketches plays on a virtual tactic board on their computer, representing their own players as red dots and the defending team as blue dots. Once they drag their virtual players around to indicate movements and passes, an AI program trained with player movement data from the National Basketball Association converts these simplified sketches into a realistic simulation of how both offensive and defensive players would move during the play.

The underlying mechanism is a generative adversarial network, which pits two AI programs against each other. One takes sketches and tries to generate realistic player movements; the other provides feedback on how closely these match real-world data. Over time, this results in increasingly realistic plays.
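
The researchers' model isn't reproduced here, but the conditional setup they describe, a generator that turns a sketched play into full trajectories and a discriminator that scores those trajectories against real tracking data, can be outlined as follows (every dimension and layer size is illustrative):

```python
# Compact sketch of the conditional adversarial setup described above
# (illustrative placeholders, not the authors' code): the generator is
# conditioned on the coach's sketch and must output movements for all ten
# players; the discriminator scores (sketch, trajectory) pairs.
import torch
import torch.nn as nn

T = 50                      # timesteps per play
sketch_dim = 5 * 2 * T      # 5 offensive players, (x, y) per step
traj_dim = 10 * 2 * T       # all 10 players' generated movements

generator = nn.Sequential(nn.Linear(sketch_dim, 256), nn.ReLU(),
                          nn.Linear(256, traj_dim))
discriminator = nn.Sequential(nn.Linear(sketch_dim + traj_dim, 256),
                              nn.ReLU(), nn.Linear(256, 1))

sketch = torch.randn(1, sketch_dim)        # stand-in for a drawn play
fake_play = generator(sketch)
score = discriminator(torch.cat([sketch, fake_play], dim=1))
print(fake_play.shape, score.shape)        # play proposal + realism score
```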

The system could show coaches and players how defenders are likely to react to new moves and how they should, in turn, change their tactics, the researchers will report next month at the Association for Computing Machinery International Conference on Multimedia in Nice, France. Although basketball fans and nonfans couldn't reliably distinguish simulations from real plays, top-level players often could. That suggests the movements are still not entirely realistic, and the model still needs refinement.

Continue reading here:

Watch AI help basketball coaches outmaneuver the opposing team - Science Magazine

Entelo steps up its AI game with $20M Series C – TechCrunch

The race to crown a winner in the AI-powered recruiting software space is on. With both Workey and Mya nabbing rounds in the last few weeks, the timing is prime for a few players to seek advantage in the form of growth capital. This seems to be exactly what Entelo, a six-year-old player in the space, is doing. The company is announcing a $20 million Series C round of financing today, led by U.S. Venture Partners with Battery Ventures, Shasta Ventures and Correlation Ventures participating.

Entelo crawls the internet to automatically generate profiles of potential hiring prospects. The company then works to match prospects to its enterprise customers looking to identify and recruit top talent. Unlike LinkedIn, Entelo doesn't currently let individuals create their own accounts. Instead, all operations happen in the background, with the exception of opt-out controls that allow anyone to request their profile be deleted at any time.

Jon Bischke, CEO of Entelo, told me that his priority for the company is improving the matching process that occurs behind the scenes. Accomplishing this will require Entelo to both collect additional unstructured data from non-traditional sources like GitHub and implement additional machine learning capabilities to allow enterprises to quickly identify and target top candidates.

Entelo faces competition from both sides of the market: younger AI-first startups and legacy players like LinkedIn. For the time being, it behooves enterprises to take an everything-but-the-kitchen-sink approach to gain an edge in recruiting, but that mentality might not last forever.

Bischke believes his positioning best prepares Entelo to thrive when the dust settles. He told me in an interview that without adequate data, many AI-first HR startups will struggle. On the other side of the equation, LinkedIn has a lot of work ahead of it to remain innovative in the midst of a potentially distracting acquisition.

To date, Entelo has signed over 600 companies up for its platform. These businesses include Facebook, GE, Northrop Grumman, and Target. The company will be looking to hire additional data scientists and to add to its sales team moving forward.

Read more:

Entelo steps up its AI game with $20M Series C - TechCrunch

Using Bandwidth Management To Get In Front Of The AI And Automation Waves – Forbes

I believe the most critical leadership skill of this coming decade will be bandwidth management: the ability to purposefully and intentionally manage your time, energy and attention. Are we ready? My own research suggests that we have our work cut out for us.

A shocking 73% of leaders feel that their teams do not intentionally manage their bandwidth often enough. Less than 9% see their team members always or almost always actively managing their bandwidth. This is data I collected from 139 HR leaders and executives during a recent webinar delivered on behalf of the World Business and Executive Coach Summit. The results were troubling but also unsurprising and consistent with what I see daily.

Poor Bandwidth Management Impacts Individuals And Organizations

Since the digital boom in the late 1990s, we've been increasingly overconsuming and becoming consumed. We're consumed by information and uncertainty. We're also consumed by the pure fatigue of trying to keep up in our always-on, always-connected work world. In the process, we've stopped managing how and where we spend our time, energy and attention. This has grave consequences for individuals and organizations.

On an individual level, the impact of failing to manage bandwidth is far-reaching. It leaves individuals distracted, tired and struggling to keep up with daily tasks. But there are also longer-term, more complex consequences of failing to manage bandwidth that affect not only individuals but groups. Consider the following scenario.

You bring 12 people with polarized political views into a room, and you ask them to come to a consensus on a controversial issue. If they had a few weeks to develop relationships, hash out problems together and gain perspective, they might eventually discover their common ground. If they only have a few hours and no time to gain perspective, it is far more likely that these people will stand their ground. In other words, confirmation bias (a tendency to cling to information that reinforces our assumptions) will kick in. In the end, they won't only fail to collaborate but may also end up in an increasingly polarized situation.

When we fail to manage our bandwidth, we not only risk running ourselves into the ground, we risk compromising our ability to collaborate, weigh different opinions and see things from a new perspective. For leaders in business who must take multiple perspectives into account, this is a serious concern.

The Cost To Businesses Is High

To understand how low bandwidth impacts individuals and businesses, it is useful to look at the current business landscape.

While exact numbers vary, we know that automation and artificial intelligence (AI) are currently upending how we work and that there will be casualties. According to the World Economic Forum, up to 30% of existing jobs across Organization for Economic Cooperation and Development (OECD) nations may be lost to automation by the mid-2030s. We also know that some responses are likely going to be more effective than others. For example, re-skilling one's existing workforce offers a much higher return on investment than hiring new workers to fill the higher-skill jobs connected to automation and AI. One McKinsey study found that on average, replacing an employee can cost 20-30% of an annual salary, while re-skilling costs less than 10%.
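
To put those percentages in concrete terms, here is the arithmetic on an assumed $80,000 salary (the salary figure is for illustration; only the percentages come from the study):

```python
# Rough illustration of the re-skilling math cited above. The $80,000
# salary is an assumed figure for the example, not from the study.
salary = 80_000
replace_low, replace_high = 0.20 * salary, 0.30 * salary
reskill = 0.10 * salary
print(f"replace: ${replace_low:,.0f}-${replace_high:,.0f}")  # $16,000-$24,000
print(f"re-skill: under ${reskill:,.0f}")                    # under $8,000
```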

Of course, if you want to retrain rather than let go of thousands of employees, you need to have the foresight to do so before you reach a point of crisis. The problem is that all signs indicate that most leaders aren't ready for this change. With low bandwidth, they are struggling to manage day-to-day operations rather than get ahead of the automation and AI wave.

Effective bandwidth management has just a few key components, but understanding how bandwidth management works, and how to put it into practice, is critical.

The first step is to build awareness. Ask yourself: what are your energy, attention and focus levels?

Second, reflect on what enables you to be at your best and what's draining you. For example, heighten your awareness about the things that may be draining your energy, attention and focus. This step is all about auditing how you're working and what is and is not supporting your ability to focus on issues that truly matter.

Third, start building your agility. We're living and working in a new world. Tasks once carried out by humans are increasingly being carried out by machines. This even holds true for a growing number of high-level decision-making tasks. To survive, you have to hone the ability to adapt, respond and adjust (that is, you have to learn to work differently).

The final step entails taking action. Now that you know what makes you tick and drains your energy, start taking concrete steps to change how you work. For example, this may mean putting up filters, so you're being bombarded with less information daily, and proactively delegating more work to other members of your team.

The bottom line is simple. Leaders who want to proactively prepare for the significant disruptions the 2020s will bring across sectors don't need a crystal ball to predict the future. What they need is bandwidth to gain perspective and proactively prepare for these inevitable changes.

Read this article:

Using Bandwidth Management To Get In Front Of The AI And Automation Waves - Forbes

Google’s Deep Mind Explained! – Self Learning A.I. – YouTube

Subscribe here: https://goo.gl/9FS8uF
Become a Patreon!: https://www.patreon.com/ColdFusion_TV
Visual animal AI: https://www.youtube.com/watch?v=DgPaC...

Hi, welcome to ColdFusion (formerly known as ColdfusTion). Experience the cutting edge of the world around us in a fun, relaxed atmosphere.

Sources:

Why AlphaGo is NOT an "Expert System": https://googleblog.blogspot.com.au/20...

Inside DeepMind Nature video: https://www.youtube.com/watch?v=xN1d3...

AlphaGo and the future of Artificial Intelligence BBC Newsnight: https://www.youtube.com/watch?v=53YLZ...

http://www.nature.com/nature/journal/...

http://www.ft.com/cms/s/2/063c1176-d2...

http://www.nature.com/nature/journal/...

https://www.technologyreview.com/s/53...

https://medium.com/the-physics-arxiv-...

https://www.deepmind.com/

http://www.forbes.com/sites/privacynotice/2014/02/03/inside-googles-mysterious-ethics-board/#5dc388ee4674

https://medium.com/the-physics-arxiv-...

http://www.theverge.com/2016/3/10/111...

https://en.wikipedia.org/wiki/Demis_H...

https://en.wikipedia.org/wiki/Google_...

//Soundtrack//

Disclosure - You & Me (Ft. Eliza Doolittle) (Bicep Remix)

Stumbleine - Glacier

Sundra - Drifting in the Sea of Dreams (Chapter 2)

Dakent - Noon (Mindthings Rework)

Hnrk - fjarlg

Dr Meaker - Don't Think It's Love (Real Connoisseur Remix)

Sweetheart of Kairi - Last Summer Song (ft. CoMa)

Hiatus - Nimbus

KOAN Sound & Asa - This Time Around (feat. Koo)

Burn Water - Hide

Google + | http://www.google.com/+coldfustion

Facebook | https://www.facebook.com/ColdFusionTV

My music | t.guarva.com.au/BurnWater http://burnwater.bandcamp.com or http://www.soundcloud.com/burnwater https://www.patreon.com/ColdFusion_TV Collection of music used in videos: https://www.youtube.com/watch?v=YOrJJ...

Producer: Dagogo Altraide

Editing website: http://www.cfnstudios.com

Coldfusion Android Launcher: https://play.google.com/store/apps/de...

Twitter | @ColdFusion_TV

Follow this link:

Google's Deep Mind Explained! - Self Learning A.I. - YouTube

Army Tests New All Domain Kill Chain: From Space To AI – Breaking Defense

M270A1 Multiple Launch Rocket System (MLRS) firing at the Grafenwoehr training range in Germany.

UPDATED with Lt. Gen. Karbler remarks. WASHINGTON: A successful field test in Germany shows how satellite surveillance, artificial intelligence, and long-range artillery could combine to devastating effect in a future war, a senior Army official said this morning. The data from that test will feed into both computer models and future field experiments later this year, in the US as part of the ambitious Project Convergence exercise and in the Pacific.

"We have valuable data now on actually how the real operational system works," said Willie Nelson, de facto head of space efforts at Army Futures Command. "To be able to provide that data back to the warfighter near real time -- I know the word game-changer is overused, but frankly, that is a game-changer."

Willie Nelson addresses a Defense News webcast

"This is not just science experiments," he told a Defense News webcast following yesterday's virtual Space & Missile Defense conference. "We actually used the fielded equipment" -- M777 towed howitzers and M270 MLRS rocket/missile launchers -- "with live fires on the range in Germany." The upcoming tests will add Extended Range Cannon Artillery (ERCA) and Grey Eagle drones.

"Eventually we want this stuff on every ground platform and even down to the soldier," he said, but those capabilities are a couple of years away. (While Nelson didn't mention it, the new IVAS targeting goggles about to enter service will eventually link their wearers to an AI target recognition system.)

Connecting an ever-wider network of different sensors to shooters, with AI accelerating the data flow over land, sea, air, space, and cyberspace, is central to the Pentagon's evolving concept of Joint All-Domain Operations.

"In this year's tests, we're doing that entire fires kill chain from initial detection to final destruction," Nelson said. "We're able to receive that [satellite] data in theater, process that data, be able to develop targeting coordinates from that, put it directly into the [artillery] firing system, AFATDS, and be able to launch weapons on target. And we're doing that now very successfully in a very short time."

Nelson isn't given to overstatement. In fact, he's among the most reserved of Futures Command's eight Cross Functional Team directors and, not coincidentally, the only civilian among them. Nominally, his CFT handles Assured Precision, Navigation, & Timing (APNT), in layman's terms, alternatives to GPS if it's jammed, but its mandate has expanded to include the Army's use of satellites. Nelson's team has been working closely with the Army's Network CFT, ISR Task Force, and AI Task Force on this technology.

Lockheed's prototype Precision Strike Missile (PrSM) fires from an Army HIMARS launcher truck in its first flight test, December 2019.

Satellites, AI & Humans

Today, artillery units rely on scout helicopters, drones, and forward observers on the ground to spot targets. But to counter Russian and Chinese long-range missiles, the Army is developing a family of long-range weapons including hypersonics with a thousand-plus-mile range that can hit targets much farther away than any earthbound asset can see, while recon aircraft are prime targets for ever-more-sophisticated surface-to-air weapons.

"You can't assume air superiority, at least very early on, and so quite frankly the only avenue we have is to sense from space," Nelson said. "The good news is, our access to space has greatly improved over the years, through a lot of innovation through commercial capabilities, primarily."

A growing variety of government and commercial satellites in Low Earth Orbit, Nelson said, can provide data of multiple types. The Army has said it doesn't need to build its own satellites for intelligence, surveillance, and reconnaissance.

UPDATE: With the standup of the Space Force, "the Army should not be in the business of its own launch services [for] satellites," Lt. Gen. Daniel Karbler, who heads Army Space & Missile Defense Command, told reporters this afternoon. "Army space starts and ends on the ground."

While the Army has experimented with small satellites in the past, Karbler and his staff today made clear the service isn't interested in building its own constellations. (That doesn't rule out Army payloads hosted on satellites built and launched by others, however.) "We identify the requirements that we need... and we hand those requirements to the space enterprise," he said. "That could be DoD, that could be commercial, that could be other agencies." UPDATE ENDS

Those rapidly expanding constellations of non-Army satellites provide a wide array of data, from straightforward photographic imagery, to triangulated sources of radio emissions, to Synthetic Aperture Radar (SAR). It's SAR that Nelson is most excited about, not only because it can penetrate cloud cover and the dark of night, but because commercial R&D has improved the tech to where it can fit aboard a small LEO satellite.

Historically, each of these different data sources would go to a different human analyst. It would take multiple specialists in different fields to piece together, for example, how a blurry photo, an indecipherable radio transmission, and a ghostly radar image, each inconclusive on its own, could together pinpoint a potential target. Then a different set of specialists would take the list of targets and match it against the commander's priorities and the available weapons.

"In future conflicts, we don't have time for that," Nelson said. That's where the AI comes in. It can radically reduce the timeline by automating the grunt work, with software replacing human analysts and planners, but not, Nelson emphasized, human decision-makers.

"There need to be humans in the loop when we're talking about lethal power," he said. "What we want to do is let machines do what machines do well[:] dull work, fast."

Nelson's outline of how this kill chain works, translated into layman's terms, runs from satellite detection, to processing that data in theater, to AI-generated targeting coordinates fed into the AFATDS firing system, to launching weapons on target.
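
As a purely conceptual illustration, with every function a placeholder and nothing drawn from an actual Army system, the division of labor Nelson describes, machines fusing and ranking while humans decide, might be structured like this:

```python
# Conceptual sketch of an AI-accelerated kill chain with a human decision
# gate. Every function and value below is a placeholder for illustration.

def fuse_sensor_data(imagery, rf_hits, sar_returns):
    """Machine step: correlate multi-source detections into candidates."""
    # Stand-in: in reality, AI fuses photo, RF and radar detections.
    return [{"coords": (49.70, 11.88), "confidence": 0.92}]

def rank_targets(candidates):
    """Machine step: order candidates by confidence (and, in reality,
    the commander's priorities)."""
    return sorted(candidates, key=lambda c: -c["confidence"])

def human_approves(target):
    """Human step: a person, not the software, authorizes lethal force."""
    answer = input(f"Engage target at {target['coords']}? [y/N] ")
    return answer.strip().lower() == "y"

def send_to_fires_system(target):
    """Machine step: hand approved coordinates to the firing system."""
    print("Coordinates passed to fire control:", target["coords"])

candidates = fuse_sensor_data(imagery=[], rf_hits=[], sar_returns=[])
for target in rank_targets(candidates):
    if human_approves(target):          # the loop never skips this gate
        send_to_fires_system(target)
```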

Who are the humans who must ensure this AI-accelerated cycle doesn't spin out of control? The Army is figuring that out at the same time it field-tests the technology.

"It's a strategic weapon," said Lt. Gen. Neil Thurgood, who is developing the Army's Long-Range Hypersonic Weapon. "It's not long-range artillery."

Lt. Gen. Neil Thurgood

The last time the Army had something comparable, it was nuclear-tipped: all the US military hypersonics now in development will be precision weapons with conventional warheads. So the Army can't reprint old doctrine from the Cold War era, any more than it can use existing field artillery manuals for hypersonics.

The Army is exploring new kinds of organization, such as a Theater Fires Command. Lt. Gen. Charles Flynn, the Army's deputy chief of staff for operations, and Maj. Gen. Kenneth Kamper, head of the artillery school at Fort Sill, Okla., are leading an effort to develop new doctrine, Thurgood said. They're collaborating closely with the all-service Strategic Command, which controls Air Force strategic bombers, ICBMs, and Navy nuclear missile submarines, and which will likely have a role in Army hypersonics too.

"Think of hypersonics as a strategic weapon," Thurgood told yesterday's SMD conference. "It literally is bringing the Army back into the days where we had weapons systems like Pershing, where we had strategic weapons that were part of the combatant commands' war plans, and those were held at the STRATCOM level."

"The mission planning for the hypersonic weapon is at that level," he said. "It is not happening at the battery level."

"It is not a traditional field artillery mission," Thurgood said at this morning's follow-up webinar. "This is a detailed, preplanned set of events that will happen at the STRATCOM level all the way down to the battery level."

See the original post here:

Army Tests New All Domain Kill Chain: From Space To AI - Breaking Defense

This Artificial Intelligence Stock Raised Its Dividend on "Black Thursday" – Motley Fool

As many now know, last Thursday was an historic day in the stock market. On March 13, 2020, the S&P 500 plunged 9.5% in a single day, the worst daily drop since "Black Monday" in 1987. The plunge came the day after President Trump delivered an underwhelming speech that included a European travel ban. However, stocks rallied on Friday after news of more government stimulus, emergency measures to boost testing, and the purchasing of oil for the country's strategic reserve. Negotiations for a comprehensive support package for the economy are also ongoing.

However, one tech company was tuning out the noise. Semiconductor equipment maker Applied Materials (NASDAQ:AMAT) decided to announce an increase in its dividend on the exact same day the market went into freefall. Is that a sign of confidence, or foolishness?

Image source: Getty Images.

Applied Materials announced that it would raise its quarterly dividend by a penny, from $0.21 to $0.22, a 4.8% boost. Applied's dividend yield is now 1.86%, but that's with a very modest 27.5% payout ratio. The higher dividend will be paid out on June 11, to shareholders of record as of May 21. CEO Gary Dickerson said: "We are increasing the dividend based on our strong cash flow performance and ongoing commitment to return capital to shareholders. ... We believe the AI-Big Data era will create exciting long-term growth opportunities for Applied Materials."
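
For readers checking the math, the quoted figures are mutually consistent; the implied share price below is derived from the yield, not stated by the company:

```python
# Quick check of the figures quoted above: the raise, the implied annual
# payout, and the share price implied by the 1.86% yield.
old_q, new_q = 0.21, 0.22
raise_pct = (new_q / old_q - 1) * 100
annual_dividend = new_q * 4
implied_price = annual_dividend / 0.0186

print(f"raise: {raise_pct:.1f}%")                    # 4.8%
print(f"annual dividend: ${annual_dividend:.2f}")    # $0.88
print(f"implied share price: ${implied_price:.2f}")  # about $47.31
```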

Semiconductors and semiconductor equipment companies have historically been known to be cyclical parts of the tech industry. However, it appears Applied Materials believes the overarching trends for faster and smarter semiconductors should help the company power through a near-term economic disruption. As chip-makers make smaller and more advanced chips, Applied's machines are a necessary expenditure.

But can the long-term trends buffer the company in times of a potential global recession?

It should be noted that the semiconductor industry was already in a downturn in 2019 and was beginning to come out of it in early 2020. For Applied, last quarter's results exceeded the high end of its previous guidance, with revenue up 11% and earnings per share up 21%. On Feb. 12, management also guided for solid sequential growth in Q2, even while lowering its prior numbers by $300 million because of the coronavirus as of that date.

On a Feb. 12 conference call with analysts, Dickerson reiterated that optimism:

We believe we can deliver strong double-digit growth in our semiconductor business this year as our unique solutions accelerate our customers' success in the AI-Big Data era... our current assessment is that the overall impact for fiscal 2020 will be minimal. However, with travel and logistics restrictions, we do expect changes in the timing of revenues during the year. We are actively managing the situation in collaboration with our customers and suppliers.

While many businesses across the world have seen severe interruptions, it's unclear if the chip industry will be affected as much as others, despite its reputation for cyclicality. While consumer-related electronics may take a temporary hit to demand, a more stay-at-home economy means the need for faster connections, which could actually increase demand for servers and base stations.

Memory chip research website DRAMeXchange released a report on March 13 outlining its projections for the DRAM and NAND flash industries as of March 1, along with a "bear case" scenario, updated March 12, should the coronavirus crisis escalate into a global recession.

Category                      Current 2020 Projections   Bear Case 2020 Projections
Notebook computer shipments   (2.6%)                     (9%)
Server shipments              5.1%                       3.1%
Smartphone shipments          (3.5%)                     (7.5%)
DRAM price growth             30%                        20%
NAND flash price growth       15%                        (5%)

Data source: DRAMeXchange.

Notice that the enterprise-facing server industry looks poised to withstand a potential severe downturn much better than the consumer-facing notebook or smartphone industries. In addition, DRAM prices are poised to increase in 2020 even in a recession, as prices had already crashed last year and the industry cut back on capacity. NAND flash had an earlier downturn than DRAM and was already beginning to come out of it, so its pricing has more potential downside in a bear case.

In addition, the largest global foundry, Taiwan Semiconductor (NYSE:TSM), said on March 11 that its capacity for leading-edge 5nm chip production was already "fully booked" and that volume production would begin in April. That indicates continued strong demand for leading-edge logic chips.

So while there may be some more softness in certain parts of the chip industry, there are still relatively strong segments as well. Therefore, Applied may not face revenue declines in 2020, but rather a mere absence of previously forecast growth. Yet even if that happens, growth will likely be deferred to 2021, not totally lost, as eventually the demand for chips will increase.

After its decline, Applied Materials stock trades at just 17 times trailing earnings and 14.7 times projected 2020 earnings, though 2020 projections may come down. Still, that's a reasonable price to pay for Applied, especially in a zero-interest-rate environment. The company has just as much cash as debt, and its recent dividend raise on the market's darkest day in recent history signals long-term confidence. Risk-tolerant investors with a long enough time horizon may thus want to give Applied, and the entire chip sector, a look after the dust settles.
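For readers who want to check those multiples, a trailing P/E divides the share price by the last twelve months of earnings per share, while a forward P/E uses projected EPS instead. A minimal sketch in Python, with illustrative numbers chosen only to land near the article's multiples, not Applied's actual figures:

```python
def price_to_earnings(price: float, eps: float) -> float:
    """Price-to-earnings multiple: what investors pay per dollar of earnings."""
    return price / eps

# Illustrative numbers only, not Applied Materials' actual figures.
price = 41.00        # hypothetical share price after the decline
trailing_eps = 2.41  # hypothetical EPS over the last twelve months
forward_eps = 2.79   # hypothetical projected 2020 EPS

print(f"Trailing P/E: {price_to_earnings(price, trailing_eps):.1f}")  # ~17.0
print(f"Forward P/E:  {price_to_earnings(price, forward_eps):.1f}")   # ~14.7
```

The gap between the two multiples is simply the earnings growth the market expects; if 2020 projections come down, the forward multiple rises toward the trailing one.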

Read more:

This Artificial Intelligence Stock Raised Its Dividend on "Black Thursday" - Motley Fool

AI Platform | Microsoft Azure

Sign up for the AI newsletter to follow the latest Microsoft AI news, features, events, and community activities surrounding Cognitive Services, Bot Framework, Machine Learning, and Cognitive Toolkit.


Go here to read the rest:

AI Platform | Microsoft Azure

Researchers find AI is bad at predicting GPA, grit, eviction, job training, layoffs, and material hardship – VentureBeat

A paper coauthored by 112 researchers across 160 data and social science teams found that AI and statistical models, when used to predict six life outcomes for children, parents, and households, weren't very accurate even when trained on 13,000 data points from over 4,000 families. The authors assert that the work is a cautionary tale about the use of predictive modeling, especially in the criminal justice system and social support programs.

"Here's a setting where we have hundreds of participants and a rich data set, and even the best AI results are still not accurate," said study co-lead author Matt Salganik, a professor of sociology at Princeton and interim director of the Center for Information Technology Policy at the Woodrow Wilson School of Public and International Affairs. "These results show us that machine learning isn't magic; there are clearly other factors at play when it comes to predicting the life course."

The study, which was published this week in the journal Proceedings of the National Academy of Sciences, is the fruit of the Fragile Families Challenge, a multi-year collaboration that recruited researchers to predict the same outcomes using the same data. In all, 457 groups applied, of which 160 were selected to participate, and their predictions were evaluated with an error metric that assessed their ability to predict held-out data (i.e., data held by the organizer and not available to the participants).

The Challenge was an outgrowth of the Fragile Families Study (formerly the Fragile Families and Child Wellbeing Study), based at Princeton, Columbia University, and the University of Michigan, which has been studying a cohort of about 5,000 children born in 20 large American cities between 1998 and 2000. It is designed to oversample births to unmarried couples in those cities and to address four questions of interest to researchers and policymakers.

"When we began, I really didn't know what a mass collaboration was, but I knew it would be a good idea to introduce our data to a new group of researchers: data scientists," said Sara McLanahan, the William S. Tod Professor of Sociology and Public Affairs at Princeton. "The results were eye-opening."

The Fragile Families Study data set consists of modules, each of which is made up of roughly 10 sections, where each section includes questions about a topic asked of the children's parents, caregivers, teachers, and the children themselves. For example, a mother who recently gave birth might be asked about relationships with extended kin, government programs, and marriage attitudes, while a 9-year-old child might be asked about parental supervision, sibling relationships, and school. In addition to the surveys, the corpus contains the results of in-home assessments, including psychometric testing, biometric measurements, and observations of neighborhoods and homes.

The goal of the Challenge was to predict the social outcomes of children at age 15, a wave of the study that encompasses 1,617 variables. From those variables, six outcomes were selected as the focus: GPA, grit, eviction, job training, layoffs, and material hardship.

Contributing researchers were provided anonymized background data from 4,242 families and 12,942 variables about each family, as well as training data incorporating the six outcomes for half of the families. Once the Challenge was completed, all 160 submissions were scored using the holdout data.

In the end, even the best of the over 3,000 models submitted, which often used complex AI methods and had access to thousands of predictor variables, weren't spot on. In fact, they were only marginally better than linear regression and logistic regression, which don't rely on any form of machine learning.

"Either luck plays a major role in people's lives, or our theories as social scientists are missing some important variable," added McLanahan. "It's too early at this point to know for sure."

Measured by the coefficient of determination, which captures how well the best models' predictions track the ground-truth data, the score for material hardship (i.e., whether 15-year-old children's parents suffered financial issues) was 0.23, meaning roughly 23% of the variation was explained. GPA predictions scored 0.19 (19%), while grit, eviction, job training, and layoffs ranged from 0.06 (6%) down to 0.03 (3%).
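For context, the coefficient of determination (R²) equals 1.0 for perfect predictions and 0.0 for a model that does no better than always predicting the average outcome. A minimal sketch, with made-up numbers rather than Fragile Families data:

```python
import numpy as np

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """R^2 = 1 - (residual sum of squares / total sum of squares)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Made-up GPA values for illustration; not actual study data.
actual    = np.array([2.8, 3.4, 2.1, 3.9, 3.0])
predicted = np.array([3.0, 3.1, 2.6, 3.3, 3.1])

print(f"R^2: {r_squared(actual, predicted):.2f}")
```

Seen through this lens, a score of 0.03 means the model explained almost none of the variation it was asked to predict.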

"The results raise questions about the relative performance of complex machine-learning models compared with simple benchmark models. In the Challenge, the simple benchmark model with only a few predictors was only slightly worse than the most accurate submission, and it actually outperformed many of the submissions," concluded the study's coauthors. "Therefore, before using complex predictive models, we recommend that policymakers determine whether the achievable level of predictive accuracy is appropriate for the setting where the predictions will be used, whether complex models are more accurate than simple models or domain experts in their setting, and whether possible improvement in predictive performance is worth the additional costs to create, test, and understand the more complex model."

The research team is currently applying for grants to continue studies in this area, and they've also published 12 of the teams' results in a special issue of Socius, a new open-access journal from the American Sociological Association. To support additional research, all the submissions to the Challenge, including the code, predictions, and narrative explanations, will be made publicly available.

The Challenge isn't the first to expose the predictive shortcomings of AI and machine learning models. The Partnership on AI, a nonprofit coalition committed to the responsible use of AI, concluded in its first-ever report last year that algorithms are unfit to automate the pre-trial bail process or to label some people as high-risk and detain them. Algorithms used to inform judges' decisions have been shown to produce racially unfair results, such as being more likely to label African-American inmates as at risk of recidivism.

It's well understood that AI has a bias problem. For instance, word embedding, a common training technique that maps words to vectors, unavoidably picks up, and at worst amplifies, prejudices implicit in source text and dialogue. A recent study by the National Institute of Standards and Technology (NIST) found that many facial recognition systems misidentify people of color more often than Caucasian faces. And Amazon's internal recruitment tool, which was trained on resumes submitted over a 10-year period, was reportedly scrapped because it showed bias against women.
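To make the word-embedding point concrete, such associations are often measured with cosine similarity between word vectors. The three-dimensional vectors below are invented purely for illustration; real embeddings have hundreds of dimensions learned from large text corpora:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented toy vectors for illustration; real embeddings are learned from text.
emb = {
    "doctor": np.array([0.9, -0.2, 0.3]),
    "nurse":  np.array([0.7, 0.6, 0.2]),
    "man":    np.array([0.3, -0.6, 0.2]),
    "woman":  np.array([0.3, 0.7, 0.2]),
}

# A positive gap means the occupation sits closer to "man" in this toy space,
# mirroring how real embeddings absorb associations from their training text.
for job in ("doctor", "nurse"):
    gap = cosine(emb[job], emb["man"]) - cosine(emb[job], emb["woman"])
    print(f"{job}: {gap:+.2f}")
```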

A number of solutions have been proposed, from algorithmic tools to services that detect bias by crowdsourcing large training data sets.

In June 2019, working with experts in AI fairness, Microsoft revised and expanded the data sets it uses to train Face API, a Microsoft Azure API that provides algorithms for detecting, recognizing, and analyzing human faces in images. Last May, Facebook announced Fairness Flow, which automatically sends a warning if an algorithm is making an unfair judgment about a person based on their race, gender, or age. Google recently released the What-If Tool, a bias-detecting feature of the TensorBoard web dashboard for its TensorFlow machine learning framework. Not to be outdone, IBM last fall released AI Fairness 360, a cloud-based, fully automated suite that "continually provides [insights]" into how AI systems make their decisions and recommends adjustments, such as algorithmic tweaks or counterbalancing data, that might lessen the impact of prejudice.

The rest is here:

Researchers find AI is bad at predicting GPA, grit, eviction, job training, layoffs, and material hardship - VentureBeat

This AI can discover the hidden links between great works of art – ZDNet

When MIT CSAIL PhD student Mark Hamilton saw the "Rembrandt and Velázquez" exhibit at Amsterdam's Rijksmuseum last year, he was surprised to see that works of art with no connection on paper can look eerily similar in reality.

The show's curators had paired Francisco de Zurbarán's The Martyrdom of Saint Serapion, a 17th-century Spanish religious painting, with Jan Asselijn's The Threatened Swan, a Dutch canvas of a similar age. While the artists never met during their lives, the two works bear a clear visual resemblance.

The researchers were inspired by an unlikely yet similar pairing: Francisco de Zurbarán's The Martyrdom of Saint Serapion (left) and Jan Asselijn's The Threatened Swan (right).

It got Hamilton thinking about what other hidden links could be uncovered in the history of art. The researcher and his team, in partnership with Microsoft, have now unveiled a new algorithm that takes image retrieval technology a step further, running through millions of paintings across thousands of years to find unexpected parallels in themes, motifs, and visual styles.

Dubbed "MosAIc", the system is currently running on the databases of works from the Metropolitan Museum of Art and the Rijksmuseum. From a single image, the tool can uncover connections in whatever culture or media the user is interested in, and quickly reach a number of closest possible works that match the original query.

MosAIc, for instance, was presented with the Dutch Double Face Banyan, an anonymous item of clothing from the late 18th century, and found similarities with a Chinese ceramic figurine. The connection can be traced to the flow of porcelain and iconography from Chinese to Dutch markets between the 16th and 20th centuries.


To develop MosAIc, the research team used an image retrieval system and the well-known "k-nearest neighbors" (KNN) algorithm, which is widely used to find objects based on similarity, for example in product recommendation.

Typically, however, image retrieval systems built on the KNN algorithm have some limitations. The scope of a query is effectively limited: in the case of paintings, users could only ask for similar artwork from a specific artist. Or they could run so-called "unconditional" queries and gradually filter their way through the results until they got an accurate answer, a process that is costly and time-consuming.

Hamilton and his team instead created a conditional image retrieval (CIR) system, which delegates the filtering to the algorithm. The researchers still used the KNN algorithm but enabled it to apply "conditions," such as texture, content, color, or pose, while the program is running, until it reaches the closest match for the original query.

The process is called a conditional KNN tree: the algorithm groups similar images together in a tree-like structure, and starting from the trunk, applies new filters as it climbs up, following the most promising branch until it finds the most accurate image.
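The team's code isn't reproduced here, but the core idea of conditioning a nearest-neighbor search can be sketched in a few lines of Python. This brute-force version captures only the filtering semantics; the actual system organizes images into the tree structure described above for speed, and the embeddings and metadata below are random stand-ins:

```python
import numpy as np

def conditional_knn(query, features, metadata, condition, k=3):
    """Nearest-neighbor search restricted to items satisfying a condition,
    the basic idea behind conditional image retrieval (CIR)."""
    keep = np.array([condition(m) for m in metadata])
    candidates = np.flatnonzero(keep)
    if candidates.size == 0:
        return candidates
    dists = np.linalg.norm(features[candidates] - query, axis=1)
    return candidates[np.argsort(dists)[:k]]

# Stand-in data: random "embeddings" plus simple catalog metadata.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))            # one vector per artwork
metadata = [{"culture": rng.choice(["Dutch", "Chinese", "Spanish"])}
            for _ in range(1000)]
query = rng.normal(size=128)                       # embedding of the query image

# Ask only for the closest Chinese works, mirroring the Banyan example above.
hits = conditional_knn(query, features, metadata,
                       lambda m: m["culture"] == "Chinese")
print(hits)
```

Swapping the lambda swaps the condition, which is exactly the flexibility the unconditional queries described above lack.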


Hamilton said: "Restricting an image retrieval system to particular subsets of images can yield new insights into relationships in the visual world. We aim to encourage a new level of engagement with creative artifacts."

While recognizing that the technology does not break speed records, the team of researchers said that CIR can improve result diversity in a simple and efficient way.

And the new technology is not limited to artwork queries. Hamilton and his colleagues anticipate a number of applications for the algorithm, including using MosAIc to better study deepfakes, particularly where deepfakes most struggle to model reality.

As the algorithm works its way to the top of the tree to find the image that best matches a real picture, it leaves behind on its branches the pictures that it believes fail to represent the original input.

By going back to those branches, the researchers could visualize which images are deepfakes, as well as which conditions, or filters, convinced the algorithm to leave them behind, typically because the deepfake failed to accurately represent a certain element of reality, like a microphone or a hat.

Although sometimes invisible to the human eye, those "blind spots" are what distinguish a sophisticated deepfake from a genuine image.

Hamilton hopes that MosAIc will be used in many other fields ranging from social science to medicine. "These fields are rich with information that has never been processed with these techniques and can be a source for great inspiration for both computer scientists and domain experts," he said.


The rest is here:

This AI can discover the hidden links between great works of art - ZDNet

Ai-Da, the First Robot Artist To Exhibit Herself – Entrepreneur

A critic of modern life and technology, Ai-Da can draw thanks to artificial intelligence.


February 15, 2021 | 2 min read

Ai-Da, a humanoid artificial intelligence robot, will exhibit a series of self-portraits that she created by "looking" into a mirror with the cameras in her eyes. Sounds strange? A little, so here is how it works and how the idea came about.

The robot was named Ai-Da after the 19th-century mathematician Ada Lovelace. According to its creators, it is capable of drawing real people using its camera eyes and a pencil in its hand.

She "looks" into a mirror integrated with her camera eyes, and algorithms transform what she sees into coordinates. The robot's artistic hand then calculates a virtual route and interprets the coordinates to create the artwork.
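Ai-Da's creators haven't published her algorithms, so the following is only a toy illustration of the described pipeline, turning an image into target coordinates and then into a continuous pen route; every detail here is an assumption made for illustration:

```python
import numpy as np

def drawing_path(image, threshold=0.5, max_points=200):
    """Toy version of the described pipeline: dark pixels become target
    coordinates, then a greedy nearest-neighbor pass orders them into a
    continuous pen route. Purely illustrative, not Ai-Da's actual method."""
    ys, xs = np.nonzero(image < threshold)     # dark pixels as (x, y) targets
    pts = np.stack([xs, ys], axis=1).astype(float)[:max_points]
    if len(pts) == 0:
        return pts
    route = [0]
    remaining = set(range(1, len(pts)))
    while remaining:
        last = pts[route[-1]]
        nxt = min(remaining, key=lambda i: np.sum((pts[i] - last) ** 2))
        route.append(nxt)
        remaining.remove(nxt)
    return pts[route]

# A tiny fake "camera frame": a dark diagonal stroke on a light background.
frame = np.ones((16, 16))
np.fill_diagonal(frame, 0.0)
print(drawing_path(frame)[:5])   # first few pen coordinates
```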

The idea for Ai-Da came from Oxford art gallery owner Aidan Meller and art curator Lucy Seal.

Seal commented that the self-portraits are meant to be a critique of our current reliance on data-driven technology.

In an interview with The Sunday Times, she said that we live in a culture of selfies but are giving our data to the tech giants, who use it to predict our behavior. Through technology, we outsource our own decisions.

"The work invites us to think about artificial intelligence, technological uses and abuses in today's world."

Her work will be exhibited at the Design Museum in London between May and June, if public health conditions permit. This would be her second exhibition: in 2019, the robot was presented in a show exploring the boundaries between artificial intelligence, technology, and organic life in drawing, painting, sculpture, and video art.

Continue reading here:

Ai-Da, the First Robot Artist To Exhibit Herself - Entrepreneur

Adobe’s New AI Tool Can Recommend Different Headlines and Images To The Varying Audience Of A Blog – Digital Information World

Continuing its legacy of innovation, Adobe has introduced a new way to personalize a blog post for different readers with the help of artificial intelligence.

Known as Adobe Sensei, the technology recommends different headlines, images (drawn from the Adobe Stock library), and preview blurbs tailored to the targeted audience.

The new tool came out of the Adobe Sneaks program, which employees use to develop new ideas into working demos that are showcased at the annual Adobe Summit. While many people consider Sneaks to be mere demos, Adobe Experience Cloud Senior Director Steve Hammond disagrees, saying that almost 60% of Sneaks turn into real products after the Summit. Furthermore, Hyman Chung, a senior product manager for Adobe Experience Cloud, says Sneaks can be especially useful for content creators and marketers who are already seeing a spike in traffic during the coronavirus pandemic and may be looking for ways to boost reader engagement with less work.

Chung demonstrated Experience Cloud with a test blog for a tourism company. In the demo, one blog post about traveling to Australia was presented differently to thrill-seekers, frugal travelers, partygoers, and others. The feature also lets writers and editors adjust the preview for the desired audience and review a Snippet Quality Score for what Sensei recommends.
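Adobe hasn't published Sensei's internals, but the selection step the demo implies, scoring each variant for each audience segment and surfacing the best one, can be sketched as follows; the keyword scorer below is an invented stand-in for a trained engagement model:

```python
# Hypothetical sketch of per-audience variant selection; the names and the
# scorer are invented, since Adobe has not published Sensei's internals.

def score_variant(headline: str, interests: list[str]) -> int:
    """Stand-in scorer: keyword overlap. A real system would use a trained
    engagement model (something like Sensei's Snippet Quality Score)."""
    text = headline.lower()
    return sum(kw in text for kw in interests)

def pick_variant(variants: list[str], interests: list[str]) -> str:
    """Return the headline predicted to engage this audience segment best."""
    return max(variants, key=lambda v: score_variant(v, interests))

headlines = [
    "10 Budget-Friendly Ways to See Australia",
    "Australia's Wildest Adrenaline Adventures",
]

print(pick_variant(headlines, ["budget", "cheap"]))       # frugal travelers
print(pick_variant(headlines, ["adrenaline", "thrill"]))  # thrill-seekers
```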

Hammond also explained that the demo illustrates Adobe's broader approach to AI: the company focuses on delivering automation for specific use cases rather than building bigger platforms. In Sensei's case, the AI does not change the content itself, only how it is promoted on the site.

On privacy, Hammond made clear that the audience personas are built only on information users choose to share with the website or brand.

Read next: This New AI-Based Algorithm Created By Microsoft Helps To Restore Old Photos

View post:

Adobe's New AI Tool Can Recommend Different Headlines and Images To The Varying Audience Of A Blog - Digital Information World