Stanford University finds that AI is outpacing Moore's Law – ComputerWeekly.com

Stanford University's AI Index 2019 annual report has found that growth in artificial intelligence (AI) computational power is outpacing Moore's Law.

Moore's Law describes how the number of transistors on a chip, and with it processing power, doubles roughly every 18 months to two years, which means application developers can expect a doubling of application performance for the same hardware cost.

But the Stanford report, produced in partnership with McKinsey & Company, Google, PwC, OpenAI, Genpact and AI21 Labs, found that AI computational power is accelerating faster than traditional processor development. "Prior to 2012, AI results closely tracked Moore's Law, with compute doubling every two years," the report said. "Post-2012, compute has been doubling every 3.4 months."
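To put those rates side by side, a short back-of-the-envelope calculation shows how quickly the gap widens; the doubling periods are the ones cited in the report, while the two-year comparison window is an arbitrary example.

```python
# Illustrative comparison of compute growth under the two doubling periods
# cited in the report (this calculation is not from the report itself).
moores_law_doubling_months = 24    # Moore's Law pace: roughly two years
ai_compute_doubling_months = 3.4   # post-2012 AI compute pace

window_months = 24  # compare growth over an example two-year window

moores_growth = 2 ** (window_months / moores_law_doubling_months)
ai_growth = 2 ** (window_months / ai_compute_doubling_months)

print(f"Moore's Law pace:  ~{moores_growth:.0f}x in {window_months} months")
print(f"Post-2012 AI pace: ~{ai_growth:.0f}x in {window_months} months")
# Roughly 2x versus roughly 130x over the same two years.
```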

The study looked at how AI algorithms have improved over time by tracking progress on the ImageNet image classification benchmark. Given that image classification methods are largely based on supervised machine learning techniques, the report's authors looked at how long it takes to train an AI model and at the associated costs, which they said represents a measurement of the maturity of AI development infrastructure, reflecting advances in software and hardware.
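For context, the workload being timed is a conventional supervised training loop over labelled images. The sketch below is illustrative only, assuming PyTorch and torchvision are available and using a hypothetical dataset path; it is not the benchmark code the report measured.

```python
# Minimal supervised image-classification training sketch (illustrative).
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# ImageFolder expects class-labelled subdirectories; the path is hypothetical.
dataset = torchvision.datasets.ImageFolder("path/to/imagenet/train", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)

model = torchvision.models.resnet50(num_classes=len(dataset.classes))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

model.train()
for images, labels in loader:      # one pass over the labelled data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Benchmarks such as DAWNBench time how long loops like this take to reach a target validation accuracy, which is what makes improvements in hardware and software directly comparable.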

Their research found that the time required to train a network on cloud infrastructure for supervised image recognition fell from about three hours in October 2017 to about 88 seconds in July 2019, a span of under two years. The report noted that data on ImageNet training time on private cloud instances was in line with the public cloud AI training time improvements.
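Those two figures imply a dramatic speedup, which is easy to verify directly from the approximate times quoted above.

```python
# Back-of-the-envelope speedup implied by the quoted training times.
train_time_2017_s = 3 * 60 * 60   # about three hours, October 2017
train_time_2019_s = 88            # about 88 seconds, July 2019

speedup = train_time_2017_s / train_time_2019_s
print(f"ImageNet training time speedup: ~{speedup:.0f}x")  # roughly 120x
```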

The report's authors used the ResNet image classification model to assess how long it takes algorithms to achieve a high level of accuracy. In October 2017, 13 days of training time were required to reach just above 93% accuracy. The report found that training an image classification model over those 13 days to reach 93% accuracy would have cost about $2,323 in 2017.

The study reported that the latest benchmark available on Stanford DAWNBench, using a cloud TPU on GCP to train the ResNet model to an image classification accuracy slightly above 93%, cost just over $12 in September 2018.
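The corresponding drop in cost can be checked the same way, using the two dollar figures quoted above.

```python
# Cost reduction implied by the quoted DAWNBench figures.
cost_2017_usd = 2323   # ~13 days of training to ~93% accuracy, 2017
cost_2018_usd = 12     # cloud TPU run on GCP, September 2018

reduction = cost_2017_usd / cost_2018_usd
print(f"Training cost reduced by a factor of ~{reduction:.0f}")  # roughly 190-fold
```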

The report also explored how far computer vision had progressed, looking at innovative algorithms that push the limits of automatic activity understanding by recognising human actions and activities from video, as measured by the ActivityNet Challenge.

One of the tasks in this challenge, called Temporal Activity Localisation, uses long video sequences that depict more than one activity, and the algorithm is asked to find a given activity. Today, algorithms can accurately recognise hundreds of complex human activities in real time, but the report found that much more work is needed.

"After organising the International Activity Recognition Challenge (ActivityNet) for the last four years, we observe that more research is needed to develop methods that can reliably discriminate activities, which involve fine-grained motions and/or subtle patterns in motion cues, objects and human-object interactions," said Bernard Ghanem, associate professor of electrical engineering at King Abdullah University of Science and Technology, in the report.

"Looking forward, we foresee the next generation of algorithms to be one that accentuates learning without the need for excessively large manually curated data. In this scenario, benchmarks and competitions will remain a cornerstone to track progress in this self-learning domain," he added.

