
Category Archives: Artificial Intelligence

Highflying Artificial Intelligence Chip Play Seen Flying Even Higher – Investor’s Business Daily

Posted: July 25, 2017 at 12:16 pm

Highflying chip stock Nvidia (NVDA) received a bullish report on Monday from investment bank Canaccord Genuity for its booming data-center processor business.

Canaccord analyst Matthew Ramsay reiterated his buy rating on Nvidia and raised his price target on the stock to 180 from 155.

Nvidia shares fell 1.2% to close at 166.15 on the stock market today. It dropped an additional 1.4% in after-hours trading Monday. Nvidia hit an all-time high of 169.30 on Friday.

Nvidia has diversified from graphics processors for PCs and gaming consoles into high-end computing processors for data centers, artificial intelligence, machine learning and self-driving cars.

"Our overall bullish thesis on GPU (graphics processing unit) computing continues to accelerate (particularly data center) and we believe Nvidia's emergence as a platform computing company (of which gaming is just one important piece) is now cemented," Ramsay said in a note to clients.

IBD'S TAKE: Nvidia is one of eight chip industry companies on the IBD 50 list of top-performing growth stocks. It is currently ranked No. 6.

Nvidia should be able to hold its own against increased competition in the data-center market from Intel (INTC) and Advanced Micro Devices (AMD), Ramsay said.

"We remain very impressed how quickly Nvidia has segmented the product roadmap to provide more application-specific silicon to both the quickly evolving data-center and automotive markets," he said.

On Saturday, at the annual Computer Vision and Pattern Recognition conference in Honolulu, Nvidia Chief Executive Jensen Huang unveiled the company's latest GPU, the Nvidia Tesla V100, based on its Volta architecture.

RELATED:

These Non-Tech Firms Are Making Big Bets On Artificial Intelligence

These 5 Top Chip Stocks Are Near Buy Zones


Link:

Highflying Artificial Intelligence Chip Play Seen Flying Even Higher - Investor's Business Daily

Posted in Artificial Intelligence | Comments Off on Highflying Artificial Intelligence Chip Play Seen Flying Even Higher – Investor’s Business Daily

Brainpower is so yesterday leave it to AI – Kansas City Star

Posted: at 12:16 pm


Kansas City Star
Brainpower is so yesterday leave it to AI
Smart people are starting to worry about the brainpower of machines. A recent report from Harvard said the emergence of artificial intelligence as a weapon poses as much game-changing potential as the airplane and the nuclear bomb. They worry it could ...

Read this article:

Brainpower is so yesterday leave it to AI - Kansas City Star

Posted in Artificial Intelligence | Comments Off on Brainpower is so yesterday leave it to AI – Kansas City Star

How artificial intelligence is defining the future of brick-and-mortar shopping – TNW

Posted: July 24, 2017 at 8:13 am

While online shopping has taken great strides in recent years, brick-and-mortar retail hasn't managed to keep pace.

Artificial intelligence now permeates every aspect of ecommerce platforms, especially where customer interactions are involved. Smart product suggestions, AI-powered search and cognitive customer service agents are just some of the innovations that have helped make online shopping more personalized and enjoyable for the customer, and more profitable for the retailer, of course.

Meanwhile, AI advances in brick-and-mortar retail have mostly remained in inventory management and back-of-store operations. The few innovations that have happened in the customer-facing aspects of in-person retail have little or no AI involved, and have failed to make a tangible positive impact on the shopping experience or gain wide adoption.

Fortunately, this is something that is fast changing as technological developments enable retailers to gather in-store data and deploy AI-powered solutions. Artificial intelligence can help fix old problems in retail tech as well as introduce new possibilities that were previously inconceivable. Here are some of the trends that are worth watching.

A few years ago, in-store beacons were supposed to be the biggest thing to happen to brick-and-mortar retail, but they didn't live up to the hype. Part of the problem with beacons is that they introduce new complexities without solving the real problems customers are facing. Beacons require customers to install an app that does little more than pop up annoying promotions that in no way rival the personalized suggestions of online shopping platforms.

Now, retailers are experimenting with a new generation of apps powered by machine learning algorithms, whose value goes beyond displaying prices and coupons. IBM Watson, a leader in cognitive computing and natural language processing, has partnered with several large retailers to help them better understand and serve the needs of their customers.

An example is Macy's On Call, a mobile web application that uses Watson's cognitive computing power and location-based software to help shoppers get information while they're navigating the company's stores. The application is able to parse and understand natural language queries about such things as the location of products, departments and services in a particular store, and it responds in a relevant way. As with all machine learning-based platforms, every customer interaction makes On Call smarter.

Sears Automotive is using the same technology for its Digital Tire Journey in-store web app, which helps shoppers navigate their way through the store's wide assortment of tires using a conversational interface and find what's best for their needs.

While providing value to customers, these apps are enabling retailers to gather a wealth of customer-related data that can in turn be used to fuel other AI-powered solutions.

Retailers annually lose a collective $45 billion to shrinkage, due to non-scans and other errors occurring at the point of sale. This is an especially serious problem at self-checkouts, the technology that was supposed to reduce friction and streamline the customer experience but ended up opening a Pandora's box of new problems.

A handful of companies are working toward addressing this problem in real time through artificial intelligence. Everseen, a software company founded in Cork, Ireland, uses computer vision and AI algorithms to analyze video feeds from retailers' staffed registers and self-checkout lanes and automatically detect when a product is left unscanned. Whenever Everseen detects unusual activity, it sends a notification to store management via smartwatch, tablet or other mobile device. This will help prevent theft, but it will also help provide assistance at self-checkouts, which are the source of much customer frustration. The company's current AI technology is in use by five of the world's 10 largest retailers.

StopLift is another company that offers a similar technology. StopLift uses computer vision and video analytics to detect a number of common scams and errors at checkouts. The system compares the items it detects on video to actual POS data to track items that have not been scanned.

Both solutions become better over time as they gather more data and tune themselves to the specifics of each store.

Many believe that in the future, retail will be fully automated by AI, eliminating long lines and obviating the need for checkouts altogether. This means customers can enter a store, grab the items they need and exit, without getting arrested for shoplifting.

Though the concept is far from mature, a number of companies are making headway in this direction. Last year, Amazon announced Go, a checkout-free retail store that is still in the experimental stages. Go uses computer vision, machine learning algorithms and IoT sensors to understand customers' interactions across the store. The technology automatically updates the shopping cart in an associated mobile app whenever a customer picks up or returns an item from a store shelf.

Amazon's plan to open its store to the public in 2017 has hit some hurdles. But the complexities have done nothing to deter the online retail giant's resolve to create the store of the future, and its $13.7 billion acquisition of Whole Foods might have something to do with it.

Nor have Amazon's difficulties prevented other companies from making similar moves, including Walmart, the largest retailer in the U.S., which is taking serious strides to incorporate AI in its retail stores.

Everseen, which has been working on a similar concept since 2012, plans to introduce its own checkout-free technology soon. Called 0Line, the solution will provide retailers with an AI-powered network of video cameras, sensors and biometric data to recognize customers. All of this will interact with inventory, POS and a mobile-based payment solution that will enable instant transactions. By the time customers leave the store, their accounts will have been charged and an itemized virtual receipt will be made available to them.

Thanks to a number of developments, AI's reach is fast expanding into every domain of the physical world. These examples show that brick-and-mortar retail is bound for some major transformations. In a few years the in-store shopping experience may look much different from what we're used to, maybe even smarter than its online counterpart.

This post is part of our contributor series. The views expressed are the author's own and not necessarily shared by TNW.

Read next: From Uber to Postmates: A tipping guide for the sharing economy

Go here to see the original:

How artificial intelligence is defining the future of brick-and-mortar shopping - TNW

Posted in Artificial Intelligence | Comments Off on How artificial intelligence is defining the future of brick-and-mortar shopping – TNW

China artificial intelligence bid seeks $59 billion industry – The Denver Post

Posted: at 8:13 am

China aims to make the artificial intelligence industry a new, important driver of economic expansion by 2020, according to a development plan issued by the State Council.

Policymakers want to be global leaders, with the AI industry generating more than 400 billion yuan ($59 billion) of output per year by 2025, according to an announcement from the Cabinet late Thursday. Key development areas include AI software and hardware, intelligent robotics and vehicles, virtual reality and augmented reality, it said.

"Artificial intelligence has become the new focus of international competition," the report said. "We must take the initiative to firmly grasp the next stage of AI development to create a new competitive advantage, open the development of new industries and improve the protection of national security."

The plan highlights China's ambition to become a world power backed by its technology business giants, research centers and military, which are investing heavily in AI. Globally, the technology will contribute as much as $15.7 trillion to output by 2030, according to a PwC report last month. That's more than the current combined output of China and India.

"The positive economic ripples could be pretty substantial," said Kevin Lau, a senior economist at Standard Chartered Bank in Hong Kong. "The simple fact that China is embracing AI and having explicit targets for its development over the next decade is certainly positive for the continued upgrading of the manufacturing sector and overall economic transformation."

Chinese AI-related stocks advanced Friday. CSG Smart Science & Technology Co. climbed as much as 9.3 percent in Shenzhen before closing 3.1 percent higher, while intelligent management software developer Mesnac Co. surged 9.8 percent after hitting the 10 percent daily limit in earlier trading.

AI will have a significant influence on society and the international community, according to an opinion piece by East China University of Political Science and Law professor Gao Qiqi published Wednesday in the People's Daily, the flagship newspaper of the Communist Party.

PwC found that the world's second-biggest economy stands to gain more than any other from AI because of the high proportion of output derived from manufacturing.

Another report from Accenture and Frontier Economics last month estimated that AI could increase China's annual growth rate by 1.6 percentage points to 7.9 percent by 2035 in terms of gross value added, a close proxy for GDP, adding more than $7 trillion.

The State Council directive also called for China's businesses, universities and armed forces to work more closely in developing the technology.

"We will further implement the strategy of integrating military and civilian developments," it said. "Scientific research institutes, universities, enterprises and military units should communicate and coordinate."

More AI professionals and scientists should be trained, the State Council said. It also called for promoting interdisciplinary research to connect AI with other subjects such as cognitive science, psychology, mathematics and economics.

Excerpt from:

China artificial intelligence bid seeks $59 billion industry - The Denver Post

Posted in Artificial Intelligence | Comments Off on China artificial intelligence bid seeks $59 billion industry – The Denver Post

What sort of silicon brain do you need for artificial intelligence? – The Register

Posted: at 8:13 am

The Raspberry Pi is one of the most exciting developments in hobbyist computing today. Across the world, people are using it to automate beer making, open up the world of robotics and revolutionise STEM education in a world overrun by film students. These are all laudable pursuits. Meanwhile, what is Microsoft doing with it? Creating squirrel-hunting water robots.

Over at the firm's Machine Learning and Optimization group, a researcher saw squirrels stealing flower bulbs and seeds from his bird feeder. The research team trained a computer vision model to detect squirrels, and then put it onto a Raspberry Pi 3 board. Whenever an adventurous rodent happened by, it would turn on the sprinkler system.
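
The pipeline described above — camera frame in, classifier score, sprinkler out — can be sketched in a few lines. Everything here is illustrative: the classifier is a stand-in for the real convolutional model, and the hardware call is replaced with a boolean so the control flow runs anywhere.

```python
# A minimal sketch of the detect-and-deter loop, with the CNN and the
# GPIO sprinkler call stubbed out. Only the decision logic is real.

SQUIRREL_THRESHOLD = 0.9  # confidence required before soaking anything

def classify_frame(frame):
    """Stand-in for the vision model: returns P(squirrel) for a frame.

    A real deployment would run a (likely quantized) CNN here; this stub
    just reads a precomputed score so the control flow can be exercised.
    """
    return frame.get("squirrel_score", 0.0)

def sprinkler_decision(frame):
    """Turn the sprinkler on only when the model is confident."""
    return classify_frame(frame) >= SQUIRREL_THRESHOLD

# One confident squirrel sighting, one ambiguous blob:
print(sprinkler_decision({"squirrel_score": 0.97}))  # True
print(sprinkler_decision({"squirrel_score": 0.40}))  # False
```

The threshold value is an assumption; in practice it would be tuned against the false-positive rate the gardener can tolerate.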

Microsoft's sciurine aversions aren't the point of that story; its shoehorning of a convolutional neural network onto an ARM CPU is. It shows how organizations are pushing hardware further to support AI algorithms. As AI continues to make the headlines, researchers are pushing its capabilities to make it increasingly competent at basic tasks such as recognizing vision and speech.

As people expect more of the technology, cramming it into self-flying drones and self-driving cars, the hardware challenges are increasing. Companies are producing custom silicon and computing nodes capable of handling them.

Jeff Orr, research director at analyst firm ABI Research, divides advances in AI hardware into three broad areas: cloud services, on-device, and hybrid. The first focuses on AI processing done online in hyperscale data centre environments like Microsoft's, Amazon's and Google's.

At the other end of the spectrum, he sees more processing happening on devices in the field, where connectivity or latency prohibit sending data back to the cloud.

"It's using maybe a voice input to allow for hands-free operation of a smartphone or a wearable product like smart glasses," he says. "That will continue to grow. There's just not a large number of real-world examples on-device today." He views augmented reality as a key driver here. Or there's always this app, we suppose.

Finally, hybrid efforts marry both platforms to complete AI computations. This is where your phone recognizes what you're asking it but asks cloud-based AI to answer it, for example.

The cloud's importance stems from the way that AI learns. AI models are increasingly moving to deep learning, which uses complex neural networks with many layers to create more accurate AI routines.

There are two aspects to using neural networks. The first is training, where the network analyses lots of data to produce a statistical model. This is effectively the learning phase. The second is inference, where the neural network then interprets new data to generate accurate results. Training these networks chews up vast amounts of computing power, but the training load can be split into many tasks that run concurrently. This is why GPUs, with their double floating point precision and huge core counts, are so good at it.
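
The training/inference split described above is easy to see in miniature. The sketch below fits a one-weight linear model rather than a neural network (an assumption made purely to keep it short), but the two phases are the same: a compute-heavy loop that produces a statistical model, then a cheap forward pass on new data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 256)
y = 3.0 * x + rng.normal(0, 0.05, 256)   # ground-truth slope is 3

# --- training: many passes over the data, heavy on compute ---
w = 0.0
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # dL/dw for mean squared error
    w -= 0.1 * grad                      # gradient-descent update

# --- inference: a single cheap forward pass on unseen inputs ---
x_new = np.array([0.5, -0.25])
predictions = w * x_new

print(round(w, 2))   # close to 3.0
print(predictions)
```

Note that each gradient evaluation inside the loop is itself a data-parallel reduction over all 256 samples, which is the property that makes GPUs such a natural fit for the training phase.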

Nevertheless, neural networks are getting bigger and the challenges are getting greater. Ian Buck, vice president of the Accelerated Computing Group at dominant GPU vendor Nvidia, says that they're doubling in size each year. The company is creating more computationally intense GPU architectures to cope, but it is also changing the way it handles its maths.

"It can be done with some reduced precision," he says. Originally, neural network training all happened in 32-bit floating point, but Nvidia has optimized its newer Volta architecture, announced in May, for 16-bit inputs with 32-bit internal mathematics.

Reducing the precision of the calculation to 16 bits has two benefits, according to Buck.

"One is that you can take advantage of faster compute, because processors tend to have more throughput at lower resolution," he says. Cutting the precision also increases the amount of available bandwidth, because you're fetching smaller amounts of data for each computation.

"The question is, how low can you go?" asks Buck. "If you go too low, it won't train. You'll never achieve the accuracy you need for production, or it will become unstable."
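
Both sides of Buck's trade-off can be demonstrated directly with numpy: halving the width of each value halves the bytes moved per fetch, but float16 only resolves steps of about 0.001 near 1.0, so a small update can vanish entirely — exactly the "it won't train" failure mode.

```python
import numpy as np

# Bandwidth side: same number of weights, half the bytes to move.
weights32 = np.ones(1_000_000, dtype=np.float32)
weights16 = weights32.astype(np.float16)
print(weights32.nbytes, weights16.nbytes)  # 4000000 2000000

# Precision side: a tiny gradient-sized update survives in 32-bit
# arithmetic but is rounded away completely in 16-bit arithmetic.
update = np.float32(1e-4)
print(np.float32(1.0) + update > 1.0)              # True
print(np.float16(1.0) + np.float16(update) > 1.0)  # False: update vanished
```

This is why Volta's scheme keeps 16-bit inputs but accumulates in 32 bits: the bandwidth saving comes from the stored values, while the wider accumulator protects the small-update arithmetic.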

While Nvidia refines its architecture, some cloud vendors have been creating their own chips using alternative architectures to GPUs. The first generation of Google's Tensor Processing Unit (TPU) originally focused on 8-bit integers for inference workloads. The newer generation, announced in May, offers floating point precision and can be used for training, too. These chips are application-specific integrated circuits (ASICs). Unlike CPUs and GPUs, they are designed for a specific purpose (you'll often see them used for mining bitcoins these days) and cannot be reprogrammed. Their lack of extraneous logic makes them extremely high in performance and economic in their power usage, but very expensive.
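
The 8-bit integer approach mentioned above works by mapping float weights onto a small integer range with a scale factor, doing the heavy arithmetic in integers, and rescaling afterward. The sketch below shows the simplest symmetric variant of that idea; it is illustrative only, not Google's actual quantization scheme.

```python
import numpy as np

def quantize(weights):
    """Map float weights to int8 with a single symmetric scale factor."""
    scale = np.abs(weights).max() / 127.0   # largest weight maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.array([0.8, -0.31, 0.05, 1.27], dtype=np.float32)
q, scale = quantize(w)
w_back = dequantize(q, scale)

print(q.dtype)                     # int8
print(np.max(np.abs(w - w_back)))  # small quantization error
```

Inference tolerates this rounding far better than training does, which is why the first-generation TPU could commit to int8 while the training-capable successor added floating point.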

Google's scale is large enough that it can swallow the high non-recurring expenditures (NREs) associated with designing the ASIC in the first place, because of the cost savings it achieves in AI-based data centre operations. It uses them across many operations, ranging from recognizing Street View text to performing RankBrain search queries, and every time a TPU does something instead of a GPU, Google saves power.

"It's going to save them a lot of money," said Karl Freund, senior analyst for high performance computing and deep learning at Moor Insights and Strategy.

He doesn't think that's entirely why Google did it, though. "I think they did it so they would have complete control of the hardware and software stack." If Google is betting the farm on AI, then it makes sense to control it from endpoint applications such as self-driving cars through to software frameworks and the cloud.

When it isn't drowning squirrels, Microsoft is rolling out field programmable gate arrays (FPGAs) in its own data centre revamp. These are similar to ASICs but reprogrammable, so that their algorithms can be updated. They handle networking tasks within Azure, but Microsoft has also unleashed them on AI workloads such as machine translation. Intel wants a part of the AI industry, wherever it happens to be running, and that includes the cloud. To date, its Xeon Phi high-performance CPUs have tackled general purpose machine learning, and the latest version, codenamed Knights Mill, ships this year.

The company also has a trio of accelerators for more specific AI tasks, though. For training deep learning neural networks, Intel is pinning its hopes on Lake Crest, which comes from its Nervana acquisition. This is a coprocessor that the firm says overcomes data transfer performance ceilings using a type of memory called HBM2, which is around 12 times faster than DDR4.

While these big players jockey for position with systems built around GPUs, FPGAs and ASICs, others are attempting to rewrite AI architectures from the ground up.

KnuEdge is reportedly prepping 256-core chips designed for cloud-based operations but isn't saying much.

UK-based Graphcore, due to release its technology in 2017, has said a little more. It wants its Intelligence Processing Unit (IPU) to use graph-based processing rather than the vectors used by GPUs or the scalar processing in CPUs. The company hopes that this will enable it to fit the training and inference workloads onto a single processor. One interesting thing about its technology is that its graph-based processing is supposed to mitigate one of the biggest problems in AI processing: getting data from memory to the processing unit. Dell has been the firm's perennial backer.

Wave Computing is also focusing on a different kind of processing, using what it calls its data flow architecture. It has a training appliance designed for operation in the data centre that it says can hit 2.9 PetaOPs/sec.

Whereas cloud-based systems can handle neural network training and inference, client-side devices from phones to drones focus mainly on the latter. Their considerations are energy efficiency and low-latency computation.

"You can't rely on the cloud for your car to drive itself," says Nvidia's Buck. A vehicle can't wait for a crummy connection when making a split-second decision on who to avoid, and long tunnels might also be a problem. So all of the computing has to happen in the vehicle. He touts the Nvidia P4 self-driving car platform for autonomous in-car smarts.

FPGAs are also making great strides on the device side. Intel has Arria, an FPGA coprocessor designed for low-energy inference tasks, while over at startup KRTKL, CEO Ryan Cousens and his team have bolted a low-energy dual-core ARM CPU to an FPGA that handles neural networking tasks. It is crowdsourcing its platform, called Snickerdoodle, for makers and researchers who want wireless I/O and computer vision capabilities. "You could run that on the ARM core and only send to the FPGA high-intensity mathematical operations," he says.

AI is squeezing into even smaller devices like the phone in your pocket. Some processor vendors are making general purpose improvements to their architectures that also serve AI well. For example, ARM is shipping CPUs with increasingly capable GPU areas on the die that should be able to better handle machine learning tasks.

Qualcomm's Snapdragon processors now feature a neural processing engine that decides which bits of tailored logic machine learning and neural inference tasks should run on (voice detection in a digital signal processor and image detection on a built-in GPU, say). It supports the convolutional neural networks used in image recognition, too. Apple is reportedly planning its own neural processor, continuing its tradition of offloading phone processes onto dedicated silicon.

This all makes sense to ABI's Orr, who says that while most of the activity has been in cloud-based AI processors of late, this will shift over the next few years as device capabilities balance them out. In addition to areas like AR, this may show up in more intelligent-seeming artificial assistants. Orr believes that they could do better at understanding what we mean.

"They can't take action based on a really large dictionary of what possibly can be said," he says. "Natural language processing can become more personalised and train the system rather than training the user."

This can only happen using silicon that allows more processing at given times to infer context and intent, "by being able to unload and switch through these different dictionaries that allow for tuning and personalization for all the things that a specific individual might say."

Research will continue in this space as teams focus on driving new efficiencies into inference architectures. Vivienne Sze, professor at MIT's Energy-Efficient Multimedia Systems Group, says that in deep neural network inferencing, it isn't the computing that slurps most of the power. "The dominant source of energy consumption is the act of moving the input data from the memory to the MAC [multiply and accumulate] hardware and then moving the data from the MAC hardware back to memory," she says.

Prof Sze works on a project called Eyeriss that hopes to solve that problem. "In Eyeriss, we developed an optimized data flow (called row stationary), which reduces the amount of data movement, particularly from large memories," she continues.
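
A toy accounting model shows the kind of saving such dataflows chase. In a 1-D convolution, a naive schedule fetches every input from memory once per filter tap, while a schedule that keeps a sliding window in local registers fetches each input only once. (The real Eyeriss row-stationary scheme is 2-D and considerably more involved; this is only the flavor of the argument.)

```python
def conv_fetches_naive(n_inputs, n_taps):
    """Memory fetches when every output re-reads its whole input window."""
    n_outputs = n_inputs - n_taps + 1
    return n_outputs * n_taps

def conv_fetches_window(n_inputs, n_taps):
    """Memory fetches when inputs enter a local window buffer once each."""
    return n_inputs

n, k = 1024, 9
print(conv_fetches_naive(n, k))   # 9144 fetches from memory
print(conv_fetches_window(n, k))  # 1024 fetches from memory
```

Since each off-chip fetch can cost orders of magnitude more energy than the multiply-accumulate it feeds, cutting fetches by roughly the filter width translates almost directly into the power savings Sze describes.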

There are many more research projects and startups developing processor architectures for AI. While we don't deny that marketing types like to sprinkle a little AI dust where it isn't always warranted, there's clearly enough of a belief in the technology that people are piling dollars into silicon.

As cloud-based hardware continues to evolve, expect hardware to support AI locally in drones, phones, and automobiles as the industry develops.

In the meantime, Microsoft's researchers are apparently hoping to squeeze their squirrel-hunting code still further, this time onto the 0.007mm² Cortex M0 chip. That will call for a machine learning model 1/10,000th the size of the one it put on the Pi. They must be nuts.

We'll be covering machine learning, AI and analytics and specialist hardware at MCubed London in October. Full details, including early bird tickets, right here.

See the original post:

What sort of silicon brain do you need for artificial intelligence? - The Register

Posted in Artificial Intelligence | Comments Off on What sort of silicon brain do you need for artificial intelligence? – The Register

Artificial intelligence holds great potential for both students and teachers but only if used wisely – The Conversation AU

Posted: at 8:13 am

Artificial intelligence (AI) enables Siri to recognise your question, Google to correct your spelling, and tools such as Kinect to track you as you move around the room.

Data big and small have come to education, from creating online platforms to increasing standardised assessments. But how can AI help us use and improve it?

Researchers in AI in education have been investigating how the two intersect for several decades. While it's tempting to think that the primary dream for AI in education is to reduce marking load (a prospect made real through automated essay scoring), the breadth of applications goes beyond this.

For example, researchers in AI in education have:

These are new approaches to learning that rely heavily on students engaging with new kinds of technology. But researchers in AI, and related fields such as learning analytics, are also thinking about how AI can provide more effective feedback to students and teachers.

One perspective is that researchers should worry less about making AI ever more intelligent, instead exploring the potential that relatively stupid (automated) tutors might have to amplify human intelligence.

So, rather than focusing solely on building more intelligent AI to take humans out of the loop, we should focus just as much on intelligence amplification or, going back to its intellectual roots, intelligence augmentation. This is the use of technology, including AI, to provide people with information that helps them make better decisions and learn more effectively.

This approach combines computing sciences with human sciences. It takes seriously the need for technology to be integrated into everyday life.

Keeping people in the loop is particularly important when the stakes are high, and AI is far from perfect. So, for instance, rather than focusing on automating the grading of student essays, some researchers are focusing on how they can provide intelligent feedback to students that helps them better assess their own writing.

And while some are considering if they can replace nurses with robots, we are seeking to design better feedback to help them become high-performance nursing teams.

But for the use of AI to be sustainable, education also needs a second kind of change: what we teach.

To be active citizens, students need a sound understanding of AI, and a critical approach to assessing the implications of the datafication of our lives, from the use of Facebook data to influence voting, to Google DeepMind's access to medical data.

Students also need the skills to manage this complexity, to work collaboratively and to innovate in a changing environment. These are qualities that could perhaps be amplified through effective use of AI.

The potential is not only for education to be more efficient, but to think about how we teach: to keep revolution in sight, alongside evolution.

Another response to AI's perceived threat is to harness the technologies that will automate some forms of work, to cultivate those higher-order qualities that make humans distinctive from machines.

Amid growing concerns about the pervasive role of algorithms in society, we must understand what algorithmic accountability means in education.

Consider, for example, the potential for predictive analytics in flexi-pricing degrees based on a course-completion risk-rating built on online study habit data. Or the possibility of embedding existing human biases into university offers, or educational chatbots that seek to discern your needs.

If AI delivers benefits only to students who have access to specific technologies, then inevitably this has the potential to marginalise some groups.

Significant work is under way to clarify how ethics and privacy principles can underpin the use of AI and data analytics in education. Intelligence amplification helps counteract these concerns by keeping people in the loop.

A further concern is AIs potential to result in a de-skilling or redundancy of teachers. This could possibly fuel a two-tier system where differing levels of educational support are provided.

The future of learning with AI, and other technologies, should be targeted not only at learning subject content, but also at cultivating curiosity, creativity and resilience.

The ethical development of such innovations will require both teachers and students to have a robust understanding of how to work with data and AI to support their participation in society and across the professions.

See the original post here:

Artificial intelligence holds great potential for both students and teachers but only if used wisely - The Conversation AU

Posted in Artificial Intelligence | Comments Off on Artificial intelligence holds great potential for both students and teachers but only if used wisely – The Conversation AU

AI is impacting you more than you realize – VentureBeat

Posted: at 8:13 am

In today's age of flying cars, robots, and Elon Musk, if you haven't heard of artificial intelligence (AI) or machine learning (ML), then you must be avoiding all types of media. To most, these concepts seem futuristic and not applicable to everyday life, but when it comes to marketing technology, AI and ML actually touch everyone who consumes digital content.

But how exactly are these being deployed for marketing technology and digital media? We hear about AI being applied in medical and military fields, but usually not in something as commonplace as media. Utilizing these advanced technologies actually enables martech and adtech companies to create highly personalized and custom digital content experiences across the web.

The ultimate goal of all marketers is to drive sales through positive brand-consumer engagements. But a major problem is that marketers have so much content (oftentimes more than they even realize) and millions of potential places to show it, but don't know how to determine the optimal place for each piece of content to reach specific audiences.

With all of these possible placements, it would be incredibly inefficient, if not impossible, for a human being to amass, organize, and analyze this data comprehensively and then make the smartest buying decision in real time based on the facts. Trying to test an infinite number of combinations of creative ideas and placements is like solving a puzzle that keeps adding more and more pieces while you are trying to assemble them.

So how can marketers put this data to work to efficiently distribute their content across the digital universe using the right messaging to drive the best results?

Human beings can make bad decisions based on incomplete data analysis. For example, someone might block a placement from a campaign based on one or two prior experiences with incomplete or statistically insignificant data, when it actually may perform very well. An optimization engine can leverage machine learning to understand the variance in placement performance by campaign and advertiser vertical holistically. This is why computers are simply better than humans at certain tasks.
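The small-sample trap described above can be made concrete: a placement with one or two bad observations still has a very wide range of plausible true performance, so blocking it is premature. A minimal sketch (the numbers are hypothetical, not from the article) using a Wilson score confidence interval for a conversion rate:

```python
from math import sqrt

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a rate estimated from trials."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return (center - margin, center + margin)

# Two impressions, zero conversions: the interval is roughly (0.0, 0.66),
# so the true rate could plausibly be anywhere up to ~66%.
print(wilson_interval(0, 2))

# 2,000 impressions, 20 conversions: a tight estimate near 1%.
print(wilson_interval(20, 2000))
```

The point the article makes falls out of the arithmetic: two observations tell you almost nothing, while the holistic cross-campaign view an optimization engine takes accumulates enough trials per placement to make the interval narrow.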

This does not discount the value of humans, for superior customer service and relationships will always be critical. But the combination of human power plus machine learning will yield a much better result, not only in marketing technology but across all industries that are leveraging this advanced technology.

Machine learning and AI address the real inefficiencies present in digital media and have made tremendous progress pushing the industry toward personalization. Delivering personalized content experiences to today's consumer is incredibly important, especially given the always-on, constantly connected, multi-device life that we all lead.

The power of machine learning and artificial intelligence lies in their ability to achieve massive scale that is not otherwise possible, while also maintaining relevancy. This demand for personalization escalates the number of combinations that would need to be tested to an unimaginable degree. For example, if a marketer wants to build a campaign with a personalized experience based on past browsing behavior, it becomes difficult to glean insight from the millions of combinations of the context in which their advertisement will appear and the variety of different browsing behaviors people exhibit. Even with fast, granular reporting, it is impossible to make all the necessary adjustments in a timely manner due to the sheer volume of the dataset.

Furthermore, it is often impossible to draw a conclusion from the data that can be gathered by running a single campaign. A holistic approach that models the interaction between users and a variety of different advertising verticals is necessary to have a meaningful predictor of campaign performance. This is where the real impact of a bidder powered by machine learning lies, because individual marketers are not able to observe these trends due to the fact that they may only have experience running campaigns in a specific vertical.

An intelligent bidder determines how each placement has performed in previous campaigns. If one specific placement performed poorly for multiple advertisers with similar KPIs, similar advertisers in the future will not waste money testing that placement. The learning happens very quickly and precisely. Instead of humans taking these learnings and adjusting the algorithms, the technology is making the changes as they are detected.

By leveraging the billions of historical data points from digital campaigns, predictions are made for future campaigns and then real-time performance data is applied to revisions. This is not a one-off process. The technology is constantly taking insights from user behavior and feeding them back into the algorithms, enabling personalized content experiences at scale.
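The feedback loop described above can be sketched as a simple online placement scorer that folds real-time results back into its estimates as they arrive. This is an illustrative toy under assumed names and numbers, not Bidtellect's actual bidder:

```python
from collections import defaultdict

class PlacementScorer:
    """Tracks per-placement performance and updates it online.

    Uses a smoothed conversion-rate estimate (a small pseudo-count prior)
    so new placements start from a neutral score rather than zero data.
    """
    def __init__(self, prior_conversions=1, prior_impressions=100):
        self.stats = defaultdict(lambda: [prior_conversions, prior_impressions])

    def record(self, placement, impressions, conversions):
        # Feed live campaign results back into the estimate as they arrive.
        s = self.stats[placement]
        s[0] += conversions
        s[1] += impressions

    def score(self, placement):
        conversions, impressions = self.stats[placement]
        return conversions / impressions

    def best(self, placements):
        # Bid on the placement with the highest estimated rate.
        return max(placements, key=self.score)

scorer = PlacementScorer()
scorer.record("site_a", impressions=1000, conversions=30)
scorer.record("site_b", impressions=1000, conversions=5)
print(scorer.best(["site_a", "site_b"]))  # site_a
```

A production bidder would add the cross-vertical modeling the article mentions, but the core loop is the same: record, re-score, and act on the updated estimate without a human adjusting the algorithm.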

The advertising industry has faced major challenges in relevancy for consumers and brand safety for marketers. Lack of relevancy in advertising has led to the advent of ad blockers and poor engagement, causing brands to become even more unsure of where their budgets are going and how users are responding to content. The controversy around brand safety further calls into question not only how budgets are being spent, but potential negative consequences for a brand's image.

Machine learning holds the promise of overcoming these challenges by delivering better, smarter ads to engaged consumers and restoring trust for brands in advertising spend and the technology that executes content and media.

Kris Kalish is the Director of Optimization at Bidtellect, a native advertising platform.

Link:

AI is impacting you more than you realize - VentureBeat

Posted in Artificial Intelligence | Comments Off on AI is impacting you more than you realize – VentureBeat

Time to get smart on artificial intelligence – The Hill (blog)

Posted: July 22, 2017 at 8:12 am

One of the biggest problems with Washington is that more often than not the policy conversation isn't grounded in the facts. We see this dysfunction clearly on technology policy, where Congress is largely uninformed on what the future of artificial intelligence (AI) technology will look like and what the actual consequences are likely to be. In this factual vacuum, we run the risk of ultimately adopting at best irrelevant or at worst extreme legislative responses.

That's why I was particularly interested to see the comments by Tesla CEO Elon Musk to the National Governors Association that AI is a "fundamental existential risk for human civilization." Musk is a tremendous innovator and someone who understands technology deeply, and while I don't agree with his assessment, his dramatic statement is a challenge to lawmakers to start seriously examining this topic.

The AI Caucus is working to bring together experts from academia, government and the private sector to discuss the latest technologies and the implications and opportunities created by these new changes. Already this year, we've been briefed by a variety of specialists and fellow policymakers from both Europe and the United States, and the caucus participated in events this month organized by IBM.

Congress needs to have a better grasp of what AI actually looks like in practice, how it is being deployed and what future developments likely will be, and that's where the AI Caucus comes in. AI won't just impact one specific field or region, and the issues it will raise will not fall under the jurisdiction of a single committee; ironically, AI is potentially such a big change that we might not see the forest for the trees.

It is clear that we are on the verge of a technological revolution. Artificial intelligence promises to be one of the paradigm-shifting developments of the next century, with the potential to reshape our economy just as fully as the internal combustion engine or the semiconductor. Contrary to some portrayals, AI is less about the Terminator and more about using powerful cognitive computing to find new treatments for cancer, improve crop yields and make structures like oil rigs safer. AI programming is a key component of emerging driverless car technology, new advances in designing robots to perform tasks that are too dangerous for humans to do and boosting fraud protection programs to combat identity theft.

As a former entrepreneur, I believe that innovation should always be encouraged, because it's fundamental to economic growth. Imagine if we'd tried to put the brakes on the development of telephone or radio technology a century ago, personal computer technology a generation ago or cell phone technology a decade ago. Innovation creates new opportunities that are hard to predict, new jobs, even entirely new industries. Innovation can also boost productivity and wages and reduce costs to consumers.

But that doesn't mean that there aren't relevant concerns about the disruption that AI could bring. Again, it's all about the facts, and in the past, new technologies have hurt certain jobs. While the overall impact might have been positive, there have still been industries and regions that have been hurt by automation. In manufacturing especially, we've seen automation reduce the number of jobs in recent years, in some cases to devastating effect.

We need to be honest about the fact that AI technology will replace some jobs, just as past technological advances have. In my view, we need to start the conversation now and take a hard look at how we can help those individuals who will be hurt. As policymakers, we should be thinking about the people working in jobs that are at risk and what we can do to get them through this eventual change. We should focus on preparing our country for this next wave of innovation.

As I think about policies that help anticipate AI and the changes it will bring, it is my view that the country needs to become more entrepreneurial and more innovative. That means we should make it easier to start a business, encourage more startups, and invest more in things like research and infrastructure, all to become a more dynamic economy. We have to think through how we can make benefits more portable and how we can create a more flexible, high-skill workforce. Combined with long-term trends toward an older society, we must anticipate that the shape of the economy and the job market will look very different in the decades to come. The emergence of AI is also a reminder that we must make sure our social safety net programs can meet the needs of the future. AI will create new ethical and privacy concerns as well, and these issues need to be worked out. I believe that it is imperative that we tackle these emerging issues thoughtfully and not rush into new programs or regulations prematurely.

My colleagues on the AI Caucus each have their own ideas and concerns, and part of the caucus's function is to also facilitate a dialogue between lawmakers. Our choice is to either get caught flatfooted or to proactively anticipate how things will change and work on smart policies to make sure that the country benefits as much as possible overall. The only way to do that is to become focused on the facts and focused on the future, and the AI Caucus is a bipartisan effort to make that happen.

Congressman John K. Delaney represents Maryland's Sixth District in the House of Representatives and is the founder of the AI Caucus. Delaney is the only former CEO of a publicly traded company in the House and was named one of the "World's Greatest Leaders" by Fortune in 2017.

The views expressed by this author are their own and are not the views of The Hill.

Read the original here:

Time to get smart on artificial intelligence - The Hill (blog)

Posted in Artificial Intelligence | Comments Off on Time to get smart on artificial intelligence – The Hill (blog)

Artificial intelligence, analytics help speed up digital workplace … – ZDNet

Posted: at 8:12 am

Artificial intelligence (AI) and analytics are helping to speed up the pace of digital workplace transformation in industries such as energy and utilities, financial services, manufacturing, and pharmaceuticals, according to a new report from Dimension Data.


Gaining competitive advantage and improving business processes are among the top goals of digital transformation strategies, according to the report, "The Digital Workplace Report: Transforming Your Business," which is based on a survey of 850 organizations in 15 countries.

While AI technology is still in its "infancy," it is sufficiently advanced to be working its way into companies in the form of virtual assistants, Dimension said. Manifested as bots embedded into specific applications, virtual assistants draw on AI engines and machine learning technology to respond to basic queries.

"It's no longer enough to simply implement these technologies," said Krista Brown, senior vice president, group end-user computing at Dimension Data. "Organizations have grown their use of analytics to understand how these technologies impact their business performance."

Nearly two-thirds of the organizations surveyed (64 percent) use analytics to improve customer services, and 58 percent use analytics to benchmark their workplace technologies. Thirty percent of organizations said they are far along in their digital transformation initiatives and are already reaping the benefits.

Others are still in the early stages of creating a plan. One factor that could be holding some companies back from deploying a digital workplace is their corporate culture. In a lot of cases, technology and corporate culture inhibit rather than encourage workstyle change, the report noted.

Still, the top barrier to successful adoption of new workstyles was IT issues. The complexity of the existing IT infrastructure can present a huge hurdle to implementing new collaboration and productivity tools to support flexible workstyles, Brown said. Successful transformations are achieved when IT works closely with line-of-business leaders, she said.

IT leaders in the survey were asked to rank which technologies were most important to their digital workplace strategies, and they most often cited communications and collaboration tools, as well as business applications. Half said conferencing systems have resulted in business processes that have become much more streamlined and effective.

"The digital workplace is transforming how employees collaborate, how customers are supported, and ultimately how enterprises do business," the report said. "However, the digital workplace is not a destination that most--or many--enterprises have arrived at. It is a journey that enterprises have started to take and that remains ongoing."

Making workplace technologies available to employees and other stakeholders, while important, should not be the first step, Dimension said. "Actually improving processes is a complicated set of tasks that requires more than an investment in new technology."

Results from the study show that a successful digital workplace effort starts with a comprehensive strategy that a company's leadership team has carefully defined. Along the way, new technology is deployed and new working practices are introduced.

"A successful digital transformation strategy also must have clear and measurable goals from the start and must receive continued support throughout its implementation from heads of business units across the enterprise," the report said. "IT departments then need to make sure that the right digital tools are being made available to the right set of workers, and that those workers understand how best to use them."

Go here to see the original:

Artificial intelligence, analytics help speed up digital workplace ... - ZDNet

Posted in Artificial Intelligence | Comments Off on Artificial intelligence, analytics help speed up digital workplace … – ZDNet

China announces goal of leadership in artificial intelligence by 2030 – CBS News

Posted: July 21, 2017 at 12:16 pm

FILE PHOTO: A computer mouse is illuminated by a projection of a Chinese flag in this photo illustration from October 1, 2013.

REUTERS/Tim Wimborne

BEIJING -- China's government has announced a goal of becoming a global leader in artificial intelligence in just over a decade, putting political muscle behind growing investment by Chinese companies in developing self-driving cars and other advances.

Communist leaders see AI as key to making China an "economic power," said a Cabinet statement on Thursday. It calls for developing skills and research and educational resources to achieve "major breakthroughs" by 2025 and make China a world leader by 2030.


Artificial intelligence is one of the emerging fields along with renewable energy, robotics and electric cars where communist leaders hope to take an early lead and help transform China from a nation of factory workers and farmers into a technology pioneer.

They have issued a series of development plans over the past decade, some of which have prompted complaints Beijing improperly subsidizes its technology developers and shields them from competition in violation of its free-trade commitments.

Already, Chinese companies including Tencent Ltd., Baidu Inc. and Alibaba Group are spending heavily to develop artificial intelligence for consumer finance, e-commerce, self-driving cars and other applications.

Manufacturers also are installing robots and other automation to cope with rising labor costs and improve efficiency.


Thursday's statement gives no details of financial commitments or legal changes. But previous initiatives to develop Chinese capabilities in solar power and other technologies have included research grants and regulations to encourage sales and exports.

"By 2030, our country will reach a world leading level in artificial intelligence theory, technology and application and become a principal world center for artificial intelligence innovation," the statement said.

That will help to make China "in the forefront of innovative countries and an economic power," it said.

The announcement follows a sweeping plan issued in 2015, dubbed "Made in China 2025," that calls for this country to supply its own high-tech components and materials in 10 industries from information technology and aerospace to pharmaceuticals.

That prompted complaints Beijing might block access to promising industries to support its fledgling suppliers. The Chinese industry minister defended the plan in March, saying all competitors would be treated equally. He rejected complaints that foreign companies might be required to hand over technology in exchange for market access.

China has had mixed success with previous strategic plans to develop technology industries including renewable energy and electric cars.

Beijing announced plans in 2009 to become a leader in electric cars with annual sales of 5 million by 2020. With the help of generous subsidies, China passed the United States last year as the biggest market, but sales totaled just over 300,000.

2017 The Associated Press. All Rights Reserved. This material may not be published, broadcast, rewritten, or redistributed.

Continued here:

China announces goal of leadership in artificial intelligence by 2030 - CBS News

Posted in Artificial Intelligence | Comments Off on China announces goal of leadership in artificial intelligence by 2030 – CBS News
