Daily Archives: July 28, 2017

A ‘potentially deadly’ mushroom-identifying app highlights the dangers of bad AI – The Verge

Posted: July 28, 2017 at 7:15 pm

There's a saying in the mushroom-picking community that all mushrooms are edible, but some mushrooms are only edible once.

That's why, when news spread on Twitter of an app that used "revolutionary AI" to identify mushrooms with a single picture, mycologists and fungi-foragers were worried. They called it "potentially deadly," and said that if people used it to try and identify edible mushrooms, they could end up very sick, or even dead.

Part of the problem, explains Colin Davidson, a mushroom forager with a PhD in microbiology, is that you can't identify a mushroom just by looking at it. "The most common mushroom near me is something called the yellow stainer," he told The Verge, "and it looks just like an edible horse mushroom from above and the side." But if you eat a yellow stainer there's a chance you'll be violently ill or even hospitalized. "You need to pick it up and scratch it or smell it to actually tell what it is," explains Davidson. "It will bruise bright yellow or it will smell carbolic."

And this is only one example. There are plenty of edible mushrooms with toxic lookalikes, and when identifying them you need to study multiple angles to find features like gills and rings, while considering things like whether recent rainfall might have discolored the cap. Davidson adds that there are plenty of mushrooms that live up to their names, like the destroying angel or the death cap.


"One eighth of a death cap can kill you," he says. "But the worst part is, you'll feel sick for a while, then you might feel better and get on with your day, but then your organs will start failing. It's really horrible."

The app in question was developed by Silicon Valley designer Nicholas Sheriff, who says it was only ever intended to be used as a rough guide to mushrooms. When The Verge reached out to Sheriff to ask him about the app's safety and how it works, he said the app "wasn't built for mushroom hunters, it was for moms in their backyard trying to ID mushrooms." Sheriff added that he's currently pivoting to turn the app into a platform for chefs to buy and sell truffles.

When we tried the iOS-only software this morning, we found that Sheriff had changed its preview picture on the App Store to say "identify truffles instantly with just a pic." However, the name of the app remains "Mushroom Instant Mushroom Plants Identification," and the description contains the same claim that so worried Davidson and others: "Simply point your phone at any mushroom and snap a pic, our revolutionary AI will instantly identify mushrooms, flowers, and even birds."

In our own tests, though, the app was unable to identify either common button or chestnut mushrooms, and crashed repeatedly. Motherboard also tried the app and found it couldn't identify a shiitake mushroom. Sheriff says he is planning on adding more data to improve the app's precision, and tells The Verge that his intention was never to try and replace experts, but supplement their expertise.


And, of course, if you search the iOS or Android app stores, you'll find plenty of mushroom-identifying apps, most of which are catalogues of pictures and text. What's different about this one is that it claims to use machine vision and "revolutionary AI" to deliver its results, terms that seem specifically chosen to give people a false sense of confidence. If you're selling an app to identify flowers, then this sort of language is merely disingenuous; when it's mushrooms you're spotting, it becomes potentially dangerous.

As Davidson says: "I'm absolutely enthralled by the idea of it. I would love to be able to go into a field and point my phone at a mushroom and find out what it is. But I would want quite a lot of convincing that it would be able to work." So far, we're not convinced.

See the rest here:

A 'potentially deadly' mushroom-identifying app highlights the dangers of bad AI - The Verge

Posted in Ai | Comments Off on A ‘potentially deadly’ mushroom-identifying app highlights the dangers of bad AI – The Verge

Baidu Curbs Spending on Food Delivery to Prep for AI – AdAge.com

Posted: at 7:15 pm

Credit: Bloomberg News

Baidu's move to slash spending on services from food delivery to travel helped the search giant soundly beat estimates, as it recovers from Chinese government restrictions and prepares to invest more in artificial intelligence.

China's largest search engine reported a better-than-projected 83% leap in net income after both general and traffic acquisition costs shrank. It's also considering a change in its operating structure to allow a rapidly growing finance unit -- a source of concern to Moody's Investors Service, among others -- to operate more independently.

Baidu forecast revenue for the third quarter of 23.1 billion yuan ($3.4 billion) to 23.8 billion yuan, versus the 23 billion yuan average of analysts' estimates compiled by Bloomberg. Net income soared to 4.4 billion yuan in the June quarter, sharply outpacing projections for 2.9 billion yuan.

The beat comes at a vital time for the company. It's asking investors to back investments in content and artificial intelligence projects such as autonomous driving, even though expensive forays into new businesses such as food delivery have failed to deliver market leadership. Group President Qi Lu has said the search giant can beat Alphabet Inc. at driverless cars within three to five years thanks to its Apollo program, which opens the technology up to partners.

"Our focus is to accelerate the commercialization of AI technologies," he told analysts on an earnings call.

Its U.S.-listed shares jumped 7.5% in extended trading. Baidu has cut back on costly subsidies and discounts for its struggling travel and food delivery units, part of an expansion into so-called online-to-offline or on-demand services.

But the company remains committed to spending big on TV and movie rights for a Netflix-like streaming video service called iQiyi, which has over 30 million paying subscribers. It also plans to buy content for a news aggregation service that relies on AI to target ads and content at 100 million daily active users.

"Marketing spending for O2O has come down quite visibly," said Kirk Boodry, an analyst with New Street Research. "While the numbers for the quarter looked good, we think the costs for their content this year are probably going to be back-loaded."

Revenue rose for a second straight quarter. Sales jumped 14% to 20.9 billion yuan in the June quarter versus projections for 20.7 billion yuan.

Online marketing revenue rose 5.6%, though the number of customers was down more than 20%. Baidu's ad business was hit hard last year after the government imposed harsher regulations and changed the tax status of a key product. The entire customer base had to re-register with stricter conditions and many chose to switch platforms, reducing the pool of advertisers. As a result, the company reported its first annual earnings decline since its 2005 initial public offering.

Baidu is now counting on AI projects to offset slowing growth in its core business of selling internet ads placed next to search results. One example is its financial services group, which lends money to students and others using the technology to determine credit risks. The push led Fitch Ratings and Moody's to place the company on review for a potential downgrade - both ratings agencies said the risks of such businesses were very different from its traditional strength as a search engine.

Baidu is now in the early stages of considering the structure of its finance arm. While Baidu's Lu didn't provide specifics, it may be trying to reduce risk while helping it get financial licenses available only to domestically controlled companies. Alibaba Group Holding Ltd. and JD.com Inc. have cited similar reasons when considering spinoffs of their own financial services businesses.

"We are beginning the process of working out a future operating structure that allows FSG to operate more independently to expand into areas that may require domestic licenses and enable stronger long term growth," Lu said.

-- Bloomberg News

Read the original post:

Baidu Curbs Spending on Food Delivery to Prep for AI - AdAge.com

Posted in Ai | Comments Off on Baidu Curbs Spending on Food Delivery to Prep for AI – AdAge.com

Face it, AI is better at data-analysis than humans – TNW

Posted: at 7:15 pm

It's time we stopped pretending that we're computers and let the machines do their jobs. Anodot, a real-time analytics company, is using advanced machine-learning algorithms to overcome the limitations that humans bring to data analysis.

AI can chew up all your data and spit out more answers than you've got questions for, and the e-commerce businesses that don't integrate machine learning into data analysis will lose money.

We've all been there before: you've just launched a brand new product after spending millions on the development cycle and, for no discernible reason, your online marketplace crashes in a major market and you don't find out for several hours. All those sales: lost, like an unsaved file.

Okay, maybe we haven't all been there, but we've definitely been on the other end. Error messages on checkouts, product listings that lead nowhere, and, worst of all, shortages. If we don't get what we want when we want it, we'll get it somewhere else. Anomalies in the market, and the ability to respond to them, can be the difference between profits and shutters for any business.

Data analysis isn't a popular water-cooler topic anywhere, presumably even at companies that specialize in it. Rebecca Herson, Vice President of Marketing for Anodot, explains the need for AI in the field:

"There's just so much data being generated, there's no way for a human to go through it all. Sometimes, when we analyze historical data for businesses we're introducing Anodot to, they discern things they never knew were happening. Obviously businesses know if servers go down, but if you have a funnel leaking in a few different places it can be difficult to find all the problems."

The concern isn't just lost sales; there's also product-supply disruption and customer satisfaction to worry about. In numerous case studies, Anodot found that an estimated 80 percent of the anomalies its machine-learning software uncovered were negative factors, as opposed to positive opportunities. These companies were losing money because they weren't aware of specific problems.
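Anodot's own algorithms aren't described in the article, so the following is only a generic sketch of the kind of metric monitoring being discussed: it flags points in a business metric (say, hourly checkouts) that deviate sharply from a rolling baseline. The window size, threshold, and example values are arbitrary assumptions, written here in Swift.

    import Foundation

    // Generic illustration, not Anodot's method: flag values that sit far
    // outside a rolling baseline computed over the previous `window` points.
    func anomalies(in series: [Double], window: Int = 24, threshold: Double = 3.0) -> [Int] {
        var flagged: [Int] = []
        for i in window..<series.count {
            let recent = Array(series[(i - window)..<i])
            let mean = recent.reduce(0, +) / Double(window)
            let variance = recent.map { ($0 - mean) * ($0 - mean) }.reduce(0, +) / Double(window)
            let stdDev = sqrt(variance)
            let deviation = abs(series[i] - mean)
            // A point is anomalous if it is several standard deviations from the
            // recent average, or if it breaks an otherwise perfectly flat baseline.
            if (stdDev == 0 && deviation > 0) || (stdDev > 0 && deviation / stdDev > threshold) {
                flagged.append(i)
            }
        }
        return flagged
    }

    // Example: a sudden drop in hourly checkouts shows up as an anomaly at index 40.
    var checkouts = [Double](repeating: 100, count: 48)
    checkouts[40] = 5
    print(anomalies(in: checkouts))   // [40]

Real systems would of course learn seasonality and baselines rather than use a fixed window, but the shape of the problem, spotting the leak you didn't know to look for, is the same.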

We've seen data-analysis software before, but Anodot's use of machine learning is an entirely different application. Anodot is using unsupervised AI, which draws on deep learning, to autonomously find new ways to categorize and understand data.

With customers like Microsoft, Google Waze, and Comcast, it would appear as though this software is prohibitively complex and designed for the tech elite, but Herson explains:

"This is something that, while data scientist is the new sexy profession, you won't need one to use this. It's got the data scientist baked in. If you have one, they can leverage this to provide immediate results. An e-commerce strategist can leverage the data and provide real-time analysis. This isn't something that requires a dedicated staff, your existing analysts can use this."

While we ponder the future of AI, companies like Anodot are applying it in all the right ways (see: non-lethal and money-saving). Automating data analysis isn't quite as thrilling as an AI that can write speeches for the President, but it's far more useful.


Read more here:

Face it, AI is better at data-analysis than humans - TNW

Posted in Ai | Comments Off on Face it, AI is better at data-analysis than humans – TNW

Adobe Target Upgraded With Artificial Intelligence – MediaPost Communications

Posted: at 7:15 pm

Adobe announced Thursday that Adobe Target is being upgraded with Sensei, the company's framework for artificial intelligence.

Adobe Target is Adobe's testing and optimization platform, and "will now become the personalization engine of the marketing cloud," says Kevin Lindsay, director of product marketing at Adobe Target.

Adobe is aiming to make the creative development process easier with one-touch personalization across channels including email marketing, mobile, and connected devices.

"Brands are really struggling with personalization because different people care about different lines of business," says Lindsay. "This struggle is just compounded by mobile technology and connected experiences."

A/B testing enables the analysis of multiple variations of content, and is the most common way that email marketers personalize messages, but Adobe Target will now be able to significantly expand its testing capabilities with the help of Sensei by automatically targeting customers for personalized experiences instead.


Sensei uses machine learning to determine the best experience for the individual. Marketers can then quickly deliver that experience across digital properties with one-click personalization, as Adobe Target uses Sensei to continuously update a customer's profile and preferences after every action.

An automated offer feature also helps marketers send the best promotional offer to subscribers who are predicted to be most interested in that content. This could be especially helpful for email marketers during times of high traffic, such as the holidays. Adobe has also added a backup policy, so marketers can hold back a certain amount of traffic and only send new content to a control group.

"Sensei takes manual optimization to the next level because it takes all variables into consideration," says Lindsay. "We don't believe that any human can possibly appreciate what's happening on a large scale when millions of people are coming to a site. Real-time signals can tell a story that might not immediately be obvious to us mere mortals."

The solution also contains visualization of analytics so marketers can see trends and quickly react to them, and variables that are having an impact on an experience can be pulled to the surface.

Lindsay says creative teams should not feel threatened by the machines.

"Putting creative in front of the right people at scale will allow marketers to be even more creative," he says. "It can better ensure the right people are seeing the right products and efforts."

In addition, Adobe announced plans to launch the beta version of a new recommendations algorithm in September. Lindsay says the recommendation engine has been outperforming previous algorithms by as much as 60%, and will be available for email marketers.

"It's inspired by natural language," explains Lindsay. "It basically does an analysis of a customer's entire online behavior, and starts to look at the signal these interactions provide. As the machine ingests more and more data, it begins to interpret them almost like words."

Finally, Adobe hinted at a new open technology the company is working on that will allow brands to plug their own proprietary algorithms into Adobe Target for customized personalization. The tool is not yet available, but Adobe says it will provide more details once product development is completed next year.

The standard pricing for Adobe Target includes more rules-based targeting, such as A/B testing, but the premium version has now been upgraded with Sensei. Subscribers to the Adobe Marketing Premium product will be able to access the new personalization solution Friday morning when they enter their office.

More:

Adobe Target Upgraded With Artificial Intelligence - MediaPost Communications

Posted in Artificial Intelligence | Comments Off on Adobe Target Upgraded With Artificial Intelligence – MediaPost Communications

Artificial Intelligence Develops Its Own Language – IGN

Posted: at 7:15 pm


We haven't quite reached the terrifying sci-fi hellscape described by the Terminator franchise, but researchers at Facebook have brought us just a bit closer to the age of the machines. Recently, they pulled the plug on an artificial intelligence system after it developed its own language.

The AI in question was actually designed to maximize efficiency in language, but according to Fast Co. Design, the researchers forgot to add a crucial rule to its programming: the language had to be English. So the "two AI agents" went on to communicate as efficiently as their programming would allow, putting the conversation between the two outside the understanding of humans.

"Agents will drift off understandable language and invent codewords for themselves," Georgia Tech research scientist Dhruv Batra said. This isn't anything new, either. It's something that keeps cropping up when researchers experiment with this type of AI.

The purpose of these particular Facebook AI agents is to communicate in English, so programmers reworked the code to get the AI back on track. But if AI is left to its own devices, Fast Co. Design said, it eventually creates a language all its own, one that can't be understood by human beings.

Now is the perfect time to prepare yourself for the end of humanity's reign over Earth by watching the new 4K Blu-ray of Terminator 2. It seems less like a blockbuster action film from the '90s and more like a dark foretelling of our grim future under the emotionless rule of the machines. Regardless of our impending doom, it's a great movie.

Seth Macy is IGN's weekend web producer and just wants to be your friend. Follow him on Twitter @sethmacy, or subscribe to Seth Macy's YouTube channel.

Read more:

Artificial Intelligence Develops Its Own Language - IGN

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence Develops Its Own Language – IGN

Should Artificial Intelligence Be Regulated? – HuffPost

Posted: at 7:15 pm

By Anthony Aguirre, Ariel Conn and Max Tegmark

Should artificial intelligence be regulated? Can it be regulated? And if so, what should those regulations look like?

These are difficult questions to answer for any technology still in its development stages: regulations, like those on the food, pharmaceutical, automobile and airline industries, are typically applied after something bad has happened, not in anticipation of a technology becoming dangerous. But AI has been evolving so quickly, and the impact of AI technology has the potential to be so great, that many prefer not to wait and learn from mistakes, but to plan ahead and regulate proactively.

In the near term, issues concerning job losses, autonomous vehicles, AI and algorithmic decision-making, and bots driving social media require attention by policymakers, just as many new technologies do. In the longer term, though, possible AI impacts span the full spectrum of benefits and risks to humanity, from the possible development of a more utopian society to the potential extinction of human civilization. As such, it represents an especially challenging situation for would-be regulators.

Already, many in the AI field are working to ensure that AI is developed beneficially, without unnecessary constraints on AI researchers and developers. In January of this year, some of the top minds in AI met at a conference in Asilomar, Calif. A product of this meeting was the set of Asilomar AI Principles. These 23 principles represent a partial guide, its drafters hope, to help ensure that AI is developed beneficially for all. To date, over 1,200 AI researchers and over 2,300 others have signed on to these principles.

Yet aspirational principles alone are not enough, if they are not put into practice, and a question remains: is government regulation and oversight necessary to guarantee that AI scientists and companies follow these principles and others like them?

Among the signatories of the Asilomar Principles is Elon Musk, who recently drew attention for his comments at a meeting of the National Governors Association, where he called for a regulatory body to oversee AI development. In response, news organizations focused on his concerns that AI represents an existential threat. And his suggestion raised concerns with some AI researchers who worry that regulations would, at best, be unhelpful and misguided, and at worst, stifle innovation and give an advantage to companies overseas.

But an important and overlooked comment by Musk related specifically to what this regulatory body should actually do. He said:

"The right order of business would be to set up a regulatory agency. Initial goal: gain insight into the status of AI activity, make sure the situation is understood, and once it is, put regulations in place to ensure public safety. That's it. I'm talking about making sure there's awareness at the government level."

There is disagreement among AI researchers about what the risk of AI may be, when that risk could arise, and whether AI could pose an existential risk, but few researchers would suggest that AI poses no risk. Even today, we're seeing signs of narrow AI exacerbating problems of discrimination and job loss, and if we don't take proper precautions, we can expect problems to worsen, affecting more people as AI grows smarter and more complex.

The number of AI researchers who signed the Asilomar Principles, as well as the open letters regarding developing beneficial AI and opposing lethal autonomous weapons, shows that there is strong consensus among researchers that we need to do more to understand and address the known and potential risks of AI.

Some of the principles that AI researchers signed directly relate to Musk's statements, including:

3) Science Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

The right policy and governance solutions could help align AI development with these principles, as well as encourage interdisciplinary dialogue on how that may be achieved.

The recently founded Partnership on AI, which includes the leading AI industry players, similarly endorses the idea of principled AI development; its founding document states that "where AI tools are used to supplement or replace human decision-making, we must be sure that they are safe, trustworthy and aligned with the ethics and preferences of people who are influenced by their actions."

And as Musk suggests, the very first step needs to be increasing awareness about AI's implications among government officials. Automated vehicles, for example, are expected to eliminate millions of jobs, which will affect nearly every governor who attended the talk (assuming they're still in office), yet the topic rarely comes up in political discussion.

AI researchers are excited, and rightly so, about the incredible potential of AI to improve our health and well-being: it's why most of them joined the field in the first place. But there are legitimate concerns about the possible misuse and/or poor design of AI, especially as we move toward advanced and more general AI.

Because these problems threaten society as a whole, they can't be left to a small group of researchers to address. At the very least, government officials need to learn about and understand how AI could impact their constituents, as well as how more AI safety research could help us solve these problems before they arise.

Instead of focusing on whether regulations would be good or bad, we should lay the foundations for constructive regulation in the future by helping our policy-makers understand the realities and implications of AI progress. Let's ask ourselves: how can we ensure that AI remains beneficial for all, and who needs to be involved in that effort?

See the original post:

Should Artificial Intelligence Be Regulated? - HuffPost

Posted in Artificial Intelligence | Comments Off on Should Artificial Intelligence Be Regulated? – HuffPost

Researchers: Artificial Intelligence Can Help Fight Deforestation in Congo – Voice of America

Posted: at 7:15 pm

LONDON

A new technique using artificial intelligence to predict where deforestation is most likely to occur could help the Democratic Republic of Congo (DRC) preserve its shrinking rainforest and cut carbon emissions, researchers have said.

Congo's rainforest, the world's second-largest after the Amazon, is under pressure from farms, mines, logging and infrastructure development, scientists say.

Protecting forests is widely seen as one of the cheapest and most effective ways to reduce the emissions driving global warming.

But conservation efforts in DRC have suffered from a lack of precise data on which areas of the country's vast territory are most at risk of losing their pristine vegetation, said Thomas Maschler, a researcher at the World Resources Institute (WRI).

"We don't have fine-grain information on what is actually happening on the ground," he told the Thomson Reuters Foundation.

To address the problem Maschler and other scientists at the Washington-based WRI used a computer algorithm based on machine learning, a type of artificial intelligence.

The computer was fed inputs, including satellite-derived data, detailing how the landscape in a number of regions, accounting for almost a fifth of the country, had changed between 2000 and 2014.

The program was asked to use the information to analyze links between deforestation and the factors driving it, such as proximity to roads or settlements, and to produce a detailed map forecasting future losses.
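The article doesn't specify the model WRI used, but the general shape of such a forecast can be sketched as a per-cell risk score computed from driver features like distance to roads or settlements. The feature names, weights, and sigmoid scoring below are hypothetical, for illustration only (Swift):

    import Foundation

    // Hypothetical feature set for one cell of the map grid; a real model would
    // learn its weights from the 2000-2014 satellite record rather than use the
    // hard-coded values shown here.
    struct MapCell {
        let distanceToRoadKm: Double
        let distanceToSettlementKm: Double
        let recentLossNearby: Double   // fraction of neighboring cells already cleared
    }

    // Logistic-regression-style score: closeness to roads/settlements and nearby
    // past loss push the predicted probability of deforestation upward.
    func deforestationRisk(_ cell: MapCell) -> Double {
        let weights = (road: -0.8, settlement: -0.5, past: 2.0, bias: 0.1)   // assumed values
        let z = weights.road * cell.distanceToRoadKm +
                weights.settlement * cell.distanceToSettlementKm +
                weights.past * cell.recentLossNearby +
                weights.bias
        return 1.0 / (1.0 + exp(-z))
    }

    let cell = MapCell(distanceToRoadKm: 1.2, distanceToSettlementKm: 3.0, recentLossNearby: 0.4)
    print(deforestationRisk(cell))   // risk score between 0 and 1 for this cell

Scoring every cell this way is what turns a region-wide average into the kind of fine-grained map the researchers describe.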

Overall, the application predicted that woods covering an area roughly the size of Luxembourg would be cut down by 2025, releasing 205 million metric tons of carbon dioxide (CO2) into the atmosphere.

The study improved on earlier predictions that could only forecast average deforestation levels in DRC over large swathes of land, said Maschler.

"Now, we can say: 'actually the corridor along the road between these two villages is at risk'," Maschler said by phone late on Thursday.

The analysis will allow conservation groups to better decide where to focus their efforts and help the government shape its land use and climate change policy, said scientist Elizabeth Goldman who co-authored the research.

The DRC has pledged to restore 3 million hectares (11,583 square miles) of forest to reduce carbon emissions under the 2015 Paris Agreement, she said.

But Goldman said the benefits of doing that would be outweighed more than six times over by simply cutting predicted forest losses by 10 percent.

Go here to read the rest:

Researchers: Artificial Intelligence Can Help Fight Deforestation in Congo - Voice of America

Posted in Artificial Intelligence | Comments Off on Researchers: Artificial Intelligence Can Help Fight Deforestation in Congo – Voice of America

Artificial intelligence can turn 2D photos into real-world objects – Science Magazine

Posted: at 7:15 pm

Purdue University

By Matthew Hutson, Jul. 27, 2017, 4:15 PM

People have no trouble looking at a photo and understanding the 3D shapes of the objects within: people, cars, Shiba Inus. But computers, with little experience in the real world, aren't so smart, yet. Now, scientists have created a new "unwrapping" method that comes much closer. They started by teaching an algorithm to treat 3D objects as 2D surfaces. Imagine, for example, hollowing out a mountainous globe and flattening it into a rectangular map, with each point on the surface displaying latitude, longitude, and altitude. After much practice, the new machine-learning algorithm learned to translate photos of 3D objects (like the first row of planes, above) into 2D surfaces, which can then be stitched into 3D forms. Researchers trained it to reconstruct cars, airplanes, and hands in almost any posture. Whereas an earlier method warped sedans into hatchbacks and rendered planes birdlike (see the second row of airplanes, above), this new method could more accurately infer 3D shapes from photos, the authors reported this week at the Institute of Electrical and Electronics Engineers Conference on Computer Vision and Pattern Recognition, in Honolulu. The new program, called SurfNet (after the word "surface"), could also invent brand new, realistic-looking 3D shapes for cars, planes, and hands. Future applications might include designing objects for virtual and augmented reality, creating 3D maps of rooms for robot navigation, and designing computer interfaces controlled with hand gestures. Thumbs up.

The rest is here:

Artificial intelligence can turn 2D photos into real-world objects - Science Magazine

Posted in Artificial Intelligence | Comments Off on Artificial intelligence can turn 2D photos into real-world objects – Science Magazine

Artificial Intelligence: Apple’s Second Revolutionary Offering – Seeking Alpha

Posted: at 7:15 pm

In an earlier article on Augmented Reality, I noted that Apple (NASDAQ:AAPL) faces challenges for growth of its iPhone business, as many worldwide markets have become saturated, and the replacement rate for existing customers has dropped. I noted that Apple has weathered this change by continuing to charge premium prices for its product (against the predictions of many naysayers), and it can do this for two reasons.

1- Its design and build quality is unsurpassed, and

2- It's always on the cutting edge of new technology.

For these reasons, customers feel that there is value in the iconic product.

Number two leads the investor to the question:

While the earlier articles centered on augmented reality, this one will focus on Artificial Intelligence (AI) and Machine Learning (ML). This is an important topic for the investor, as it is a critical part of the answer to the question above.

Most analysts focus on the easily visible aspects of devices, ignoring the deeper innovations because they don't understand them. For example, when Apple stunned the tech world in 2013 by introducing the first 64-bit mobile system on a chip (processor), the A7, many pundits played down the importance of the move. They argued that it made little difference, and listed a variety of reasons. Yet they ignored the really important advantages, particularly the tremendously strengthened encryption features. This paved the way for the enhanced security features that include complete on-device data encryption, Touch ID and Apple Pay.

Apple's forays into AR and now ML are further examples of this. While AR captures the imagination of many people and the new interface has been covered, the less understood Machine Learning interface has been virtually ignored, in spite of the fact that going forward it will be a very important enabling technology. Product differentiation and performance are key to Apple maintaining its position, and thus key to the investor's understanding.

Machine Learning is a type of program that gives a response to input without having been explicitly programmed with the knowledge. Instead, it is trained by being presented with a set of inputs and the desired response. From these, the program learns to judge a new input.

This is different from earlier Knowledge Based Systems. These were explicitly programmed. For example, in a simple wine program I developed for a class, there was a long list of rules, essentially of the form:

- IF (type = RED) AND (acidity = LOW) THEN respond with XXX

- IF (type = RED) AND (acidity = HIGH) THEN respond with ZZZ

In an ML system, these rules do not exist. Instead, a set of samples is presented and the system learns how to infer the correct responses.

There are a lot of different configurations for such learning systems, many using the Neural Network concept. This is based on the interconnected network of the brain. Here each individual neuron (brain cell) receives connections from many other neurons, and then in turn connects to many others. As a person experiences new things, the connections between the excited cells get strengthened or facilitated, so that a given network is more easily excited in the future if the same or similar input is given.

Computer neural nets work analogously, though obviously digitally. The program defines a set of cells organized into a series of levels. Each is influenced by some subset of the others and in turn influences yet other cells, until a final level produces a result. The degree to which the value of one cell changes the value of another cell to which it is connected is specified by the weight of the connection. This is where the magic lies.

During training, when a pattern is presented to it, the strong connections are strengthened (and others possibly weakened). This is repeated for various inputs. Eventually, the system is deemed trained, and the set of connections is saved as a trained model for use in an application. (Some systems allow for continued training after deployment.)

(For an interesting anecdote on how this works in the brain, see this story.)
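To make the weight-adjustment idea concrete, here is a minimal sketch of a single artificial neuron trained perceptron-style from examples. This is an illustration in Swift, not Apple's code; the AND function, learning rate, and epoch count are arbitrary choices:

    import Foundation

    // One artificial "cell": a weighted sum of its inputs pushed through a step
    // activation. Training nudges the connection weights toward the desired output.
    struct Neuron {
        var weights: [Double]
        var bias: Double

        func fire(_ inputs: [Double]) -> Double {
            let sum = zip(weights, inputs).map(*).reduce(bias, +)
            return sum > 0 ? 1.0 : 0.0
        }

        // Perceptron rule: strengthen or weaken each connection in proportion to
        // the error between the desired and actual output.
        mutating func train(_ inputs: [Double], target: Double, rate: Double = 0.1) {
            let error = target - fire(inputs)
            for i in weights.indices {
                weights[i] += rate * error * inputs[i]
            }
            bias += rate * error
        }
    }

    // Teach the neuron a logical AND from samples rather than explicit IF/THEN rules.
    var neuron = Neuron(weights: [0.0, 0.0], bias: 0.0)
    let samples: [([Double], Double)] = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    for _ in 0..<20 {
        for (inputs, target) in samples { neuron.train(inputs, target: target) }
    }
    print(neuron.fire([1, 1]))   // 1.0 once trained

Real networks stack thousands of such cells in layers and use smoother activations and gradient-based updates, but the principle, repeated exposure adjusting connection weights, is the same.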

Many people think of AI as some big thing on mainframes, such as Watson by IBM (IBM), which became champion at Jeopardy, or in research labs at Google (GOOG) (NASDAQ:GOOGL) or Microsoft (MSFT). They think that this is for the big problems of industry.

"Research at Google is at the forefront of innovation in Machine Intelligence, with active research exploring virtually all aspects of machine learning, including deep learning and more classical algorithms. Exploring theory as well as application, much of our work on language, speech, translation, visual processing, ranking and prediction relies on Machine Intelligence. In all of those tasks and many others, we gather large volumes of direct or indirect evidence of relationships of interest, applying learning algorithms to understand and generalize." (Google page)

But this is not the case. ML applications are running on your smartphone and home computer now. Text prediction on your keyboard, facial recognition in your photos, be it in your photos app or in Facebook (FB), and speech recognition such as Siri, Amazon's (AMZN) Echo, etc., all use ML systems to perform these tasks. Many of these are actually sent off to servers in the cloud to do the heavy-lifting computing, because it is indeed heavy lifting; that is, it requires a great deal of compute power. Nvidia (NVDA) is surging precisely because of its new Tesla series products on the server end of this industry.

So, what has Apple done?

A few weeks ago, Apple (AAPL) held its Worldwide Developers Conference (WWDC), opening with the keynote address where Tim Cook and friends introduced new features of their line of products. While many focused on the iPad Pro, the new iOS and Mac OS features or the HomePod speaker, for the long term the real news for the investor is the AR and ML toolkits introduced.

Investors may be wondering:

What Core ML does is simple: it allows app writers to incorporate an ML model into their app by simply dragging it into the program code window. It also provides a single, simple method to send target data into that model and retrieve an answer.

The purpose of a model is to categorize or provide some other simple answer to a set of data. Input might be one piece of data, such as an image, or several, as a stream of words.

The model is a different story altogether. This is the complicated part.

Apple provides access to a lot of standard models. The programmer can simply select one of these, and plop it into the program. If not, then the programmer, or an AI specialist, would go to one of a number of available ML tools to specify a network and train it. Apple has provided tools to translate these trained models into the format that the Core ML process uses. (Apple has provided its format as open source for other developers to use.)

The amazing thing is that one can pull a model into their program code, and then write as little as three or four lines of new code to use it. That is, once you have the model, you can create a simple app to use it literally in a matter of minutes. This is a dazzling accomplishment.

An interesting thing is that the programmer's call to the model, to send in data and retrieve the response, is exactly the same no matter what the model. Obviously one needs to send in the correct type of data (image, sound file, text), but the manner of doing so is exactly the same no matter what type of data is assessed or what the inherent structure of the model itself is. This enormously simplifies programming. The presenters continually emphasized that developers should focus on the user experience, not on implementation details.
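The article doesn't show code, but a minimal sketch of the calling pattern it describes might look like the following, using Apple's Vision and Core ML frameworks. FlowerClassifier is a hypothetical stand-in for whatever generated model class appears after dragging a .mlmodel file into Xcode:

    import CoreML
    import Vision
    import CoreGraphics

    // Classify an image with a bundled Core ML model. "FlowerClassifier" is a
    // placeholder name for the class Xcode generates from a dragged-in .mlmodel.
    func classify(_ image: CGImage) throws {
        let model = try VNCoreMLModel(for: FlowerClassifier().model)
        let request = VNCoreMLRequest(model: model) { request, _ in
            guard let results = request.results as? [VNClassificationObservation],
                  let top = results.first else { return }
            print("Top label: \(top.identifier) (confidence \(top.confidence))")
        }
        // The handler wraps the input; perform() runs the model entirely on-device.
        try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    }

Swapping in a sound or text model changes the input type and the observation class, but not the overall shape of the call, which is the uniformity described above.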

One of the great things about Core ML is that the apps perform all the calculations on the device. Nothing is sent to a remote server. This provides the following benefits:

One area of interest (at least for the technophile) is some of the benefits of the actual implementation.

Software on a computer (and a smartphone is a computer) is layered, where each layer creates a logical view of the world, but really is no more than a bunch of code using the layer below it. Thus, a developer can call a routine to create a window (sending in a variety of parameters for the size and location, color, etc.), and this will perform the enormous number of operations from the lower levels that are required to open up a graphic display that we recognize as a window. In some cases, the upper layers of abstraction are the same for different devices, in spite of very different real implementations.

The illustration shows Apple's implementation of Core ML and how it sits on top of other layers. In this case, there are ML layers for vision, etc. that sit on top of the Core ML itself. But the important thing here is that we can see how Core ML sits on top of Accelerate and Metal Performance Shaders.

Metal is the Apple graphics interface for accelerating graphics performance. It improves this immensely. Shaders are the units that actually perform the calculations in a Graphics Processing Unit (see GPU section of this post).

One might wonder why ML services would be built on top of graphics processors. As noted in the post on GPUs mentioned above, a graphic (photo, drawing, video frame) consists of thousands or millions of tiny picture elements, or pixels. Editing the frame consists of applying some mathematical operation to each of the pixels, sometimes depending on its neighbors. This means you want to perform the same operation on millions of different data pieces. As I noted earlier, a neural network consists of many cells, each with many connections. One system boasts 650K neurons with 630M connections. Yet the actual adjustment of the weights of the connections is a simple arithmetic operation. So a GPU is actually spectacular at ML processing, performing the same calculation on hundreds or even thousands of cells in parallel. Apple's Metal technology lets ML programs access the GPU compute cells directly.
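As a rough illustration of that data-parallel pattern, here is one cell's worth of work expressed as whole-vector operations, using the Accelerate framework's vDSP routines (one of the layers in Apple's diagram) rather than Metal itself; the sizes and random values are made up:

    import Accelerate

    // Illustration of data-parallel ML arithmetic: multiply every input by its
    // connection weight, then sum, as two vectorized calls instead of a loop.
    let count = 1_000
    let inputs = (0..<count).map { _ in Double.random(in: 0...1) }
    let weights = (0..<count).map { _ in Double.random(in: -1...1) }
    var weighted = [Double](repeating: 0, count: count)

    // Element-wise multiply: one call covers all 1,000 connections.
    vDSP_vmulD(inputs, 1, weights, 1, &weighted, 1, vDSP_Length(count))

    // Sum the weighted inputs to get the cell's raw activation.
    var activation = 0.0
    vDSP_sveD(weighted, 1, &activation, vDSP_Length(count))
    print(activation)

A GPU takes the same idea further, running such operations for hundreds of cells at once, which is exactly the workload Metal Performance Shaders expose to Core ML.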

The important thing to understand here is that Apple has built the Core ML engines on top of these high-performance technologies. Thus, it comes for free to the app developer. All the hard work of programming an ML engine has been done, fine-tuned, accelerated, and debugged. The importance of this is really hard to convey to someone who does not know the development process. It gives every app developer the benefit of literally scores of programmers working for several years to make their little app effective, correct, and robust.

Finally, there is one last card in Apple's hand, yet to be officially shown. Back in May, Bloomberg reported, citing reliable sources, that Apple is working on a dedicated ML chip, called the Neural Engine.

This makes a lot of sense. A standard GPU is great for doing ML computations, but in the end, it was designed first to handle graphics. The design would probably be quite similar, but totally tailored to the ML tasks. My guess is that this Neural Engine will make its debut on the iPhone 8 that is expected to be released in the fall (along with updated iPhone 7s/Plus). It would be a tantalizing incentive for buyers, a major differentiator for the line. With time, it would become available on all new phones (perhaps not the low end SE). With this chip, I believe Siri would move completely onto the device. It could also be used on Macs.

ML models require a tremendous amount of computation. As such, they consume a great deal of battery power. As new generations of chips have emerged with continually shrinking transistor size (thus increasing compute power and efficiency), it has become more realistic to run some models locally. Additionally, the GPUs that Apple has built on their A-series chips have grown at an extraordinary rate. Graphics performance in the new iPad Pro, with A10x processor, is an astounding 500 times that of the original iPad. According to Ash Hewson of Serif software, the performance is literally four times that of an Intel i7 quad core desktop PC.

Still, on a portable device, every drop of battery power is precious. So if Apple can save by designing its own specialty chips, then it will be worth it. They have the talent and the capacity.

And yet another motivation: there is still a lot of evidence that Apple is working on self-driving car technology. It would be just like them to want to own the process from hardware to software. With their own ML processor, they would be free from worries that some other company would have control of a key technology. (This is why they created the browser Safari.) Metal is a software/hardware interface specification. It relies implicitly on a hardware platform that conforms to its specifications. Having their own Neural Engine chip will assure this, even as they move into self-driving cars.

As an aside it is interesting to note that the Core ML libraries (including Metal 2) will run on the Mac as well as iOS. Apple is gradually moving to unify the two platforms in many respects.

With the iPhone itself, one can try to predict sales and costs and come up with a guess as to revenue and profit for a given time frame. Both ML and AR projects have little in terms of applications at the moment, and so their impact on sales is rather ephemeral at this time. Still, this is an important investment in the future. I stated above that Core ML is an important enabling technology. The fact is simple: with a huge lead in this arena, performance in ML tasks will far and away outstrip that of any competitor for many years to come.

At first the most visible will be AR titles, since they tend to be very flashy. But AI titles will slowly begin to gain traction. Other platforms will be left in the dust in terms of performance. (Watch the Serif Affinity Photo demo in the WWDC keynote video, at time 1:40:10, to see just how astoundingly fast the iPad Pro is.)

With these tools, hardware and software, Apple will assure itself of being far and away the leader in basic platform technology. This will allow them to attract new customers and encourage upgrades. Exactly what the investor wants.

Disclosure: I am/we are long IBM, AAPL.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

More:

Artificial Intelligence: Apple's Second Revolutionary Offering - Seeking Alpha

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence: Apple’s Second Revolutionary Offering – Seeking Alpha

The Most Influential Memes on the Internet – Fox Weekly

Posted: at 7:14 pm


Dawkins' theory of the meme gave rise to the field of memetics in the 1990s, which seeks to link the scientific concept of the meme with identifiable evidence using the scientific method. The popular Internet meme is something people can imitate in ...

Read more from the original source:

The Most Influential Memes on the Internet - Fox Weekly

Posted in Memetics | Comments Off on The Most Influential Memes on the Internet – Fox Weekly