Daily Archives: April 19, 2017

The Great AI Recruitment War: Amazon Is On Top, And Apple Is … – Forbes

Posted: April 19, 2017 at 10:07 am


The top 20 AI recruiters are spending more than $650 million annually to hire talent in this field.


See the original post:

The Great AI Recruitment War: Amazon Is On Top, And Apple Is ... - Forbes

Posted in Ai | Comments Off on The Great AI Recruitment War: Amazon Is On Top, And Apple Is … – Forbes

Google’s New Chip May Be the Future of AI Systems – The Motley Fool – Motley Fool

Posted: at 10:07 am

Alphabet's (NASDAQ:GOOGL) (NASDAQ:GOOG) Google announced at its I/O Developers Conference in May 2016 that it had designed a new chip, called the tensor processing unit (TPU), built specifically for the demands of training artificial intelligence (AI) systems. The company didn't divulge much at the time, but in a blog post that same week, hardware engineer Norm Jouppi revealed that Google had been running the TPU in the company's data centers for more than a year and...

... found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law).

The chip was an application-specific integrated circuit (ASIC), a microchip designed for a specific application. Little else was known about the enigmatic TPU, and the mystery continued until last week, when Google pulled back the curtain to reveal the inner workings of this new groundbreaking advancement for AI.

Google's tensor processing unit could revolutionize AI processing. Image source: Google.

The TPU underlies TensorFlow, Google's open-source machine learning framework, a collection of algorithms that power the company's deep neural networks. These AI systems are capable of teaching themselves by processing large amounts of data. Google tailored the TPU to meet the unique demands of training its AI systems, which had previously run primarily on graphics processing units (GPUs) manufactured by NVIDIA Corporation (NASDAQ:NVDA). While the company runs the TPU and GPU side by side for now, the shift could have drastic implications for how AI systems are trained going forward.
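
To make the workload concrete, here is a minimal sketch, assuming the current tf.keras API, of the kind of dense neural-network training job that a framework like TensorFlow expresses and that accelerators such as the TPU are built to speed up. The data and model are toy placeholders for illustration, not anything Google has published.

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy dataset: 1,000 samples with 20 features and a binary label.
x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10.0).astype("float32")

# A small dense network: the kind of matrix-multiply-heavy workload
# that GPUs and TPUs accelerate far better than general-purpose CPUs.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))
```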

Google released a study -- authored by more than 70 contributors -- that provided a detailed analysis of the TPU. In a blog post earlier this month, Jouppi laid out the capabilities of the chip. He described how it processed AI production workloads 15 to 30 times faster than CPUs and GPUs performing the same task, and achieved a 30 to 80 times improvement in energy efficiency.

Google realized several years ago that if customers were to use Google voice search for just three minutes each day, the company would need to double its existing number of data centers. The company also credits the TPU with providing faster response times for search, serving as the linchpin for improvements in Google Translate, and acting as a key factor in its AI system's defeat of a world champion at the ancient Chinese game of Go.

Companies are taking a variety of approaches to improving AI systems. Intel Corporation's (NASDAQ:INTC) recently acquired start-up Nervana has developed its own ASIC, the Nervana Engine, which strips out the GPU components that are not essential to AI workloads. The company also re-engineered the memory and believes it can achieve 10 times the processing currently delivered by GPUs. Intel is working to integrate this capability into its existing processor platforms to better compete with NVIDIA's offering.

A field-programmable gate array (FPGA) is a processor that can be reprogrammed after installation, and it is another chip being leveraged for gains in AI. FPGAs have increasingly been used in data centers to accelerate machine learning. Apple Inc. (NASDAQ:AAPL) is widely believed to have installed such a chip in its iPhone 7 to run sophisticated AI features locally on each phone. The company has emphasized not sacrificing user privacy to make advances in AI, so this would be a logical move for its smartphones.

NVIDIA Tesla P100 powers Facebook's AI server. Image source: NVIDIA.

Facebook, Inc. (NASDAQ:FB) has taken a different approach in optimizing its recently released data center server, named Big Basin. The company created a platform that utilizes eight NVIDIA Tesla P100 GPU accelerators, billed as "the most advanced data center GPU ever built," attached with NVLink connectors designed to reduce bottlenecks. The company revealed that this latest server is capable of training machine learning models that are 30% larger in about half the time. Facebook also indicated that the architecture was based on NVIDIA's DGX-1 "AI supercomputer in a box."

Though we hear about AI breakthroughs almost daily, it is important to remember that the science is still in its infancy, and new developments will likely continue at a rapid pace. These advances make for more efficient systems and lay the foundation for future progress in the field. They will propel future innovation, but they are difficult to quantify in dollars and cents, as are their potential effects on future revenue and profitability.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Danny Vena owns shares of Alphabet (A shares), Apple, and Facebook. Danny Vena has the following options: long January 2018 $85 calls on Apple, short January 2018 $90 calls on Apple, long January 2018 $640 calls on Alphabet (C shares), short January 2018 $650 calls on Alphabet (C shares), and long January 2018 $25 calls on Intel. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Apple, Facebook, and Nvidia. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.

Read the original here:

Google's New Chip May Be the Future of AI Systems - The Motley Fool - Motley Fool

Posted in Ai | Comments Off on Google’s New Chip May Be the Future of AI Systems – The Motley Fool – Motley Fool

Artificial Intelligence Comes to Hollywood – Studio Daily

Posted: at 10:07 am

Is Your Job Safe?

Last September, when the 20th Century Fox sci-fi thriller Morgan premiered, artificial intelligence (AI) took center stage for the first time not as a plot point but as a tool. The film studio revealed that it had used IBM's Watson, a supercomputer endowed with AI capabilities, to make the movie's trailer. IBM research scientists taught Watson about horror movie trailers by feeding it 100 such trailers, cut into scenes. Watson then analyzed the data, from the point of view of visuals, audio and emotions, to learn what makes a horror trailer scary. Then the scientists fed in the entire 90-minute Morgan. According to Engadget, Watson instantly zeroed in on 10 scenes totaling six minutes of footage.

The media buzz that followed both overstated and understated what had actually happened. In fact, a human being edited the trailer, using the scenes Watson chose, so AI didn't actually edit the trailer. But it was a benchmark nonetheless, tantalizing the Hollywood creatives (and studio executives) interested in how artificial intelligence might change entertainment.

Philip Hodgetts

The discussion about AI is still a bit premature; when today's products are described, machine learning is a more accurate term. The first person to posit that machines could actually learn was computer gaming pioneer Arthur Samuel, in 1959. Based on pattern recognition and dependent on enough data to train the computer, machine learning is used for any repetitive task. Philip Hodgetts, who founded two companies integrating machine learning, Intelligent Assistance and Lumberjack System, notes that there's a big leap from doing a task really well to a generalized intelligence that can do multiple self-directed tasks. Most experts agree that autonomous cars are the closest we have today to a real-world artificial intelligence.

Machine learning can and does play an important role in a growing number of applications aimed at the media and entertainment business, nearly all of them invisible to the end user. Perhaps the most obvious are the applications aimed at distribution of digital media. Iris.TV, which partners with numerous media companies from Time Warner's Telepictures Productions to Hearst Digital Media, uses machine learning to create what it dubs personalized video programming. The company takes in the target company's digital assets and creates a taxonomy and structure, with the metadata forming the basis of recommendations. The APIs, which integrate with most video players, learn what the user watches, then create a playlist based on those preferences. The results are impressive: The Hollywood Reporter, for example, was able to more than double its video views, from 80 million in October 2016 to 210 million in February 2017.
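
The mechanics of that kind of metadata-driven playlist are easy to picture. The sketch below is a hypothetical Python illustration, not Iris.TV's actual API: it builds a preference profile from the tags of videos a user has already watched and ranks the rest of the catalog by tag overlap.

```python
from collections import Counter

# Hypothetical watch history and catalog; titles and tags are invented.
watch_history = [
    {"title": "Red carpet recap", "tags": ["celebrity", "awards"]},
    {"title": "Trailer breakdown", "tags": ["movies", "trailers"]},
]
catalog = [
    {"title": "Oscars fashion", "tags": ["celebrity", "awards", "fashion"]},
    {"title": "Box office report", "tags": ["movies", "business"]},
    {"title": "Cooking basics", "tags": ["food"]},
]

# Build a preference profile from previously watched tags, then rank the
# catalog by how many preferred tags each candidate video shares.
profile = Counter(tag for video in watch_history for tag in video["tags"])
playlist = sorted(catalog,
                  key=lambda v: sum(profile[t] for t in v["tags"]),
                  reverse=True)
print([v["title"] for v in playlist])
```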

Machine learning also plays an increasingly significant role in video post-production, much more so than in production, which is still a hands-on, very human job. "The production process is dependent on bipedal mobility," notes Hodgetts wryly. "We've motorized cranes and so on, but it'll be harder to replace a runner on set." Even so, the process of creating digital imagery will feel the impact of machine learning in the not-so-distant future. Adobe, for example, is working with the Beckman Institute for Advanced Science and Technology to use a kind of machine learning to teach a software algorithm how to distinguish and eliminate backgrounds. With the goal of automating compositing, the software has been taught to do so via a dataset of 49,300 training images.

Today's machine learning-enhanced tools fall under the umbrella of cognitive services, a term that covers any off-the-shelf programs that have already been trained at a task, whether it's facial recognition or motion detection. At NAB 2017, Finnish company Valossa will debut its Alexa-integrated real-time video recognition platform, Val.ai.

Val.ai is intended to solve the problem of discoverability. "Companies that have lots of media assets and want to monetize them better fall into this category," says Valossa chief executive and founder Mika Rautiainen. "Or they can also re-use archived material for new content. Increasingly, we've found other scenarios emerging in the years we've been creating the service related to content analytics. Deep content understanding correlated with user behavior lets media companies serve contextual advertising and other end-user experiences around media." The Valossa video intelligence engine is in beta at 120 companies, the majority of which are in the U.S. and the U.K.

Rautiainen states that content analytics can also be used to promote and sell items in a video, a capability that Valossa is not developing. "But I was surprised how many companies are working around reinventing retail or the purchasing process," he says. Valossa also has a technology demo for facial-expression recognition, which Rautiainen calls a next-level intelligence, and Valossa Movie Finder, with a database of metadata from 140,000 movies.

Yvonne Thomas

Arvato Systems will debut its next-generation MAM system, Media Portal, at NAB 2017. Yvonne Thomas, the company's product manager for the broadcast solutions division, says Media Portal integrates analytics and machine learning via an API and indexes and updates the respective media. It will also support visualization for the user in the form of facets that can handle a wide range of data.

At Piksel, chief technology officer Mark Christie points out that machine learning capabilities have accelerated dramatically in recent years and, through natural-language processing techniques, they can now enable a deeper understanding of content. In 2016, Piksel acquired Lingospot, with its patented and patent-pending natural language processing, semantic search, image analysis and machine-learning technologies, and integrated it into Piksel's Palette to collect proprietary metadata on a scene-by-scene basis. Its Fuse, which is built on Piksel Palette, enriches metadata with cast and crew lists or other documentation from third-party sources and serves it across broadcast and OTT workflows.

Although the advent of tools enhanced by machine learning is interesting, most people in the entertainment industry want to know how worried they should be about their jobs. Hodgetts has a simple answer. "If you can teach someone your job in three days, it will be automated [via machine learning]," he says.

At the USC School of Cinematic Arts, professor and editor Norman Hollyn has been thinking about the implications of collecting metadata for a long time. In principle, automating what used to be a tedious, labor-intensive job could bring major changes to the role of the assistant editor. Hollyn has a more positive spin on the integration of these new tools.

"About three years ago, I started realizing the value of machine learning and artificial intelligence," he says. "With my background, I knew just how difficult it was for humans to collect data, and I started thinking about how much easier my work would be if database fields could be automatically filled."

He agrees that machine learning will change the job of the assistant editor. "Historically, even back in the 35mm days, the assistant editor was really an incredibly specialized librarian," he says. "It's not a huge difference today. But once machine learning takes over, the librarian work will easily be taken over."

But the results, he thinks, won't be all bad. On some productions, he believes, there will be no assistants. On others, assistants may be involved in such tasks as world-building for cross-platform media or cutting trailers. "When I think about what my students may be doing in five years, it's bad news if they think they want to be assistant editors on a TV job," he says. "But they can play a role in building the world out of which comes movies, TV series, games, VR and comic books. Different people have to organize that world-building and that's not a machine-learning capability yet."

The post-production environment always feels the downward budgetary pressure and probably offers less flexibility for facility owners trying to keep afloat. "AI will be good and bad for people in our industry," says AlphaDogs chief executive Terence Curren. "The level of AI we currently have can already automate many tasks that used to employ people. Automated syncing and grouping of clips is just one example. As AI gets smarter, more jobs will be replaced, but the removal of the human element will also eliminate many mistakes that currently cost time down the pipeline. The bottom line is, if you do something that is repetitive all day, your job will be one of the first to get replaced. If you do something creative, that requires constantly changing approaches, your job will be safe for a long time."

For those worried about the ethical considerations of bringing machine learning and artificial intelligence into the workplace (as well as potentially hundreds of consumer-facing products and services), that's being addressed both by giant technology companies and the IEEE. In September 2016, Google, Facebook, Amazon, IBM and Microsoft formed the Partnership on Artificial Intelligence to Benefit People and Society, to advance public understanding of the technologies and come up with standards. The Partnership says it plans to conduct research, recommend best practices and publish research under an open license in areas such as ethics, fairness and inclusivity; transparency, privacy and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability and robustness of the technology. Apple just joined the group.

Meanwhile, the IEEE and its Standards Association created a new standards project, IEEE P7000, a working group that intends to define a process model by which engineers and technologists can address ethical considerations throughout the various stages of system initiation, analysis and design for big data, machine learning and artificial intelligence.

Machine learning is here, and AI is coming, not just to the entertainment industry but to many others. There will be winners and losers, but the very human talent of creativity, a specialty in the entertainment industry, is safe for the foreseeable future.

Go here to see the original:

Artificial Intelligence Comes to Hollywood - Studio Daily

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence Comes to Hollywood – Studio Daily

How artificial intelligence learns to be racist – Vox – Vox

Posted: at 10:07 am

Open up the photo app on your phone and search "dog," and all the pictures you have of dogs will come up. This was no easy feat. Your phone knows what a dog looks like.

This and other modern-day marvels are the result of machine learning. These are programs that comb through millions of pieces of data and start making correlations and predictions about the world. The appeal of these programs is immense: These machines can use cold, hard data to make decisions that are sometimes more accurate than a human's.

But know this: Machine learning has a dark side. "Many people think machines are not biased," Princeton computer scientist Aylin Caliskan says. "But machines are trained on human data. And humans are biased."

Computers learn how to be racist, sexist, and prejudiced in a similar way that a child does, Caliskan explains: from their creators.

Nearly all new consumer technologies use machine learning in some way. Like Google Translate: No person instructed the software to learn how to translate Greek to French and then to English. It combed through countless reams of text and learned on its own. In other cases, machine learning programs make predictions about which résumés are likely to yield successful job candidates, or how a patient will respond to a particular drug.

Machine learning refers to programs that sift through billions of data points to solve problems (such as "can you identify the animal in the photo?"), but they don't always make clear how they have solved the problem. And it's increasingly clear these programs can develop biases and stereotypes without us noticing.

Last May, ProPublica published an investigation of a machine learning program that courts use to predict who is likely to commit another crime after being booked. The reporters found that the software systematically rated black people at a higher risk than whites.

"Scores like this, known as risk assessments, are increasingly common in courtrooms across the nation," ProPublica explained. "They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts to even more fundamental decisions about defendants' freedom."

The program learned about who is most likely to end up in jail from real-world incarceration data. And historically, the real-world criminal justice system has been unfair to black Americans.

This story reveals a deep irony about machine learning. The appeal of these systems is that they can make impartial decisions, free of human bias. "If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long," ProPublica wrote.

But what happened was that machine learning programs perpetuated our biases on a large scale. So instead of a judge being prejudiced against African Americans, it was a robot.

It's stories like the ProPublica investigation that led Caliskan to research this problem. As a female computer scientist who was routinely the only woman in her graduate school classes, she's sensitive to this subject.

Caliskan has seen bias creep into machine learning in often subtle ways: for instance, in Google Translate.

Turkish, one of her native languages, has no gender pronouns. But when she uses Google Translate on Turkish phrases, it always ends up as "he's a doctor" in a gendered language. The Turkish sentence didn't say whether the doctor was male or female. The computer just assumed if you're talking about a doctor, it's a man.

Recently, Caliskan and colleagues published a paper in Science that finds that as a computer teaches itself English, it becomes prejudiced against black Americans and women.

Basically, they used a common machine learning program to crawl through the internet, look at 840 billion words, and teach itself the definitions of those words. The program accomplishes this by looking for how often certain words appear in the same sentence. Take the word "bottle." The computer begins to understand what the word means by noticing it occurs more frequently alongside the word "container," and also near words that connote liquids, like "water" or "milk."
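
A toy sketch of that co-occurrence idea, in Python, might look like the following. The sentences are made up for illustration, and this is far simpler than the GloVe-style word embeddings the researchers actually worked with.

```python
from collections import Counter
from itertools import combinations

# Hypothetical mini-corpus; real systems crawl billions of words.
sentences = [
    "the bottle is a container",
    "pour water from the bottle",
    "a bottle of milk",
]

# Count how often each pair of words shows up in the same sentence.
cooccurrence = Counter()
for sentence in sentences:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        cooccurrence[(a, b)] += 1

# Words that frequently co-occur with "bottle" hint at what it means.
neighbors = {pair: n for pair, n in cooccurrence.items() if "bottle" in pair}
print(neighbors)
```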

This idea to teach robots English actually comes from cognitive science and its understanding of how children learn language. How frequently two words appear together is the first clue we get to deciphering their meaning.

Once the computer amassed its vocabulary, Caliskan ran it through a version of the implicit association test.

In humans, the IAT is meant to uncover subtle biases in the brain by seeing how long it takes people to associate words. A person might quickly connect the words "male" and "engineer." But if a person lags on associating "woman" and "engineer," it's a demonstration that those two terms are not closely associated in the mind, implying bias. (There are some reliability issues with the IAT in humans, which you can read about here.)

Here, instead of looking at the lag time, Caliskan looked at how closely the computer thought two terms were related. She found that African-American names in the program were less associated with the word "pleasant" than white names were. And female names were more associated with words relating to family than male names were. (In a weird way, the IAT might be better suited for use on computer programs than on humans, because humans answer its questions inconsistently, while a computer will yield the same answer every single time.)
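
In word-embedding terms, "how closely the computer thought two terms were related" is usually measured as the cosine similarity between word vectors; the published method, WEAT, averages such similarities over sets of words. The sketch below uses tiny made-up vectors purely to show the measurement, not the study's actual data.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: values near 1.0 mean the vectors point the same way.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-dimensional word vectors, invented for illustration.
vectors = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "male":     np.array([0.8, 0.2, 0.1]),
    "female":   np.array([0.1, 0.9, 0.3]),
}

# If "engineer" sits closer to "male" than to "female" in vector space,
# the embedding has absorbed that association from its training text.
print(cosine(vectors["engineer"], vectors["male"]))    # higher
print(cosine(vectors["engineer"], vectors["female"]))  # lower
```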

Like a child, a computer builds its vocabulary through how often terms appear together. On the internet, African-American names are more likely to be surrounded by words that connote unpleasantness. That's not because African Americans are unpleasant. It's because people on the internet say awful things. And it leaves an impression on our young AI.

This is as much of a problem as you think.

Increasingly, Caliskan says, job recruiters are relying on machine learning programs to take a first pass at résumés. And if left unchecked, the programs can learn and act upon gender stereotypes in their decision-making.

"Let's say a man is applying for a nurse position; he might be found less fit for that position if the machine is just making its own decisions," she says. "And this might be the same for a woman applying for a software developer or programmer position. Almost all of these programs are not open source, and we're not able to see what's exactly going on. So we have a big responsibility about trying to uncover if they are being unfair or biased."

And that will be a challenge in the future. Already AI is making its way into the health care system, helping doctors find the right course of treatment for their patients. (There's early research on whether it can help predict mental health crises.)

But health data, too, is filled with historical bias. It's long been known that women get surgery at lower rates than men. (One reason is that women, as primary caregivers, have fewer people to take care of them post-surgery.)

Might AI then recommend surgery at a lower rate for women? It's something to watch out for.

Inevitably, machine learning programs are going to encounter historical patterns that reflect racial or gender bias. And it can be hard to draw the line between what is bias and what is just a fact about the world.

Machine learning programs will pick up on the fact that most nurses throughout history have been women. They'll realize most computer programmers are male. "We're not suggesting you should remove this information," Caliskan says. "It might actually break the software completely."

Caliskan thinks there need to be more safeguards. Humans using these programs need to constantly ask, "Why am I getting these results?" and check the output of these programs for bias. They need to think hard about whether the data they are combing reflects historical prejudices. Caliskan admits that the best practices for combating bias in AI are still being worked out. "It requires a long-term research agenda for computer scientists, ethicists, sociologists, and psychologists," she says.

But at the very least, the people who use these programs should be aware of these problems, and not take for granted that a computer can produce a less biased result than a human.

And overall, it's important to remember: AI learns about how the world has been. It picks up on status quo trends. It doesn't know how the world ought to be. That's up to humans to decide.

Original post:

How artificial intelligence learns to be racist - Vox - Vox

Posted in Artificial Intelligence | Comments Off on How artificial intelligence learns to be racist – Vox – Vox

U.S. Companies Raising $1 Billion or More to Fuel Artificial … – GlobeNewswire (press release)

Posted: at 10:07 am

April 18, 2017 07:30 ET | Source: Paysa

PALO ALTO, Calif., April 18, 2017 (GLOBE NEWSWIRE) -- Paysa, the only platform that uses artificial intelligence (AI) to deliver personalized career and hiring recommendations plus real-world salary insights, today announced new findings from a study on the market for artificial intelligence tech talent and associated skills.

According to the study, companies are currently investing more than $650 million in annual salaries to fuel the AI talent race, with more than 10,000 available positions at top employers across the country.

The total annual investment, on average, among the top 20 employers looking to hire AI talent is $33,292,647. Yet Amazon is allocating nearly 600 percent more and Google nearly 300 percent more than this figure, indicating that their future success is heavily dependent upon AI technologies and, consequently, the talent to create them.
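
Taken literally, those multiples imply rough dollar figures: 600 percent more than the $33.3 million average works out to about 7 x $33.3 million, or roughly $233 million a year for Amazon, and 300 percent more to about 4 x $33.3 million, or roughly $133 million for Google, though the release does not state the exact amounts here.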

According to a 2016 Markets and Markets report, the artificial intelligence (AI) market is expected to be worth $16.06 billion by 2022, growing at a CAGR of 62.9% from 2016 to 2022. Several U.S. companies have raised $1 billion or more to fuel artificial intelligence (AI) development.

Companies that are hiring to fill AI jobs are currently seeking tech and engineering talent with deep learning, machine learning, artificial intelligence, computer vision, neural networks and reinforcement learning skills. The top 20 employers who are hiring for these jobs and their current average annual investment allocation, based on average net salary, are as follows.

The findings also show that 40 percent of the open AI positions are available at large enterprises with more than 10,000 employees, 10 percent are available at companies with 1,001 to 10,000 employees and 7 percent are available at companies with 11-50 employees. Companies with 51-100 employees account for 5 percent of the open positions across the market, while just 2 percent of the open AI positions are available at companies with 0-10 employees. Another 25 percent of today's open AI positions were reported as being at companies with employee counts that were unknown at the time of the study.

Where are these jobs? The top 15 U.S. cities where companies are hiring the most tech talent with artificial intelligence skills and expertise include:

Total investment by region, considering average salary and number of available jobs is as follows:

"From powering the self-driving car to guiding the way we shop, manage our finances and even do routine tasks, AI technology plays an increasing role in every aspect of life as we know it, so it's no surprise that investment in this area is growing at a rapid pace and companies are having a hard time keeping up," said Chris Bolte, CEO of Paysa. "This latest research reveals that AI and machine learning skills are in such high demand that companies across the country at every funding stage are weighing the skills and track records of individuals even more heavily than years of experience or their degree as they staff up their development teams."

"The explosion of AI talent needs is a critical event happening right now nationwide -- engineers with the right skills can land a great job at a top-tier company in any region of the U.S.," added Bolte.

Although a plurality of the jobs, 35 percent, require a Ph.D., 26 percent require just a master's degree and 18 percent require only a bachelor's degree. And not all open jobs require a specific degree level, suggesting that for some 21 percent of the jobs, having the right skills is more important than graduating from a specific university or degree program.

The Paysa study also reveals that just five percent of the open jobs are executive level, calling for 10 or more years of experience; 28 percent of the available AI jobs are senior-level positions, requiring five or more years; 27 percent are mid-level, mandating two to five years; and less than two percent are junior level, asking for just one to two years. Another 39 percent of open AI jobs have experience requirements that are unspecified.

Finally, the largest share of the positions (36 percent) is at companies that have been around 20 years or more, while 21 percent of today's open AI jobs can be found at companies that are 10 to 20 years old. Only six percent of the open positions are at start-ups or companies that have been around five years or less.

About Paysa: Paysa offers personalized career and hiring recommendations plus real-world salary insights for maximizing opportunity, earning potential and value at all stages of an individual's career. Using proprietary artificial intelligence technology and machine learning algorithms, Paysa analyzes millions of data points, including jobs, resumes and compensation information, providing professionals with actionable tools, insights, and research. They can then see and understand their individual worth in the market today, and how to increase their value. Paysa also empowers enterprises with the knowledge they need to be competitive in today's fierce tech hiring market. Employers can learn which skills, real-world company experience and educational background offer the greatest predictor of a candidate's or employee's future success at their organization.


Here is the original post:

U.S. Companies Raising $1 Billion or More to Fuel Artificial ... - GlobeNewswire (press release)

Posted in Artificial Intelligence | Comments Off on U.S. Companies Raising $1 Billion or More to Fuel Artificial … – GlobeNewswire (press release)

How Artificial Intelligence Might Transform the Engineering Industry – TrendinTech

Posted: at 10:07 am

Artificial intelligence is all around us. Since 1956, when the field of AI was founded, it's been a subject of public interest. But as great as it is, expectations for AI today are phenomenally high, and designing such systems is not easy. The progress of these systems can be seen in IBM's Watson and Google DeepMind's AlphaGo program, which demonstrate how increasingly powerful computing abilities are fostering AI. There are now several AIs in existence whose abilities exceed those of humans, and that number is continuously increasing. It's a technology that is becoming popular in various industries, including engineering. The following are a few examples of some of the current AI techniques and applications being used and how they might affect the engineering industry.

Machine learning: There are various methods of machine learning in use, but some of the most efficient are those based on the concept of artificial neural networks (ANNs). These are modeled upon the neurons found in the human brain and consist of a network of nodes connected with varying weights. One method that has been used since the beginning of ANN training is the perceptron algorithm. This algorithm teaches a network to sort inputs into one of two classes. It works by feeding in training data, comparing the expected output to the actual node output, and updating the weights based on the difference.
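
As a concrete illustration of that update rule, here is a minimal sketch of a single perceptron in Python, trained on a toy, linearly separable dataset; the data and learning rate are arbitrary choices for the example, not values from the article.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Train a single perceptron: predict, compare against the expected
    label, and nudge the weights in proportion to the error."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if xi.dot(w) + b > 0 else 0
            error = target - prediction   # expected output minus actual output
            w += lr * error * xi          # update weights based on the difference
            b += lr * error
    return w, b

# Toy two-class data: the label is 1 only when both inputs are 1 (logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

w, b = train_perceptron(X, y)
print("weights:", w, "bias:", b)
print("predictions:", [(1 if x.dot(w) + b > 0 else 0) for x in X])
```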

Artificial Intelligence Applications: Machine learning and AI techniques have produced various accomplishments relevant to the engineering industry over the past few years, including Natural Language Processing (NLP), Image Processing, Disease Treatment, Autonomous Vehicles, and Data Structure Technology. The Internet of Things, or IoT, is another engineering achievement that will turn every appliance into a smart one, while big data, the mass collection of data, underpins the key promise of AI analytics.

Overall, AI is certain to bring about some significant changes within the engineering industry, one of which will be the automation of many low-level engineering tasks. However, this may not be as beneficial as it first appears. "Artificial intelligence will render many of the simpler professional tasks redundant, potentially replacing many of the tasks by which our younger engineers and other professionals learn the details of our trade," said Tim Chapman, director of the Arup Infrastructure Group. A study carried out by Stanford University looked into the impact of AI and found that engineering jobs are not roles that are going to be affected much over the next 15 years. And even if some jobs are lost to AI, new ones will open up, as people will still be needed to oversee the running of these systems.


Excerpt from:

How Artificial Intelligence Might Transform the Engineering Industry - TrendinTech

Posted in Artificial Intelligence | Comments Off on How Artificial Intelligence Might Transform the Engineering Industry – TrendinTech

Artificial intelligence is the future of customer experience – Telegraph.co.uk

Posted: at 10:07 am

There's no doubt that customer experience is absolutely essential for brand survival. AI and analytics will increasingly be deployed to support the customer experience, as well as being the principal means to deliver it.

That makes trust and transparency every bit as important as technology in achieving success.

So what are the components of customer experience? Personalisation is one key element. But there's been a tendency to see personalisation in terms of the value and advantage brands accrue from exploiting ever-more granular customer data in real time.

Instead, the focus needs to be on what personalisation means for the consumer.

And some more forward-thinking brands are starting to do just that. They're looking at their relationships with customers and their data in a new way.

Mindful of the need to earn and retain digital trust, these brands are being more open and transparent with consumers. For example, some organisations are enabling their customers to see all the data they hold on them. This allows customers to modify and control how their interactions with the brand happen in the future.

A more open and transparent relationship with the customer and the concept of fair value exchange sit at the heart of the customer experience in the digital era. When this is done properly, consumers are willing to share more because they recognise the value that they receive and have a degree of control over how brand interactions take place.

Many of these interactions are now managed by artificial intelligence, machine learning, chatbots and virtual assistants. As more of the consumer experience of a brand is driven by AI, the emphasis on fair value becomes even more important. And it's a crucial component underpinning the ability to build living brands that adapt and evolve with every consumer interaction. As such, this becomes a powerful potential source of competitive differentiation.

Some of the leading digital businesses are already securing significant advances in their use of AI for everyday dealings with the consumer.

In only a few years, it's likely that most interactions won't require a keyboard. Instead, they will be based on voice, gesture and augmented or virtual-reality interactions. And as screen time declines, the ability to own an interface will become a critical goal and a potential source of disruption.

Of course, using AI interfaces as the primary source of interaction and a key source of data requires striking a balance between offering cool features consumers value and safeguarding against creepy intrusions that turn customers off.

This reinforces the need to give consumers a degree of control that goes beyond simply setting channel preferences to provide a deeper understanding of how and when communications take place. It means that the right time rather than real time becomes the key attribute consumers appreciate and respond to.

So what do organisations need to think about when integrating AI as their spokesperson and first point of contact with the customer?

The right operating model and governance:

Pervasive use of AI to support the customer experience requires a radically different approach to operating models, processes and governance. Entrusting customer data to analytics, machine learning and AI requires the right kind of robust capabilities and controls.

Evolving the data supply chain:

Having AI, machine learning and analytics as the drivers of the customer experience relies on collecting enormous amounts of data. This data can be internal, external, structured and unstructured, drawn from right across the value chain, as well as being augmented from other sources. In addition, overlaid on this are derived data and consumer insight. Making all this work together depends on a sophisticated and evolving data supply chain to feed the AI.

Keeping pace with changes in technology:

The sophistication of analytics, AI and machine learning is increasing exponentially. Techniques are in play today that didnt exist a few months ago. So it is essential to make the right choices regarding technology, and have solutions that can keep pace with a rapid rate of change.

People and machines working in tandem:

Tools and techniques need to be augmented with people. Human intervention and control must support AI and its adoption within these enterprises as it becomes the foundation for the customer experience. It's critical to test, learn and develop technology in ways that keep in step with the lightning-fast pace of change.

Read the original post:

Artificial intelligence is the future of customer experience - Telegraph.co.uk

Posted in Artificial Intelligence | Comments Off on Artificial intelligence is the future of customer experience – Telegraph.co.uk

The Skeptical Zone | "I beseech you, in the bowels of …

Posted: at 10:06 am


An interesting essay in Aeon by neurologist Jules Montague:

Why is the brain prone to florid forms of confabulation?

She had visited Madonna's mansion the week before, Maggie told me during my ward round. Helped her choose outfits for the tour. The only problem was that Maggie was a seamstress in Dublin. She had never met Madonna; she had never provided her with sartorial advice on cone brassieres. Instead, an MRI scan, conducted a few days earlier when Maggie arrived at the ER febrile and agitated, revealed encephalitis, a swelling of the brain.

Now she was confabulating, conveying false memories induced by injury to her brain. Not once did Maggie doubt that she was a seamstress to the stars, no matter how incongruous those stories seemed. And that's the essence of confabulation: the critical faculty of doubt is compromised. These honest lies were Maggie's truth...

The resident professional philosopher of TSZ recently wrote this:

memes! is a dumb explanation.

Yes, I agree! (Although that person doesn't seem to know the difference between memes and memetics. E.g., I don't mind "memes" used for popular shared internet links, but that's not memetics.)

Well, given the weekend's significance for a billion+, let's crucify memetics then. Why is memetics a dumb explanation? And there's no need to hold back with merely "dumb." If one is an ideological naturalist, isn't one forced into something like memetics because they share the same materialist, naturalist, agnostic/atheist worldview as (chuckling at his own supposed lack of self-identity!) Daniel Dennett? Isn't the built-in materialism of memetics what made it so attractive to certain people, and for the same reason obviously not attractive or believable to most others?

Continue reading

From Wired:

But when Stanford University geneticist Jin Billy Li heard about Joshua Rosenthal's work on RNA editing in squid, his jaw dropped. That's because the work, published today in the journal Cell, revealed that many cephalopods present a monumental exception to how living things use the information in DNA to make proteins. In nearly every other animal, RNA, the middleman in that process, faithfully transmits the message in the genes. But octopuses, squid, and cuttlefish (but not their dumber relatives, the nautiluses) edit their RNA, changing the message that gets read out to make proteins.

In exchange for this remarkable adaptation, it appears these squishy, mysterious, and possibly conscious creatures might have given up the ability to evolve relatively quickly. Or, as the researchers put it, "positive selection of editing events slows down genome evolution." More simply, these cephalopods don't evolve quite like other animals. And that could one day lead to useful tools for humans.

From the paper itself:

Continue reading

Easter is approaching, but skeptic John Loftus doesn't believe in the Resurrection of Jesus. What's more, he thinks you're delusional if you do. I happen to believe in the Resurrection, but I freely admit that I might be mistaken. I think Loftus is wrong, and his case against the Resurrection is statistically flawed; however, I don't think he's delusional. In today's post, I'd like to summarize the key issues at stake here, before going on to explain why I think reasonable people might disagree on the weight of the evidence for the Resurrection.

The following quotes convey the tenor of Loftus' views on the evidence for the Resurrection:

Continue reading

I wanted to bring to your attention a lovely profile piece on Dan Dennett, "Daniel Dennett's Science of the Soul." It's nice to see a philosopher as respected and well-known as Dennett come alive as a human being.

I'd also like to remind those of you interested in this sort of thing that Dennett has a new book out, From Bacteria to Bach and Back: The Evolution of Minds. The central project is to do what creationists are always saying can't be done: use the explanatory resources of evolutionary theory to understand why we have the kinds of minds that we do. There are decent reviews here and here, as well as one by Thomas Nagel in the New York Review of Books that I regard as deliberately misleading ("Is Consciousness an Illusion?").

[Note: The profile and/or the Nagel review may be behind paywalls.]

Well, should scientists be legally liable for deceiving the public and manipulating the evidence to support their OWN beliefs based on untrue claims and unsupported by scientific evidence?

In a podcast on the show ID the Future (March 14, 2017), Dr. Ann Gauger criticized a popular argument that purports to show how easy it is to get new proteins: namely, the evolution, over a relatively short 40-year period, of nylonase. (Nylonase is an enzyme that utilizes waste chemicals derived from the manufacture of nylon, a man-made substance that was not invented until 1935.) While Dr. Gauger made some factual observations that were mostly correct, her interpretation of these observations fails to support the claim made by Intelligent Design proponents that the odds of getting a new functional protein fold are astronomically low, and that it's actually very, very hard for new proteins to evolve. Let's call this claim the Hard-to-Get-a-Protein hypothesis (HGP for short).

To help readers see what's wrong with Dr. Gauger's argument, I would like to begin by pointing out that for HGP to be true, two underlying claims also need to be correct:

1. Functional sequences are RARE.
2. New functions are ISOLATED in sequence space.

In her podcast, Dr. Gauger cites the work of Dr. Douglas Axe to support claim #1, when she declares that the odds of getting a new functional protein fold are on the order of 1 in 10^77 (an assertion debunked here). Dr. Gauger says little about claim #2; nevertheless, it is vital to her argument. For even if functional sequences are rare, they may be clustered together, in which case getting from one functional protein to the next won't be so hard, after all.

If claims #1 and #2 are both correct, then getting new functions should not be possible by step-wise changes. Remarkably, however, this is precisely what Dr. Gauger concedes in her podcast, as we'll see below.

Continue reading

Contradictions are rife in the Christian bible. Here at The Skeptical Zone we have recently discussed those surrounding how Saul died. We've also noted the two conflicting accounts of Judas' death and what he did with the thirty pieces of silver. There are dozens more.

The Skeptic's Annotated Bible and The Thinking Atheist are two of several excellent resources on biblical contradictions and absurdities. The sheer volume of contradictions, though, is best demonstrated visually, as is done at BibViz:

The creators of this site started with a cross-index of topics in the bible and pulled out those that contradict each other. You can click on the links to get more detail. As a bonus, the site includes references to the sections in the bible that contain Scientific Absurdities & Historical Inaccuracies, Cruelty & Violence, Misogyny, Violence & Discrimination Against Women, and Discrimination Against Homosexuals.

Obviously most Christians aren't foolish enough to claim their bible is inerrant. Those that do, in the words of Desi Arnaz, have got some 'splainin' to do.

Over at her blog, BackReAction, physicist Sabine Hossenfelder has written a cogently argued article titled "No, we probably don't live in a computer simulation" (March 15, 2017). I'll quote the most relevant excerpts:

According to Nick Bostrom of the Future of Humanity Institute, it is likely that we live in a computer simulation

Among physicists, the simulation hypothesis is not popular and that's for a good reason: we know that it is difficult to find consistent explanations for our observations...

If you try to build the universe from classical bits, you won't get quantum effects, so forget about this; it doesn't work. This might be somebody's universe, maybe, but not ours. You either have to overthrow quantum mechanics (good luck), or you have to use qubits. [Note added for clarity: You might be able to get quantum mechanics from a classical, nonlocal approach, but nobody knows how to get quantum field theory from that.]

Even from qubits, however, nobody's been able to recover the presently accepted fundamental theories: general relativity and the standard model of particle physics...

Indeed, there are good reasons to believe it's not possible. The idea that our universe is discretized clashes with observations because it runs into conflict with special relativity. The effects of violating the symmetries of special relativity aren't necessarily small and have been looked for, and nothing's been found.

Continue reading

I'm pretty sure that most knowledgeable people know that someone who claims to be an atheist is just making an overstatement about his/her own beliefs. As most knowledgeable people who claim to be atheist probably know, even the most recognizable faces of atheistic propaganda, such as Richard Dawkins, have admitted publicly that they are less than 100% certain that God/gods don't exist.

My question is: Why would anyone who calls himself an atheist make a statement like that?

Read more:

The Skeptical Zone | "I beseech you, in the bowels of ...

Posted in Memetics | Comments Off on The Skeptical Zone | "I beseech you, in the bowels of …

The technologist’s stone – The Stanford Daily

Posted: at 10:04 am

A peculiar kind of cognitive dissonance grips most people who talk about death. On one hand, death is awful: It is the most tragic fate that can befall somebody, murderers are the lowest of the low, and the death of a loved one, even an elderly loved one who has lived a long life, clogs us with sadness.

On the other hand, any intimation that we might wish to, I don't know, abolish death is met with deep suspicion. "Everyone's time comes eventually," I have been told. Or: "It'd be unnatural any other way." Even: "But would you really want to live forever?"

Yes, actually. Yes I would. I have wanted to live forever for as long as I can remember. My instinctive response when asked why is, well, why not? Life is a self-evident good to me. Justifying that seems absurd: don't you like happiness? And love? And experiencing things? Don't you like being alive? People's tendency to reply, "Well yes, but..." and trail off, looking vaguely concerned for my mental wellbeing, continues to mystify me.

Like large swathes of secular ethics, I suspect that this hesitancy is, in some sense, a hangover from Christianity. Christians, of course, might reasonably shun the idea of earthly immortality, but the basic impulse underlying Christianity's doctrine of life and death, that one must endure an imperfect and pious life on Earth before rejoicing in the eternity of the empyrean, is the same one that motivates me. I just have less faith that death brings anything other than an ineffable and everlasting nothingness.

Immortality is no longer, however, as niche an aspiration as it was even five, ten years ago. Tad Friend recently published a (highly recommended) piece in The New Yorker that documents the recent anti-aging buzz that has overcome Silicon Valley. Iconoclastic tech entrepreneur and venture capitalist Peter Thiel, ever ahead of the zeitgeist, wrote in 2009 that he "stand[s] against the ideology of the inevitability of the death of every individual."

Since then, a steadily growing number of futurists have become interested in abolishing aging in one form or another. Donald Trump considered appointing Jim O'Neill, a man who considers aging a disease to be overcome, to head the FDA, before, disappointingly, settling on the more establishment, Big Pharma-friendly Scott Gottlieb. Cryonics (freezing one's corpse in the hope that future technology may breathe life into it anew), once dismissed as mere science fiction, has slowly but surely gained popularity among Silicon Valley's elite. Futurist and AI researcher Eliezer Yudkowsky, a man unafraid of polemical positions (he once argued on utilitarian grounds that a single person being tortured for fifty years was preferable to a sufficiently large number of people getting dust specks in their eyes), wrote in a post on the website Less Wrong that "If you don't sign up your kids for cryonics then you are a lousy parent." Thinking about cryonics reminds me of an H.P. Lovecraft line from the fictional text The Necronomicon, an esoteric book filled with secrets so vast in their cosmic implications that readers are sent insane merely by reading it. One of the few lines that Lovecraft reveals from the book goes like so: "That is not dead which can eternal lie, / And with strange aeons even death may die." Strange aeons indeed, but perhaps ones not so far away.

I find this exhilarating. The world, especially outside of Silicon Valley, is starved of the kind of grand projects that can inspire a nation. Something like the space race would be nigh-unthinkable today (just ask Newt Gingrich). Even political projects like the New Deal or the Great Society, whatever you think of their outcomes, had an idealistic flavor to them that neither side of mainstream politics, except arguably parts of Trumpism and Sanders-esque social democracy, is really willing to embrace anymore. The prospect of seizing a truly fundamental part of human destiny, the inevitability of death, and forging it into a shape that befits our will is intoxicating in its grandiosity.

I think that one day the idea that death was so readily embraced, and that there was resistance against a project to eliminate it, will be incomprehensible to people. Life, and as much life as possible, will simply be taken for granted as a wonderful thing. Perhaps that's naive of me.

Tell you what, if I'm still wrong in a thousand years, I'll write an apology column.

Contact Sam Wolfe at swolfe2 at stanford.edu.

Original post:

The technologist's stone - The Stanford Daily

Posted in Cryonics | Comments Off on The technologist’s stone – The Stanford Daily

Integrated Health: Combining conventional healthcare with alternative medicine – European Pharmaceutical Review (blog)

Posted: at 10:03 am


Functional Medicine is an emerging specialty which considers dysfunction of cellular physiology and biochemistry as the cause of chronic conditions and aims to restore function. Patients are more frequently turning towards this form of medicine, as they recognise that much orthodox prescribing is based on placating symptoms with little focus on cure or treatment of the underlying cause.

When any such therapy is offered by conventionally trained doctors, who may also concurrently prescribe orthodox medicine, the term Integrated (or Integrative) Medicine (IM) is now used.

IM is a mixture of conventional medicine with Complementary and Alternative Medicine (CAM).

In 2010, a study by Hunt KJ et al. showed data from 7,630 respondents in the UK.

There are many reasons for the popularity of CAM therapies, and it is not just the public who seem to be showing an interest. A Californian study in 2015 showed that 75% of 1,770 USA medical students think it would be beneficial for conventional Western medicine to integrate with complementary and alternative medicine (CAM).1

As global health systems feel the pressure of increasing costs, combining some Integrated Medicine into national health care seems logical and has been proven viable. The budget for the NHS in England for 2016/17 is £120 billion. This is forecast to rise by nearly £35 billion in cash terms, an increase of 35%, by 2021. Treating people with chronic diseases may account for 86% of our nation's health care costs, based on USA figures.2 Arguably this makes the cost of care, using the current model, economically unsustainable. We need to find ways of changing this slide to affordability.

There are a number of studies suggesting that CAM may reduce medical expenditure and costs,3 but others, based on the current paradigm of orthodox medicine, do not.

In 2008 the UK annual spend on alternative health treatments was £4.5 billion, a market that has grown by nearly 50% in the last decade.4 This increasing spend would be surprising if people were not actually benefiting, and it may reduce the current increase in expenditure if it keeps patients away from General Practice and hospitals.

Doctors and academics see benefit in better understanding CAM use by their patients and establishing what is and isn't working,5 yet there continue to be concerted attacks on CAM, with authorities not caring to take a balanced view of the evidence and calling it a waste of resources. Unfortunately, lack of finances means a broad defence has yet to be established and studies struggle to be funded.

Most detractors of IM will argue there is a lack of published evidence to prove the efficacy of CAM and it is generally agreed that too few studies on CAM/IM are initiated and concluded. This is a financial issue as complementary practitioners and centres do not have the necessary funds to publish large studies.

Yet there is an astonishing amount of peer-reviewed, published scientific evidence behind a myriad of naturopathic therapies, but many studies are small and their outcomes are not repeated frequently due to funding issues. We must not take a lack of evidence to imply a lack of efficacy.

It is a sad scenario that despite peer-reviewed and published papers calling for UK curriculum coordinators to improve CAM teaching, there is little movement within medical schools to do so.

Study design is an issue. Much of IM practice is about dealing with an individual rather than his or her disease process. A disease may have many different causes and so one specific treatment may not suit all cases. Double-blind, placebo-controlled research is, therefore, unsuitable for many types of CAM.

But this should not be the only way to evaluate a therapy; after all, there are no double-blind, placebo-controlled studies in major surgery. "To Cut Is To Cure" is based on theory, then trial and error. Unfortunately, CAM seems not to be allowed that due process despite it being far less dangerous to implement.

Therapy involving an acupuncturist or osteopath treating a similar number of individuals, as might be found on a pharmaceutical trial, may take a variable and considerable length of time depending on the set-up of the trial, the therapy and the variation within the individual patients. It is hard to govern such studies because some patients may respond swiftly whilst others will take much longer. And, of course, placebo is a difficult concept to administer with hands-on therapies.

Many herbal treatments, with hundreds of years of evidence through anecdotal and practitioner observation, have been discarded or made illegal for having insufficient evidence, often because there has not been the finance available to put them through a typical modern day drug trial.

That said, I believe that placebo has its place in healing. Remember, most drugs are tested against placebo, and the placebo nearly always shows some benefit, sometimes more so than the drug being tested. There is a huge body of evidence supporting placebo. Perhaps placebo works better when a doctor has time to show deep interest and concern. The relationship between practitioner and patient must focus on the whole person. This is not possible when a patient is advised to bring only one symptom to a 10-minute consultation and all too frequently is seen by different doctors. Arguably, if CAM were only to be working through placebo, it should automatically be considered a mainstay of conventional therapy.

IM does not reject conventional prescribing and should not be confused with CAM that might be antagonistic. We must also not automatically accept alternative therapy uncritically, but remove the prefixes such as orthodox, complementary, functional, etc., and simply focus on offering the Medicine.

Medical training, without doubt, allows practitioners to scrutinise evidence. Most GPs, spending up to 40% of their time on administration, rarely have time to study and consider therapies outside of major general practice journals, which rarely carry an integrated commentary. Whilst this situation exists, we will continue to have doctors without interest or knowledge in alternatives, and we will continue to have complementary medical practitioners without the safety net of medical training.

The Integrated Doctor is at least overcoming that obstacle.

Dr Rajendra Sharma is the author of the award-winning Live Longer, Live Younger (Watkins Publishers). He practises Integrated Medicine in Wimpole Street, London and in Exeter, Devon.

Originally posted here:

Integrated Health: Combining conventional healthcare with alternative medicine - European Pharmaceutical Review (blog)

Posted in Alternative Medicine | Comments Off on Integrated Health: Combining conventional healthcare with alternative medicine – European Pharmaceutical Review (blog)