The AFI FEST Interview: Wevr’s James Kaelan on Virtual Reality Storytelling – American Film Magazine (blog)

Each year, AFI FEST presented by Audi highlights cutting-edge virtual reality (VR) storytelling with the State of the Art Technology Showcase. AFI spoke with James Kaelan, current Director of Development + Acquisitions at VR creative studio and production company Wevr, about his work in VR and the future of the medium. Formerly Creative Director at Seed&Spark, Kaelan brought his immersive short-film horror experience THE VISITOR to AFI FEST last year for the Showcase.

AFI: What got you interested in creating VR work in the first place?

JK: I'm as surprised as anyone to find myself working in VR. I've always considered myself something of a Luddite: skeptical, generally, of the advance of technology. But back at the end of 2014, Anthony Batt, who's a co-founder of Wevr, was advising at Seed&Spark (which I helped co-found), and invited our team to visit their offices and watch some of the preliminary 360 video and CGI work they were producing. I remember sitting in the conference room and putting on the prototype of the Samsung Gear VR, and being immediately shocked by the potential of the technology. This wasn't some shiny new feature grafted onto cinema like 3D or a rumble pack in your theater chair. This was a new medium, requiring a brand new language.

AFI: What misconceptions do you think are out there among audiences when they first encounter VR work?

JK: I think audiences, rightfully, expect a lot from the medium. Most people who've had any direct contact with the very broad array of experiences that we broadly group together as VR have still only seen monoscopic 360 video, either on a Google Cardboard or a Gear. And with such work, after you've gotten over the initial thrill of discovering that you can look around, essentially, the inside of a sphere, your expectations accelerate. Two years ago we were still at the Lumière brothers stage of VR. Workers leaving a factory? Awesome. Train pulling into a station? Super awesome. But unlike with cinema in its early years, the audience for VR has extremely high expectations about narrative complexity and image fidelity gleaned from the last 130 years of film. They won't tolerate inferior quality for very long. So those of us on the creative and technical side of the medium have to find a way to meet those assumptions. Some creators, in a rush to find a viable language in VR, have resorted to jamming it into the paradigm of framed storytelling, force-mediating the viewer's perspective through edits, and teaching the audience to remain passive. And I don't want to dismiss those techniques out of hand. But I think it's our job to actually forget the rules we apply to other media, and continue striving to invent a brand new way of telling stories. When we begin to master that new language, audiences will come in droves.

AFI: What's the biggest challenge documentary filmmakers encounter when creating something for the VR space?

JK: I would actually say that documentary filmmakers are better equipped, naturally, to transition into VR, or at least the 360 video element of it. And I say this because, without painting nonfiction storytellers with too broad a brush (and without sinking into the mire of the objectivity versus subjectivity debate), documentary filmmakers engage with existing subjects, rather than inventing new ones from scratch. Certainly when you look to the vérité side of documentary film, where the goal is observation rather than participation or investigation, 360 should feel quite natural to those artists because it's actually closer (I say with great trepidation) to a purer strain of objectivity: because you've gotten rid of the frame. You've chosen where to place the camera and when, but you're capturing the entirety of the environment simultaneously. Fiction filmmakers are probably less likely to encounter or invent story-worlds that unfold in both halves of the sphere simultaneously. All of that is to say, I literally wish I'd spent more time making long-take docs before moving into VR!

AFI: What types of artists are you looking to work with at Wevr?

JK: Wevr is in this unique place where we've made a name for ourselves making some of the most phenomenal, intricate, interactive, CG, room-scale VR, like theBlu and Gnomes & Goblins, while simultaneously making, and being recognized on the international film festival circuit for, 360 monoscopic video work that has cost less than $10,000 to produce. So I don't want to pigeonhole Wevr. We make simulations with Jon Favreau on one end, and on the other, we work with college students who are interning with us during the summer. What unites those two groups is that both maximize, or exceed, what's possible within the constraints of their given budgets. Within reason, you give any artist enough time and money and she'll make something incredible. More impressive, and more attractive to us, is the artist who can innovate in times of scarcity and abundance. At this moment in the history of VR, if you can tell stories dynamically without having to hire a team of engineers to execute your vision, you'll get more work done. You'll actually get to practice your craft. Later you can have a team of 100, and a budget of a million times that.

AFI: What's a common mistake you see new artists making when they first start creating work for the VR space?

JK: Artists working in VR try to replicate what's already familiar to them. And ironically, it's the filmmakers who have the toughest time transitioning, myself included. We miss the frame. We miss the authorial hand that mediates perspective and attention. We miss the freedom to juxtapose through editing. And because we miss those things, our first inclination is to figure out how to port them into VR. The best, and least possible, approach is to forget everything you know, like Pierre Menard trying to write the Quixote. Whereas artists from theater, from the gallery and museum installation world, come to VR almost naturally. They think about physical navigation and multi-sensory experience. They think about how things feel to the touch. They think about how things smell. They think about how the viewer moves, most importantly. That's an invaluable perspective to have at this still-early stage in VR.

AFI: What was your experience like showcasing VR work at AFI FEST?

JK: For me and for my collaborators on the project, Blessing Yen and Eve Cohen, showing THE VISITOR at AFI FEST last year was an honor. In order to earn a living while being a filmmaker, I've done a lot of different jobs. In the beginning I bussed tables. Later I got to write about film for a living. Now I get to create, and help others create, VR. But during that entire time, from clearing dishes at Mohawk Bend in Echo Park six years ago to working at Wevr now, AFI FEST has been the same: a free festival, stocked with the most discerning slate of films (and now VR) from around the world. And I've gone every year since I've lived in LA. So, it meant a lot to me to be included last year. On top of that, the presentation of the VR experiences themselves, spread around multiple dedicated spaces that never felt oppressively crowded or loud, made AFI one of my favorite stops on the circuit last year.

Interactive and virtual reality entries for AFI FEST 2017 presented by Audi are now being accepted for the State of the Art Technology Showcase, which highlights one-of-a-kind projects and events at the intersection of technology, cinema and innovation. The deadline to submit your projects is August 31, 2017. Submit today at AFI.com/AFIFEST or Withoutabox.com.

Go here to see the original:

The AFI FEST Interview: Wevr's James Kaelan on Virtual Reality Storytelling - American Film Magazine (blog)

Rise of the racist robots how AI is learning all our worst impulses … – The Guardian

Current laws largely fail to address discrimination when it comes to big data. Photograph: artpartner-images/Getty Images

In May last year, a stunning report claimed that a computer program used by a US court for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend, wrongly flagging them at almost twice the rate of white people (45% to 24%), according to the investigative journalism organisation ProPublica.

Compas and programs similar to it were in use in hundreds of courts across the US, potentially informing the decisions of judges and other officials. The message seemed clear: the US justice system, reviled for its racial bias, had turned to technology for help, only to find that the algorithms had a racial bias too.

How could this have happened? The private company that supplies the software, Northpointe, disputed the conclusions of the report, but declined to reveal the inner workings of the program, which it considers commercially sensitive. The accusation gave frightening substance to a worry that has been brewing among activists and computer scientists for years, and which the tech giants Google and Microsoft have recently taken steps to investigate: that as our computational tools have become more advanced, they have become more opaque. The data they rely on (arrest records, postcodes, social affiliations, income) can reflect, and further ingrain, human prejudice.

The promise of machine learning and other programs that work with big data (often under the umbrella term artificial intelligence or AI) was that the more information we feed these sophisticated computer algorithms, the better they perform. Last year, according to global management consultant McKinsey, tech companies spent somewhere between $20bn and $30bn on AI, mostly in research and development. Investors are making a big bet that AI will sift through the vast amounts of information produced by our society and find patterns that will help us be more efficient, wealthier and happier.

It has led to a decade-long AI arms race in which the UK government is offering six-figure salaries to computer scientists. They hope to use machine learning to, among other things, help unemployed people find jobs, predict the performance of pension funds and sort through revenue and customs casework. It has become a kind of received wisdom that these programs will touch every aspect of our lives. ("It's impossible to know how widely adopted AI is now, but I do know we can't go back," one computer scientist says.)

But, while some of the most prominent voices in the industry are concerned with the far-off future apocalyptic potential of AI, there is less attention paid to the more immediate problem of how we prevent these programs from amplifying the inequalities of our past and affecting the most vulnerable members of our society. When the data we feed the machines reflects the history of our own unequal society, we are, in effect, asking the program to learn our own biases.

"If you're not careful, you risk automating the exact same biases these programs are supposed to eliminate," says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was learning from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is especially nefarious because police can say: "We're not being biased, we're just doing what the math tells us." And the public perception might be that the algorithms are impartial.

We have already seen glimpses of what might be on the horizon. Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches, and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.

These small-scale incidents were all quickly fixed by the companies involved and have generally been written off as gaffes. But the Compas revelation and Lum's study hint at a much bigger problem, demonstrating how programs could replicate the sort of large-scale systemic biases that people have spent decades campaigning to educate or legislate away.

Computers don't become biased on their own. They need to learn that from us. For years, the vanguard of computer science has been working on machine learning, often having programs learn in a similar way to humans: observing the world (or at least the world we show them) and identifying patterns. In 2012, Google researchers fed their computer brain millions of images from YouTube videos to see what it could recognise. It responded with blurry black-and-white outlines of human and cat faces. The program was never given a definition of a human face or a cat; it had observed and learned two of our favourite subjects.

This sort of approach has allowed computers to perform tasks such as language translation, recognising faces or recommending films in your Netflix queue that just a decade ago would have been considered too complex to automate. But as the algorithms learn and adapt from their original coding, they become more opaque and less predictable. It can soon become difficult to understand exactly how the complex interaction of algorithms generated a problematic result. And, even if we could, private companies are disinclined to reveal the commercially sensitive inner workings of their algorithms (as was the case with Northpointe).

Less difficult is predicting where problems can arise. Take Google's face recognition program: cats are uncontroversial, but what if it was to learn what British and American people think a CEO looks like? The results would likely resemble the near-identical portraits of older white men that line any bank or corporate lobby. And the program wouldn't be inaccurate: only 7% of FTSE CEOs are women. Even fewer, just 3%, have a BME background. When computers learn from us, they can learn our less appealing attributes.

Joanna Bryson, a researcher at the University of Bath, studied a program designed to learn relationships between words. It trained on millions of pages of text from the internet and began clustering female names and pronouns with jobs such as receptionist and nurse. Bryson says she was astonished by how closely the results mirrored the real-world gender breakdown of those jobs in US government data, a nearly 90% correlation.

"People expected AI to be unbiased; that's just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things," Bryson says.
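
Bryson's finding rests on a simple measurement: in a learned embedding space, check whether an occupation word sits closer to female words than to male ones. Below is a minimal sketch of that kind of measurement using cosine similarity; the tiny four-dimensional vectors and the word list are invented for illustration and merely stand in for the high-dimensional embeddings trained on web text that such studies actually use.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: 1.0 means the two vectors point in the same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Tiny made-up "embeddings" standing in for vectors learned from web text.
vectors = {
    "she":          np.array([0.9, 0.1, 0.3, 0.0]),
    "he":           np.array([0.1, 0.9, 0.3, 0.0]),
    "nurse":        np.array([0.8, 0.2, 0.5, 0.1]),
    "receptionist": np.array([0.7, 0.2, 0.4, 0.2]),
    "engineer":     np.array([0.2, 0.8, 0.5, 0.1]),
}

# Positive score: the occupation sits closer to "she" than to "he" in the space.
for word in ("nurse", "receptionist", "engineer"):
    bias = cosine(vectors[word], vectors["she"]) - cosine(vectors[word], vectors["he"])
    print(f"{word:>12}: {bias:+.3f}")
```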

So who stands to lose out the most? Cathy O'Neil, the author of the book Weapons of Math Destruction, about the dangerous consequences of outsourcing decisions to computers, says it's generally the most vulnerable in society who are exposed to evaluation by automated systems. A rich person is unlikely to have their job application screened by a computer, or their loan request evaluated by anyone other than a bank executive. In the justice system, the thousands of defendants with no money for a lawyer or other counsel would be the most likely candidates for automated evaluation.

In London, Hackney council has recently been working with a private company to apply AI to data, including government health and debt records, to help predict which families have children at risk of ending up in statutory care. Other councils have reportedly looked into similar programs.

In her 2016 paper, HRDAG's Kristian Lum demonstrated who would be affected if a program designed to increase the efficiency of policing was let loose on biased data. Lum and her co-author took PredPol (the program that suggests the likely location of future crimes based on recent crime and arrest statistics) and fed it historical drug-crime data from the city of Oakland's police department. PredPol showed a daily map of likely crime hotspots that police could deploy to, based on information about where police had previously made arrests. The program was suggesting majority black neighbourhoods at about twice the rate of white ones, despite the fact that when the statisticians modelled the city's likely overall drug use, based on national statistics, it was much more evenly distributed.

As if that wasn't bad enough, the researchers also simulated what would happen if police had acted directly on PredPol's hotspots every day and increased their arrests accordingly: the program entered a feedback loop, predicting more and more crime in the neighbourhoods that police visited most. That caused still more police to be sent in. It was a virtual mirror of the real-world criticisms of initiatives such as New York City's controversial stop-and-frisk policy. By over-targeting residents with a particular characteristic, police arrested them at an inflated rate, which then justified further policing.
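
The feedback loop Lum and her co-author describe can be reproduced with a toy simulation: allocate patrols in proportion to past recorded arrests, let patrol presence determine how much of the (identical) underlying crime gets recorded, and repeat. The sketch below illustrates that dynamic only; it is not PredPol's actual model, and the two neighbourhoods, equal true crime rates and initial arrest skew are assumptions made for the example.

```python
import random

random.seed(0)

# Two neighbourhoods with the SAME underlying crime rate; "A" starts with more
# recorded arrests only because it was historically patrolled more heavily.
true_crime_rate = {"A": 0.10, "B": 0.10}
recorded_arrests = {"A": 60, "B": 30}
PATROLS_PER_DAY = 10

for day in range(30):
    total = sum(recorded_arrests.values())
    # "Predictive" allocation: patrols follow past recorded arrests.
    allocation = {hood: round(PATROLS_PER_DAY * n / total)
                  for hood, n in recorded_arrests.items()}
    for hood, patrols in allocation.items():
        # More patrols in an area means more of its (identical) crime gets recorded.
        new_arrests = sum(random.random() < true_crime_rate[hood] for _ in range(patrols))
        recorded_arrests[hood] += new_arrests

print(recorded_arrests)  # the initial skew widens even though the true rates are equal
```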

PredPol's co-developer, Prof Jeff Brantingham, acknowledged the concerns when asked by the Washington Post. He claimed that, to combat bias, drug arrests and other offences that rely on the discretion of officers were not used with the software, because they are often more heavily enforced in poor and minority communities.

And while most of us don't understand the complex code within programs such as PredPol, Hamid Khan, an organiser with Stop LAPD Spying Coalition, a community group addressing police surveillance in Los Angeles, says that people do recognise predictive policing as another top-down approach where policing remains the same: pathologising whole communities.

There is a saying in computer science, something close to an informal law: garbage in, garbage out. It means that programs are not magic. If you give them flawed information, they won't fix the flaws; they just process the information. Khan has his own truism: "It's racism in, racism out."

It's unclear how existing laws to protect against discrimination and to regulate algorithmic decision-making apply in this new landscape. Often the technology moves faster than governments can address its effects. In 2016, the Cornell University professor and former Microsoft researcher Solon Barocas claimed that current laws largely fail to address discrimination when it comes to big data and machine learning. Barocas says that many traditional players in civil rights, including the American Civil Liberties Union (ACLU), are taking the issue on in areas such as housing or hiring practices. Sinyangwe recently worked with the ACLU to try to pass city-level policies requiring police to disclose any technology they adopt, including AI.

But the process is complicated by the fact that public institutions adopt technology sold by private companies, whose inner workings may not be transparent. "We don't want to deputise these companies to regulate themselves," says Barocas.

In the UK, there are some existing protections. Government services and companies must disclose if a decision has been entirely outsourced to a computer, and, if so, that decision can be challenged. But Sandra Wachter, a law scholar at the Alan Turing Institute at Oxford University, says that the existing laws don't map perfectly to the way technology has advanced. There are a variety of loopholes that could allow the undisclosed use of algorithms. She has called for a "right to explanation", which would require a full disclosure as well as a higher degree of transparency for any use of these programs.

The scientific literature on the topic now reflects a debate on the nature of fairness itself, and researchers are working on everything from ways to strip unfair classifiers from decades of historical data, to modifying algorithms to skirt round any groups protected by existing anti-discrimination laws. One researcher at the Turing Institute told me the problem was so difficult because changing the variables can introduce new bias, and sometimes we're not even sure how bias affects the data, or even where it is.

The institute has developed a program that tests a series of counterfactual propositions to track what affects algorithmic decisions: would the result be the same if the person was white, or older, or lived elsewhere? But there are some who consider it an impossible task to integrate the various definitions of fairness adopted by society and computer scientists, and still retain a functional program.
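
The counterfactual test is easy to state in code: hold every input fixed, swap only the protected attribute, and see whether the decision changes. The sketch below is a generic illustration of that idea with an invented decision function and made-up fields; it is not the Turing Institute's actual program.

```python
def counterfactual_check(decide, person, protected_field, alternative_values):
    """Compare a decision function's output for a person against copies of
    that person that differ only in one protected attribute."""
    results = {person[protected_field]: decide(person)}
    for value in alternative_values:
        variant = dict(person)              # everything else held fixed
        variant[protected_field] = value
        results[value] = decide(variant)
    return results

# Hypothetical usage with a stand-in risk model (invented for this example):
def toy_risk_model(p):
    return "high risk" if p["prior_arrests"] >= 2 else "low risk"

person = {"prior_arrests": 1, "age": 34, "postcode": "E8", "ethnicity": "black"}
print(counterfactual_check(toy_risk_model, person, "ethnicity", ["white", "asian"]))
# Identical outputs across the variants suggest the decision did not hinge on ethnicity.
```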

"In many ways, we're seeing a response to the naive optimism of the earlier days," Barocas says. "Just two or three years ago you had articles credulously claiming: 'Isn't this great? These things are going to eliminate bias from hiring decisions and everything else.'"

Meanwhile, computer scientists face an unfamiliar challenge: their work necessarily looks to the future, but in embracing machines that learn, they find themselves tied to our age-old problems of the past.

Follow the Guardian's Inequality Project on Twitter here, or email us at inequality.project@theguardian.com

See the original post:

Rise of the racist robots how AI is learning all our worst impulses ... - The Guardian

Teenage team develops AI system to screen for diabetic retinopathy – MobiHealthNews

Kavya Kopparapu might be considered something of a whiz kid. After all, she had yet to enter her senior year of high school when she started Eyeagnosis, a smartphone app and 3D-printed lens that allows patients to be screened for diabetic retinopathy with a quick photo, avoiding the time and expense of a typical diagnostic procedure.

In June 2016, Kopparapu's grandfather had recently been diagnosed with diabetic retinopathy, a complication of diabetes that damages retinal blood vessels and can eventually cause blindness. He caught the symptoms in time to receive treatment, but it was close. A little too close for Kopparapu's comfort.

According to IEEE Spectrum, Kopparapu, her 15-year-old brother Neeyanth and her classmate Justin Zhang trained an artificial intelligence system to scan photos of eyes and detect, and diagnose, signs of diabetic retinopathy. She unveiled the technology at the O'Reilly Artificial Intelligence conference in New York City in July.

After diving into internet-based research and emailing ophthalmologists, biochemists, epidemiologists, neuroscientists and the like, she and her team worked on the diagnostic AI using a machine-learning architecture called a convolutional neural network. CNNs, as they're called, parse through vast data sets -- like photos -- to look for patterns of similarity, and to date have shown an aptitude for classifying images. The network itself was the ResNet-50, developed by Microsoft. But to train it to make retinal diagnoses, Kopparapu had to feed it images from the National Institutes of Health's EyeGene database, which essentially taught the architecture how to spot signs of retinal degeneration.

One hospital has already tested the technology, fitting a 3D-printed lens onto a smartphone and training the phone's flash to illuminate the retinas of five different patients. Tested against ophthalmologists, the system went five for five on diagnoses. Kopparapu's invention still needs lots of tests and additional data to prove its efficacy before it sees widespread clinical adoption, but so far, it's off to a pretty good start.

Eyeagnosis is operating in a space that's recently become interesting to some very large companies. Last fall, a team of Google researchers published a paper in the Journal of the American Medical Association showing that Google's deep learning algorithm, trained on a large data set of fundus images, can detect diabetic retinopathy with better than 90 percent accuracy. That algorithm was then tested on 9,963 deidentified images retrospectively obtained from EyePACS in the United States, as well as three eye hospitals in India. A second, publicly available research data set of 1,748 images was also used. The accuracy was determined by comparing its diagnoses to those made by a panel of at least seven U.S. board-certified ophthalmologists. The two data sets had 97.5 percent and 96.1 percent sensitivity, and 93.4 percent and 93.9 percent specificity, respectively.
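
Both Eyeagnosis and the Google study follow the same broad recipe: take a convolutional network pretrained on everyday photographs and retrain its final layer on labelled fundus images. A minimal PyTorch sketch of that recipe follows; the dataset folder, the five severity grades and every training setting here are assumptions made for illustration, not details of either project.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Start from a ResNet-50 pretrained on ImageNet and swap its final layer for a
# 5-way head covering retinopathy grades (0 = none ... 4 = proliferative).
model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 5)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of labelled fundus photos, one subdirectory per grade.
train_set = datasets.ImageFolder("fundus_images/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # retrain only the new head

model.train()
for images, labels in loader:          # one pass; real training runs many epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```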

And Google isn't the only player in that space. IBM has a technology utilizing a mix of deep learning, convolutional neural networks and visual analytics technology based on 35,000 images accessed via EyePACS; in research conducted earlier this year, the technology learned to identify lesions and other markers of damage to the retina's blood vessels, collectively assessing the presence and severity of disease. In just 20 seconds, the method was successful in classifying diabetic retinopathy severity with 86 percent accuracy, suggesting doctors and clinicians could use the technology to have a better idea of how the disease progresses as well as identify effective treatment methods.

Lower-tech options are also taking a stab at improving access to screenings. Using a mix of in-office visits, telemedicine and web-based screening software, the Los Angeles Department of Health Services has been able to greatly expand the number of patients in its safety net hospital who got screenings and referrals. In an article published in the journal JAMA Internal Medicine, researchers describe how the two-year collaboration using Safety Net Connect's eConsult platform resulted in more screenings, shorter wait times and fewer in-person specialty care visits. By deploying Safety Net Connect's eConsult system to a group of 21,222 patients, the wait times for screens decreased by almost 90 percent, and overall screening rates for diabetic retinopathy increased 16 percent. The digital program also eliminated the need for 14,000 visits to specialty care professionals.

Originally posted here:

Teenage team develops AI system to screen for diabetic retinopathy - MobiHealthNews

Advancing AI by Understanding How AI Systems and Humans Interact – Windows IT Pro

Artificial intelligence as a technology is rapidly growing, but much is still being learned about how AI and autonomous systems make decisions based on the information they collect and process.

To better explain those relationships so humans and autonomous systems can better understand each other and collaborate more deeply, researchers at PARC, the Palo Alto Research Center, have been awarded a multi-million dollar federal government contract to create an "interactive sense-making system" that could answer many related questions.

The research for the proposed system, called COGLE (Common Ground Learning and Explanation), is being funded by the Defense Advanced Research Projects Agency (DARPA), using an autonomous Unmanned Aircraft System (UAS) test bed, but would later be applicable to a variety of autonomous systems.

The idea is that since autonomous systems are becoming more widely used, it would behoove humans who are using them to understand how the systems behave based on the information they are provided, Mark Stefik, a PARC research fellow who runs the lab's human machine collaboration research group, told ITPro.

"Machine learning is becoming increasing important," said Stefik. "As a consequence, if we are building systems that are autonomous, we'd like to know what decisions they will make. There is no established technique to do that today with systems that learn for themselves."

In the field of human psychology, there is an established history about how people form assumptions about things based on their experiences, but since machines aren't human, their behaviors can vary, sometimes with results that can be harmful to humans, said Stefik.

In one moment, an autonomous machine can do something smart or helpful, but then the next moment it can do something that is "completely wonky, which makes things unpredictable," he said. For example, a GPS system seeking the shortest distance between two points could erroneously and catastrophically send a user driving over a cliff or the wrong way onto a one-way street. Being able to delve into those autonomous "thinking" processes to understand them is the key to this research, said Stefik.

The COGLE research will help researchers pursue answers to these issues, he said. "We're insisting that the program be explainable," for the autonomous systems to say why they are doing what they are doing. "Machine learning so far has not really been designed to explain what it is doing."

The researchers involved with the project will essentially serve as educators and teachers for the machine learning processes, to improve their operations and make them more usable and even more human-like, said Stefik. "It's a sort of partnership where humans and machines can learn from each other."

That can be accomplished in three ways, he added, including reinforcement at the bottom level, using reasoning patterns like the ones humans use at the cognitive or middle level, and through explanation at the top sense-making level. The research aims to enable people to test, understand, and gain trust in AI systems as they continue to be integrated into our lives in more ways.

The research project is being conducted under DARPA's Explainable Artificial Intelligence (XAI) program, which seeks to create a suite of machine learning techniques that produce explainable models and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.

PARC, which is a Xerox company, is conducting the COGLE work with researchers at Carnegie Mellon University, West Point, the University of Michigan, the University of Edinburgh and the Florida Institute for Human & Machine Cognition. The key idea behind COGLE is to establish common ground between concepts and abstractions used by humans and the capabilities learned by a machine. These learned representations would then be exposed to humans using COGLE's rich sense-making interface, enabling people to understand and predict the behavior of an autonomous system.

Read the rest here:

Advancing AI by Understanding How AI Systems and Humans Interact - Windows IT Pro

True AI cannot be developed until the ‘brain code’ has been cracked: Starmind – ZDNet

Marc Vontobel, CTO & Pascal Kaufmann, CEO, Starmind

Artificial intelligence is stuck today because companies are likening the human brain to a computer, according to Swiss neuroscientist and co-founder of Starmind Pascal Kaufmann. However, the brain does not process information, retrieve knowledge, or store memories like a computer does.

When companies claim to be using AI to power "the next generation" of their products, what they are unknowingly referring to is the intersection of big data, analytics, and automation, Kaufmann told ZDNet.

"Today, so called AI is often just the human intelligence of programmers condensed into source code," said Kaufmann, who worked on cyborgs previously at DARPA.

"We shouldn't need 300 million pictures of cats to be able to say whether something is a cat, cow, or dog. Intelligence is not related to big data; it's related to small data. If you can look at a cat, extract the principles of a cat like children do, then forever understand what a cat is, that's intelligence."

He even said that it's not "true AI" that led to AlphaGo -- a creation of Google subsidiary DeepMind -- mastering what is revered as the world's most demanding strategy game, Go.

The technology behind AlphaGo was able to look at 10 to 20 potential future moves and lay out the highest statistics for success, Kaufmann said, and so the test was one of rule-based strategy rather than artificial intelligence.

The ability for a machine to strategise outside the context of a rule-based game would reflect true AI, according to Kaufmann, who believes that AI will cheat without being programmed not to do so.

Additionally, the ability to automate human behaviour or labour is not necessarily a reflection of machines getting smarter, Kaufmann insisted.

"Take a pump, for example. Instead of collecting water from the river, you can just use a pump. But that is not artificial intelligence; it is the automation of manual work ... Human-level AI would be able to apply insights to new situations," Kaufmann added.

While Facebook's plans to build a brain-computer interface and Elon Musk's plans to merge the human brain with AI have left people wondering how close we are to developing true AI, Kaufmann believes the "brain code" needs to be cracked before we can really advance the field. He said this can only be achieved through neuroscientific research.

Earlier this year, founder of DeepMind Demis Hassabis communicated a similar sentiment in a paper, saying the fields of AI and neuroscience need to be reconnected, and that it's only by understanding natural intelligence that we can develop the artificial kind.

"Many companies are investing their resources in building faster computers ... we need to focus more on [figuring out] the principles of the brain, understand how it works ... rather than just copy/paste information," Kaufmann said.

Kaufmann admitted he doesn't have all the answers, but finds it "interesting" that high-profile entrepreneurs such as Musk and Mark Zuckerberg, neither of whom has an AI or neuroscience background, have such strong and opposing views on AI.

Musk and Zuckerberg slung mud at each other in July, with the former warning of "evil AI" destroying humankind if not properly monitored and regulated, while the latter spoke optimistically about AI contributing to the greater good, such as diagnosing diseases before they become fatal.

"One is an AI alarmist and the other makes AI look charming ... AI, like any other technology, can be used for good or used for bad," said Kaufmann, who believes AI needs to be assessed objectively.

In the interim, Kaufmann believes systems need to be designed so that humans and machines can work together, not against each other. For example, Kaufmann envisions a future where humans wear smart lenses -- comparable to the Google Glass -- that act as "the third half of the brain" and pull up relevant information based on conversations they are having.

"Humans don't need to learn stuff like which Roman killed the other Roman ... humans just need to be able to ask the right questions," he said.

"The key difference between human and machine is the ability to ask questions. Machines are more for solutions."

Kaufmann admitted, however, that humans don't know how to ask the right questions a lot of the time, because we are taught to remember facts in school, and those who remember the most facts are the ones who receive the best grades.

He believes humans need to be educated to ask the right questions, adding that the question is 50 percent of the solution. The right questions will not only allow humans to understand the principles of the brain and develop true AI, but will also keep us relevant even when AI systems proliferate, according to Kaufmann.

If we want to slow down job loss, AI systems need to be designed so that humans are at the centre of it, Kaufmann said.

"While many companies want to fully automate human work, we at Starmind want to build a symbiosis between humans and machines. We want to enhance human intelligence. If humans don't embrace the latest technology, they will become irrelevant," he added.

The company claims its self-learning system autonomously connects and maps the internal know-how of large groups of people, allowing employees to tap into their organisation's knowledge base or "corporate brain" when they have queries.

Starmind platform

Starmind is integrated into existing communication channels -- such as Skype for Business or a corporate browser -- eliminating the need to change employee behaviour, Kaufmann said.

Questions typed in the question window are answered instantly if an expert's answer is already stored in Starmind, and new questions are automatically routed to the right expert within the organisation, based on skills, availability patterns, and willingness to share know-how. All answers enhance the corporate knowledge base.
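
Routing a question to the best-placed colleague can be framed as a scoring problem over skills, availability and willingness to answer. The sketch below illustrates that kind of scoring with invented weights and fields; it is not Starmind's actual algorithm.

```python
def route_question(question_tags, experts):
    """Return the expert whose profile best matches the question's tags,
    weighted by availability and willingness to share. Illustrative scoring only."""
    def score(expert):
        overlap = len(question_tags & expert["skills"])
        return overlap * expert["availability"] * expert["willingness"]
    best = max(experts, key=score)
    return best["name"] if score(best) > 0 else None

experts = [
    {"name": "Ana",   "skills": {"sap", "invoicing"},     "availability": 0.9, "willingness": 0.8},
    {"name": "Bjorn", "skills": {"python", "etl", "sap"}, "availability": 0.4, "willingness": 1.0},
]

print(route_question({"sap", "invoicing"}, experts))  # -> "Ana"
```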

"Our vision is if you connect thousands of human brains in a smart way, you can outsmart any machine," Kaufmann said.

On how this is different to asking a search engine a question, Kaufmann said Google is basically "a big data machine" and mines answers to questions that have been already asked, but is not able to answer brand new questions.

"The future of Starmind is we actually anticipate questions before they're even asked because we know so much about the employee. For example, we can say if you are a new hire and you consume a certain piece of content, there will be a 90 percent probability that you will ask the following three questions within the next three minutes and so here are the solutions."

Starmind is currently being used across more than 40 countries by organisations such as Accenture, Bayer, Nestlé and Telefónica Deutschland.

While Kaufmann thinks it is important at this point in time to enhance human intelligence rather than replicate it artificially, he does believe AI will eventually substitute humans in the workplace. But unlike the grim picture painted by critics, he doesn't think it's a bad thing.

"Why do humans need to work at all? I look forward to all my leisure time. I do not need to work in order to feel like a human," Kaufmann said.

When asked how people would make money and sustain themselves, Kaufmann said society does not need to be ruled by money.

"In many science fiction scenarios, they do not have money. When you look at the ant colonies or other animals, they do not have cash," Kaufmann said.

Additionally, if humans had continuous access to intelligent machines, Kaufmann said "the acceleration of human development will pick up" and "it will give rise to new species".

"AI is the ultimate tool for human advancement," he firmly stated.

Link:

True AI cannot be developed until the 'brain code' has been cracked: Starmind - ZDNet

REVEALED: AI is turning RACIST as it learns from humans – Express.co.uk

In parts of the US, when a suspect is taken in for questioning they are given a computerised risk assessment which works out the likelihood of the person reoffending.

Judges can then use this data when giving their verdict.

However, an investigation has revealed that the artificial intelligence behind the software exhibits racist tendencies.

Reporters from ProPublica obtained more than 7,000 test results from Florida in 2013 and 2014 and analysed the reoffending rate among the individuals.

The suspects are asked a total of 137 questions by the AI system Correctional Offender Management Profiling for Alternative Sanctions (Compas), including questions such as "Was one of your parents ever sent to jail or prison?" and "How many of your friends/acquaintances are taking drugs illegally?", with the computer generating its results at the end.

Overall, the AI system claimed black people (45 per cent) were almost twice as likely as white people (24 per cent) to reoffend.

In one example outlined by ProPublica, risk scores were provided for a black suspect and a white suspect, both facing drug possession charges.

The white suspect had prior offences of attempted burglary and the black suspect had resisting arrest.

Seemingly giving no indication as to why, the black suspect was given a higher chance of reoffending and the white suspect was considered low risk.

But, over the next two years, the black suspect stayed clear of illegal activity and the white suspect was arrested three more times for drug possession.

However, researchers warn the problem does not lie with robots, but with the human race as AI uses machine learning algorithms to pick up on human traits.

Joanna Bryson, a researcher at the University of Bath, told the Guardian: "People expected AI to be unbiased; that's just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things."

This is not an isolated incident either.

Microsoft's TayTweets (AI) chatbot, which was designed to learn from users, was unleashed on Twitter last year.

However, it almost instantly turned to anti-semitism and racism, tweeting "Hitler did nothing wrong" and "Hitler was right I hate the Jews".

See the original post:

REVEALED: AI is turning RACIST as it learns from humans - Express.co.uk

Microsoft shares its vision to become AI industry-leader – TNW

Microsoft last week filed its annual report with the SEC, and with it a new vision that emphasizes AI. The documents state the company will no longer be focused on mobile, but instead on implementing AI solutions.

In the documents, under the heading "Our Vision", the company put:

Our strategy is to build best-in-class platforms and productivity services for an intelligent cloud and an intelligent edge infused with artificial intelligence (AI)

Which should come as a surprise to no one: Microsoft has gobbled up AI companies like Pac-Man eating dots, and is using them to do things like teach computers to be amazing at Ms. Pac-Man.

Microsoft started off 2017 by purchasing an AI company, which was added to its already robust machine-learning research team.

The company's Microsoft Research AI (MSR AI) group has been doing some impressive work, including helping the blind better understand their surroundings. And Microsoft makes hardware now, in the form of an AI co-processor chip for Hololens 2.0.

Microsoft isn't the only major legacy tech company fully prepared to shift to an AI-driven vision. IBM is famously flaunting their ride on the AI hype train with their cloud-based Watson, and Apple of course has Siri.

There's no need to ask if AI is the future or not, because it is. It's only a matter of time now before Cortana gets appointed CEO (Sorry Satya!).

Continued here:

Microsoft shares its vision to become AI industry-leader - TNW

Golf: Spieth chasing golf immortality at PGA – Duluth News Tribune

During nine practice holes with Kevin Kisner at Quail Hollow Club, amid kids and adults alike shouting "Jordan, Jordan!" the 24-year-old Spieth seemed to barely perspire.

He did, however, offer this early assessment of Quail Hollow: "Extremely tough."

It helped Monday that, for the first time, PGA Championship players were allowed to wear shorts during practice rounds. Spieth said it was nice because it reminded him of playing casual rounds back home in hot Dallas.

Spieth's blue-green shirt and gray shorts did not, however, explain why he seemed more immune to the humidity than others. Perhaps it's because he's won the British Open and two other PGA Tour events in 2017. Really, can this week's 99th PGA Championship be much of a sweat?

Yes, a victory on Sunday would make Spieth the youngest male golfer to complete the career Grand Slam, eclipsing Tiger Woods, who completed the Slam at 24 years, six months old.

Spieth, however, said during last week's WGC-Bridgestone Invitational: "My focus isn't on completing the career Grand Slam. My focus is on the PGA Championship."

On Monday, his focus was seeing Quail Hollow, a course on which he hasn't played a competitive round since he competed in his only Wells Fargo Championship in 2013, tying for 32nd.

Last year, three of Quail Hollow's first five holes were significantly altered, with the first two holes being combined into a new No. 1 and a par 3 added, as the new No. 2.

"They didn't change that much," he said. "Really, (holes) one, two and four and five. They made one essentially an extremely long par 4 by combining the old one-two, and then they split up No. 5 into two holes, that par 5, into a 3 and 4. Other than that, it stayed the same.

"The greens are firm and the fairways are soft, so it's long and then tough to hold the greens. With the way the greens are, if they don't soften up, it's going to be 'Par is an awesome score.' "

Last week, Spieth described winning the Grand Slam as a life goal, adding that he believes his odds of completing it at some point are strong. Woods, Jack Nicklaus, Ben Hogan, Gary Player and Gene Sarazen are the only players to complete the Slam.

"If it happens (this week), then fantastic," Spieth said. "And if it doesn't, then it's not going to be a big-time bummer whatsoever because I know I have plenty of opportunities.

"Getting three legs of it is much harder than getting the last leg, I think although I've never tried to get the last leg, so it's easy for me to say."

Unlike his British Open victory three weeks ago at Royal Birkdale, where Spieth only had caddie Michael Greller accompanying him, he'll have a sizeable family and friends gallery at Quail Hollow.

On the night of his British Open win, Spieth's longtime girlfriend, Annie Verret, sent a group text to about 20 Spieth family members and friends, ultimately resulting in the group surprising Jordan and Greller with a champagne-toast greeting upon landing in Dallas.

That group will expand at Quail Hollow. On Monday, Spieth's mother, Chris, and sister, Ellie, walked five holes of Jordan's practice round, with Ellie at times walking alongside Jordan in the fairway.

Some Spieth family members already were in North Carolina, visiting relatives, when Jordan arrived Sunday night from playing the Bridgestone in Akron, Ohio. One of Jordan's grandfathers, Bob Julius, lives in Wilmington, about 200 miles southeast of Charlotte.

After his British Open victory, Spieth received congratulatory notes and texts from the likes of President George W. Bush, Nicklaus, Woods, Phil Mickelson and Rory McIlroy.

Like Spieth, Mickelson and McIlroy are one victory from completing the career Slam, though neither can do so this week. Mickelson lacks a U.S. Open title and McIlroy has yet to win the Masters.

Spieth said he sees more pros than cons about playing the PGA relatively soon after the British.

"(A pro) is you believe you're in form," he said. "I think I'm in form, and form is a huge part of being in contention, obviously. But when you feel that way going in, it feels that much easier to get into contention.

"So that's a huge pro. I'm not really finding any negatives in this."

After a session on the Quail Hollow practice range before his practice round, Spieth spent 20 minutes signing autographs, with one exhorting Spieth: "Grand Slam, baby!"

Spieth said little, but smiled and kept signing. The August sun grew hotter, but, still, it was no sweat for Spieth.

99th PGA Championship

When: Thursday-Sunday

Where: Quail Hollow Club, Charlotte, N.C.

Defending champion: Jimmy Walker

Fast fact: Jordan Spieth can become the sixth player with the career Grand Slam

See the rest here:

Golf: Spieth chasing golf immortality at PGA - Duluth News Tribune

Mum spent £50,000 on alternative medicine after boob job left her … – Metro

Kathy Richmond was left seriously ill and needed to have her breast implants removed (Collect/PA Real Life)

A woman says she spent around £50,000 on alternative remedies after falling seriously ill as a result of her breast implants.

Kathy Richmond, 38, spent the money over a nine-year period on homeopathy, reiki, cupping, acupuncture, reflexology, a functional healer and craniosacral therapy (which involves light touch).

But she was still very ill.

Kathy, a mother-of-four from Reading, Berkshire, increased the size of her breasts from an A to a G cup.

She had the £5,000 procedure done in 2007 on a whim following the birth of her two eldest children.

"I never hated my breasts," she explained. "But they changed after I had my two oldest children."

"Being 6ft, I could carry off bigger breasts, so I decided to get implants. We had the money in the bank, so I thought, 'Why not?'"

But two years later she became seriously ill, and she now believes it was Breast Implant Illness, an anecdotal problem that is not recognised by the NHS.

However, the NHS does warn of the dangers of breast implants with a long list of potential side-effects, including allergic reactions.

"I didn't realise it was because of the implants then," she explained. "My asthma, which I'd last had as a child, came back and my nails started flaking."

"I also suffered with extreme fatigue to such an extent, I had to give up work as a fitness instructor."

She visited a GP in Reading, who suggested thyroid problems as a possible reason for her illness.

"I did have thyroid problems," she accepted. "But I didn't know why. I also had ringworm, a type of fungal infection, and all sorts of other problems."

Kathy said: "Initially I loved my implants, I didn't regret them at all. After a while, though, I started feeling very sick."

"I experienced various issues from 2009, but became very sick from late 2014. I suffered from hives, brain fog, weight gain, depression, vertigo, hair loss and more."

"There were stains on my face that looked like tea, the asthma I'd not had since I was a child worsened, I developed anxiety and fungus formed on my nails. It was terrible."

She suffered with persistent ill health for the next nine years so she tried a variety of alternative therapies.

She said: "I saw a reflexologist, underwent lymph drainage, saw a functional healer (a type of medicine which focuses on interactions between the environment and the gastrointestinal, endocrine and immune systems) and even had craniosacral, a therapy involving light touch."

"I had reiki healing, cupping (an ancient form of medicine) and acupuncture. But still I was very ill. It was devastating."

In 2014 a homeopath suggested her breasts could be the cause of the problems. "As soon as she said it, I wondered," Kathy said. "The dates added up."

Two years later she had them removed at a cost of £6,000, and that seems to have sorted the problems.

She said: "The good news is, I'm feeling better. As soon as they were removed, I felt a lightness in my chest. That's why I am speaking out, so other women don't have to suffer like I did."

She adds that she now regrets having the breast augmentation.

To understand whether a treatment is safe and effective, we need to check the evidence.

You can learn more about the evidence for particular CAMs by reading about individual types of treatment see our index for a list of all conditions and treatments covered by NHS Choices.

Some complementary and alternative medicines or treatments are based on principles and an evidence base that are not recognised by the majority of independent scientists.

Others have been proven to work for a limited number of health conditions. For example, there is evidence that osteopathy and chiropractic are effective for treating lower back pain.

When a person uses any health treatment including a CAM and experiences an improvement, this may be due to the placebo effect.

Read more from the original source:

Mum spent 50,000 on alternative medicine after boob job left her ... - Metro

Dr. Gifford-Jones: Puritanical lies about alcohol – MPNnow.com

Are you becoming as skeptical as I am about public information? Fake political news? Alternative facts about the state of the world's economy? So now I ask: how honest is medical news? Of course everyone knows that consuming stupid amounts of alcohol is unhealthy. But puritans and some doctors can't accept the proven fact that moderate amounts of alcohol can prolong life.

Professor Keith Scott-Mumby, an internationally known United Kingdom expert on alternative medicine, echoes what I have written over the years: that people who drink moderately live longer on average than teetotalers or those who drink to excess. In fact, there are over 20 studies that confirm this. In court it's a criminal offense to withhold truth, so why doesn't the same principle hold true in medicine?

Scott-Mumby points out that the lack of discussion of the beneficial impact of alcohol has for years been a systematic policy of the U.S. public health establishment. For instance, the National Institutes of Health, which funded a research study on alcohol, forbade a Harvard epidemiologist who participated in the study from publishing the health benefits of drinking!

There is strong evidence that alcohol protects against heart disease. Studies show that it increases the good cholesterol HDL. Possibly more important, it dilates arteries and makes blood platelets less likely to clot, decreasing the risk of a fatal heart attack.

But Scott-Mumby says none of these facts was publicly reported when Larry King, the well-known TV personality, underwent a bypass procedure in 1987 after a heart attack. Later, in 2007, he hosted a two-hour PBS television special on heart disease featuring five experts who talked about exercise, diet and smoking. But there was no mention that abstinence from alcohol was a risk factor for heart disease.

Scott-Mumby also reports good news for Boomers: that the use of alcohol may protect against dementia. He cites the 2008 Research Society on Alcoholism Review based on the Whitehall Study, which analyzed 45 reports since the early 1990s. This showed that there were significantly reduced risks of dementia from moderate drinking. So why don't we hear more about this fact, particularly when Alzheimer's disease and other forms of dementia are increasing?

He adds that the U.S. is not a heavy drinking nation, yet its health outcomes are poor, as it has almost double the amount of diabetes, cancer and heart disease compared with the English who drink more.

I've often written about the advantages of moderate drinking. But according to Scott-Mumby's research, even serious drinkers, the ones who drink six or more drinks daily, still live longer than teetotalers! And he claims that puritans can't stand this fact.

So what's the message? Neither Scott-Mumby nor I condone the three-martini lunch, nor do we urge anyone to start drinking alcohol. What we are both saying is that neither abstainers nor doctors should distort the truth of the health benefits of alcohol.

All too often I have witnessed this at medical conventions. Researchers have detailed the many medical benefits of alcohol. But after confirmation by several speakers, finally one says, "But we must not inform the public about this as it will result in car accidents, marriage difficulties and other societal problems."

But we dont prevent the sale of cars because some idiots drive at 150 miles an hour. So I believe it is hypocritical, dishonest and maybe even criminal, to withhold scientifically proven news about alcohol.

Today it seems that truth, like commonsense, is becoming an uncommon commodity. The motto of The Harvard Medical School at its founding was Veritas. I believed this motto when I was a medical student there, and I still believe it today.

This medical journalist is not, and never will be, an alcoholic. So I enjoy a drink before dinner with family and friends. I believe it's one of the habits that keeps me relaxed at the end of the day and also alive all these years.

Past experience tells me that controversial columns do not please everyone, including doctors. But society is in deep trouble when it skirts truth, tries to hide it or simply ignores it. Facts are facts, and history has shown that Veritas eventually wins.

Dr. Ken Walker (Gifford-Jones) is a graduate of the University of Toronto and The Harvard Medical School. He trained in general surgery at the Strong Memorial Hospital, University of Rochester, Montreal General Hospital, McGill University and in Gynecology at Harvard. He has also been a general practitioner, ship's surgeon and hotel doctor. See www.docgiff.com for past columns. For comments: info@docgiff.com

Original post:

Dr. Gifford-Jones: Puritanical lies about alcohol - MPNnow.com

Test your home for radon to save money, your life | Cooperative … – Fairbanks Daily News-Miner

FAIRBANKS - What did it cost you last time you went to the doctor or dentist? I mean before insurance, Medicare or Medicaid kicked in to bring down the cost. And that may have been just for a routine checkup or a work/school annual physical. What if you needed treatment for lung cancer?

The National Cancer Institute reports the cost for the initial treatment of lung cancer in 2010 was $60,553 for women and $60,885 for men. Subsequent annual continued treatment was $8,130 and $7,591, respectively. The problem with this cancer is not only the cost of treatment but also survival. According to the American Cancer Society, most lung cancers have spread widely and are in advanced stages when they are first found.

But what if a simple test could alert you to the presence of the second leading cause of lung cancer: radon? Certified professionals will give you a detailed hourly average of radon levels in your home with sophisticated machinery for a couple hundred dollars.

You also can test the radon levels in your home with a readily available test kit containing activated charcoal, which is no different from what is used in common shoe deodorizers. The kit will give you an overall average of the concentration of radon gas in your home during a 48-hour period. Though the lab fee varies, the kits generally cost around $15 to $20. Kits that include the analysis are available from extension district offices or by ordering one at 1-877-520-5211.

And then what? If you have radon in your home, what would the cost be to fix it? If it means merely filling in cracks in your cement floor or wall, it will cost you some sweat equity and possibly $25 in patch materials. If you have a crawl space without a secure covering, you may run into a solid day of work and possibly $100 of materials, with the possibility of a sore back from leaning over.

If you have tried either of those, then spent a couple hundred more dollars to get a furnace repairman to balance your furnace and heat recovery ventilator (HRV), and are still experiencing high radon levels, you can put in a PVC pipe chimney. This will evacuate the radon by depressurizing the soil under the floor. That will cost you as much as $4,000 locally to have it professionally installed. Or you could buy 4-inch PVC pipe, rent a pile driver and spend $150 for a fan, for a total of around $600. You may then throw in a $125 monitor to make sure it works continually.
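For readers who want to see the options side by side, here is a minimal back-of-the-envelope sketch in Python. The split between the pipe cost and the pile-driver rental is an assumed placeholder (the article itemizes only the $150 fan, the optional $125 monitor, the roughly $600 DIY total and the roughly $4,000 professional quote).

```python
# Rough cost comparison for radon mitigation, using the article's figures.
# The pipe and rental line items are assumptions chosen so the DIY total
# lands near the ~$600 the article cites.
professional_install = 4000      # from the article

diy = {
    "4-inch PVC pipe": 250,      # assumption
    "pile driver rental": 200,   # assumption
    "fan": 150,                  # from the article
}
optional_monitor = 125           # from the article

diy_total = sum(diy.values())
print(f"DIY sub-slab depressurization: ~${diy_total}")
print(f"DIY with continuous monitor:  ~${diy_total + optional_monitor}")
print(f"Professional installation:    ~${professional_install}")
print(f"Approximate savings:          ~${professional_install - diy_total}")
```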

If you are building a new house and haven't put in the foundation yet, you might have PVC or ABS piping put in under the plastic sheeting and the cement slab for around $1,000-$1,500. Given the scattered uranium throughout the state, it will be all the more important for contractors to utilize radon-resistant construction so that, from the get-go, there is not only a protective vapor barrier secured on the ground but also a semipermeable membrane material such as Bituthene adhered to pony walls before backfilling soils.

Remember, no matter where you are living, the only way you'll know whether you have a radioactive radon problem is to test for it.

Art Nash is the energy specialist for the UAF Cooperative Extension Service. Contact him at 474-6366 or by email at alnashjr@alaska.edu.

Originally posted here:

Test your home for radon to save money, your life | Cooperative ... - Fairbanks Daily News-Miner

Your brain can form new memories while you are asleep … – Washington Post

A sleeping brain can form fresh memories, according to a team of neuroscientists. The researchers played complex sounds to people while they were sleeping, and afterward the sleepers could recognize those sounds when they were awake.

The idea that humans can learn while asleep, a concept sometimes called hypnopedia, has a long and odd history. It hit a particularly strange note in 1927, when New York inventor A. B. Saliger debuted the Psycho-phone. He billed the device as an "automatic suggestion machine." The Psycho-phone was a phonograph connected to a clock. It played wax cylinder records, which Saliger made and sold. The records had names like "Life Extension," "Normal Weight" or "Mating." That last one went: "I desire a mate. I radiate love ... My conversation is interesting. My company is delightful. I have a strong sex appeal."

Thousands of sleepers bought the devices, Saliger told the New Yorker in 1933. (Those included Hollywood actors, he said, though he declined to name names.) Despite his enthusiasm for the machine (Saliger himself dozed off to "Inspiration" and "Health"), the device was a bust.

But the idea that we can learn while unconscious holds more merit than gizmos named Psycho-phone suggest. In the new study, published Tuesday in the journal Nature Communications, neuroscientists demonstrated that it is possible to teach acoustic lessons to sleeping people.

"We proved that you can learn during sleep, which has been a topic debated for years," said Thomas Andrillon, an author of the study and a neuroscientist at PSL Research University in Paris. Just don't expect Andrillon's experiments to make anyone fluent in French.

Researchers in the 1950s dismantled hypnopedia's more outlandish claims. Sleepers cannot wake up with brains filled with new meaning or facts, Rand Corp. researchers reported in 1956. Instead, test subjects who listened to trivia at night woke up with "non-recall." (Still, the Psycho-phone spirit endures, at least in the app store, where hypnopedia software claims to promote foreign languages, material wealth and martial arts mastery.)

Yet success is possible, if you're not trying to learn dictionary definitions or kung fu. In recent years, scientists have trained sleepers to make subconscious associations. In a 2014 study, Israeli neuroscientists had 66 people smell cigarette smoke coupled with foul odors while they were asleep. The test subjects avoided smoking for two weeks after the experiment.

In the new research, Andrillon and his colleagues moved beyond association into pattern learning. While a group of 20 subjects was sleeping, the neuroscientists played clips of white noise. Most of the audio was purely random, Andrillon said: "There is no predictability." But there were patterns occasionally embedded within the complex noise: sequences of a single clip of white noise, 200 milliseconds long, repeated five times.
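To make the stimulus design concrete, here is a minimal sketch in Python of a noise stream built the way the article describes: mostly unpredictable white noise, with an occasional pattern made of one 200-millisecond clip repeated five times. The 16 kHz sample rate and the stream layout are assumptions for illustration, not details taken from the study.

```python
import numpy as np

SAMPLE_RATE = 16_000                      # assumed sample rate, not from the study
CLIP_MS = 200                             # repeat unit described in the article
CLIP_SAMPLES = SAMPLE_RATE * CLIP_MS // 1000

def random_noise(n_clips):
    """Purely random white noise, n_clips x 200 ms long."""
    return np.random.randn(n_clips * CLIP_SAMPLES)

def patterned_noise():
    """One 200 ms white-noise clip repeated five times (1 second total)."""
    clip = np.random.randn(CLIP_SAMPLES)
    return np.tile(clip, 5)

# Example: a 10-second stream that is random except for one embedded pattern.
stream = np.concatenate([random_noise(25), patterned_noise(), random_noise(20)])
print(stream.shape)  # (160000,) samples = 10 s at 16 kHz
```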

The subjects remembered the patterns. The lack of meaning worked in their favor; sleepers can neither focus on what they're hearing nor make explicit connections, the scientist said. This is why nocturnal language tapes don't quite work: the brain needs to register both sound and semantics. But memorizing acoustic patterns like white noise happens automatically. "The sleeping brain is including a lot of information that is happening outside," Andrillon said, "and processing it to quite an impressive degree of complexity."

Once the sleepers awoke, the scientists played back the white-noise recordings. The researchers asked the test subjects to identify patterns within the noise. It's not an easy task, Andrillon said, and one that you or I would struggle with. Unless, that is, you happened to remember the repetitions from a previous night's sleep. The test subjects detected the patterns far better than random chance would predict.

What's more, the scientists discovered that memories of the white-noise patterns formed only during certain sleep stages. When the authors played the sounds during REM and light sleep, the test subjects could remember the pattern the next morning. During the deeper non-REM sleep, playing the recording hampered recall. Patterns presented during non-REM sleep led to worse performance, "as if there were a negative form of learning," Andrillon said.

This marked the first time that researchers had evidence for the sleep stages involved in the formation of completely new memories, said Jan Born, a neuroscientist at the University of Tübingen in Germany, who was not involved with the study.

In Andrillon's view, the experiment helps to reconcile two competing theories about the role of sleep in new memories: In one idea, our sleeping brains replay memories from our waking lives. As they're played back, the memories consolidate and grow stronger, written more firmly into our synapses. In the other hypothesis, sleep instead cuts away at older, weaker memories. But the ones that remain stand out, like lonely trees in a field.

The study indicates that the sleeping brain can do both, Andrillon said. They might simply occur at separate moments in the sleep cycle, strengthening fresh memories followed by culling.

A separate team of neuroscientists had suspected that the two hypotheses might be complementary. But until now they did not have any explicit experimental support. "It is a delight to see these results, since we proposed already, quite a few years ago, that the different sleep stages may have a different impact on memory," said Lisa Genzel, a neuroscientist at Radboud University in the Netherlands. "And here they are the first to provide direct evidence for this idea."

Not all neuroscientists were so convinced. Born, an early proponent of the idea that sleep strengthens and consolidates memories, said this study showed what happens when we form memories while asleep. The average memory, a recollection from a waking experience, might not work in the same way, he said. "I would be skeptical about inferring from this type of approach to what happens during normal sleep."

Andrillon acknowledged the limitations of this research, including that the scientists did not directly measure synapses. We interpret our results in the light of cellular mechanisms, he said, meaning the strengthening or weakening of synapses, which the team could not directly measure, since doing so requires invasive recording methods that cannot be applied in humans.

When asked whether understanding the roles of sleep cycles and memory could lead to future sleep hacks, a la the Psycho-phone, Andrillon said, "We are in the big unknown." But, he noted, sleep is not just about memory. Trying to hijack the recommended seven-plus hours of sleep could disrupt normal brain function. Which is to say, even if you could learn French while asleep, it might ultimately do more harm than good. "I would be very cautious about the interest in this kind of learning," he said, "whether this is detrimental to the other functions of sleeping."

See the original post:

Your brain can form new memories while you are asleep ... - Washington Post

Local Doppler radar down through end of August – WDBJ7

The Doppler radar located in Blacksburg, Virginia, was shut down August 1 for a nationwide project called the Service Life Extension Program, or SLEP.

The WSR-88D, or NEXRAD, radars were built with a service life of 20 years, and they entered service more than 20 years ago. The purpose of the SLEP program is to bring the radar up to date with the newest technology, an upgrade that would extend the life of the radar into the 2030s.

After installation of the new hardware and software, engineers ran tests on the radar before placing it into operation and found a larger problem: a cracked bearing on the main gear that moves the radar. The unit was immediately turned off.

To repair the bearing and the bull gear, the entire dome and 28-foot radar dish will need to be removed. This will require a six-person team and heavy equipment. The team is currently doing the same repair in Ohio and will make its way to Blacksburg next week. The work is scheduled to be complete by August 30.

In the meantime, multiple radar sites can cover our area, and the outage will be unnoticeable to app or website users. The NWS can attempt to run the radar in a time of need if a tornadic storm or a tropical system were to impact the region before the engineering team starts the repair work.

Read the original:

Local Doppler radar down through end of August - WDBJ7

Kenyan elections: Why it is important – WION

There are eight candidates for the presidency in Kenya's 2017 election. Of these, two are the main contenders: Uhuru Muigai Kenyatta and Raila Amolo Odinga. This is a replica of the 2013 polls, where the same two presidential candidates were the dominant opponents.

The running mate configuration has not changed either, with both retaining their previous partners: William Ruto for Kenyatta and Kalonzo Musyoka for Odinga. The only thing that has changed is their party identities.

Kenyatta's 2013 Jubilee coalition is now the Jubilee Party, comprising most of the constituent parties that had been part of the coalition. The 2013 Jubilee formation was an alliance between parties loyal to the president and his deputy, William Ruto.

For its part, Odinga's camp underwent a coalition overhaul, morphing from the Coalition for Reforms and Democracy into the National Super Alliance. The coalition brings together several parties, both old and new, led by the Orange Democratic Movement, Odinga's longtime party.

Latest polls have indicated that the two candidates are neck-and-neck. Both have factors working for and against them.

Uhuru Kenyatta

A few things are in Kenyatta's favor. At 55 years of age, he is a young president who represents generational change. Kenyatta also comes from one of the wealthiest families in Kenya. Forbes Magazine ranks him as the 26th richest person in Africa, with an estimated fortune of $500 million. This means that he's been able to contribute financially to a vibrant campaign.

As the incumbent, some would also argue that he has had access to state resources and agencies to facilitate his re-election. Incumbency has also allowed him to drive his campaign on the steam of his development record and flagship projects in infrastructure, the energy sector and public service delivery.

In terms of voting blocs, Kenyatta has the support of Kenya's two most populous ethnic groupings: the Gikuyu, Embu and Meru (GEMA) and the Kalenjin. Registered voters in the GEMA grouping number approximately 5,588,389; among the Kalenjin, 2,324,559.

Combined, that's 7,912,948 votes, equivalent to 40 per cent of the electorate. That's a formidable start when you consider that presidential strongholds have historically recorded a higher voter turnout during elections.
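As a quick sanity check on those figures, here is a small sketch in Python; the implied size of the total register is an inference from the 40 per cent claim, not a number given in the article.

```python
# Voter-bloc arithmetic from the article's figures.
gema_voters = 5_588_389
kalenjin_voters = 2_324_559

combined = gema_voters + kalenjin_voters
print(combined)                   # 7912948, matching the article

# The article says this equals roughly 40 per cent of the electorate,
# which implies a total register of about 19.8 million (an inference,
# not a figure stated in the piece).
implied_electorate = combined / 0.40
print(round(implied_electorate))  # ~19782370
```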

On the other hand, Kenyatta's four-year tenure has been riddled with corruption allegations, including the Eurobond and National Youth Service scandals.

His admitted inability to rein in corruption in his government has worked against him. Additionally, his government is also accused of ethnic exclusion.

The Jubilee presidency is seen as a two-man show. This has contributed to the perception that Jubilee is not ethnically representative.

Raila Odinga

Odinga has many things going for him. High up on the list are his charisma and strong political mobilisation skills. Historically, Odinga has always been a formidable opposition politician; not being an incumbent has enabled him to galvanise support effectively.

Odinga enjoys wider ethnic support compared to President Kenyatta, comprising among others the Kamba, Luhya, Luo and Maasai tribes. These communities comprise over a third of the voting population. But the disadvantage is their historically lower record of voter turnout.

At 72 years of age, Odinga represents the older generation of Kenyan leaders who joined politics in the 1970s and '80s. And this being his fourth attempt at the presidency, there's lethargy among some of his supporters.

He's viewed by some as power-hungry and untrustworthy, especially because of his alleged association with Kenya's 1982 attempted coup. His calls for mass action after the contentious 2007 election, during a period that saw the displacement and death of thousands of Kenyans, also contributed to this perception.

Also to his disadvantage is an association with past corruption scandals during his term as prime minister, including the maize and Kazi Kwa Vijana youth programme scandals.

The main political formations

There are two main formations in the 2017 election - the Jubilee Party and the National Super Alliance.

The Jubilee Party, formed in September 2016, followed a merger between the National Alliance and the United Republican Party representing two ethnic communities - the Kikuyu and the Kalenjin. The Jubilee Party also has the support of other political parties including the Kenya African National Union, NARC Kenya, the Labour Party and the Democratic Party amongst others.

The National Super Alliance is a coalition of political parties formed in April 2017. Its leading lights are Odinga's Orange Democratic Movement, the Wiper Democratic Movement led by Kalonzo Musyoka, the Amani National Congress led by Musalia Mudavadi, Ford Kenya led by Moses Wetangula and Isaac Ruto's Chama Cha Mashinani. The coalition brings together the Luo, Kamba and Luhya ethnic groups, and a section of the Kalenjin community.

In this election cycle, party manifestos have become increasingly important. This explains the Jubilee administration's scramble to complete promises outlined in its 2013 document.

The Jubilee Party has made even more promises in its recently launched manifesto. Three that have caught the public's attention include the creation of 1.3 million jobs a year, free public secondary education and the expansion of Kenya's food production capacity.

The National Super Alliance's promises are more political. They include a constitutional amendment to provide for a hybrid executive system to foster national cohesion. Two other notable promises are to lower the cost of rent by enforcing the Rent Restriction Act and to implement free secondary education.

Strengths and weaknesses

The strengths of the Jubilee Party lie mainly in its incumbency and its development track record over the last four-and-a-half years. But the party has been weakened by divisions within its ranks. These were amplified during the campaign as disagreements broke out over the leadership of campaign teams. The ruling party is also handicapped to the extent that it's not as ethnically diverse as its competitor.

The National Super Alliances main strength lies in its ethnic diversity. Its five principals represent different ethnic communities.

The super alliance also creatively captures the zeitgeist of a section of the electorate, with some of its campaign slogans, such as "vindu vichenjanga" ("things are a-changing" in the Luhya dialect), making their way into popular use. It is riding on the euphoric wave that usually accompanies the hope of regime change.

One of its weaknesses, however, includes a perceived predilection to violence because the opposition has previously resorted to mass action. In 2016 for example, it organised a series of protests to mobilise for the removal of key members of the Independent Electoral and Boundaries commission, the body responsible for organising the general election.

Another weakness is its close association with allegedly corrupt financiers.

Key concerns

There is a perception that historically, the presidency has been the preserve of two ethnic groups: the Kikuyu and the Kalenjin. This feeling of disenfranchisement has become a key campaign issue.

There are, however, some non-tribal issues that have taken the foreground. These include corruption, economic and social stability, lower cost of living and improved security.

This article was originally published on The Conversation. Read the original article.

More:

Kenyan elections: Why it is important - WION

Joe Bennett: The great hope for our future | Stuff.co.nz – The Dominion Post

JOE BENNETT

Last updated 05:00, August 9 2017

OPINION: Hallelujah, as Handel put it in his Chorus, hallelujah, we shall be saved. And the name of the saviour is the electric engine. It is blowing its bugle and galloping our way. All we have to do is to hold on for a few years. Then suddenly we shall all be driving electric cars and all manner of things shall be well.

We've had electric vehicles for as long as I've been alive. The milk delivered to our house when I was a kid came on an electric truck. The bread didn't.

The coal didn't. But milk came with an electric whirr and the empties left as quietly.

Golf carts were already electric too and powerful enough to lug the Trumps of yesteryear from tee to green to gin. But somehow the electric engine never migrated into other vehicles. This had something to do with the inefficiency of batteries but rather more to do with the oil industry. Oil was cheap and oil was abundant and oil would go on for ever.

But now, so very suddenly, the electric motor is in vogue.

Government ministers around the world compete to boast of how soon their national fleet will be wholly and greenly electric. By 2050, says one. Ha, says another, we shall be all humming and virtuous by 2040.

Curiously, New Zealand has not joined the chorus.

Even though we have to import our petrol and even though we have vents to the steaming heart of the earth from which to generate electricity, along with wind and sun and water in abundance, the latest projection for New Zealand is that by 2040 the proportion of our cars that are electric will have soared to 8 per cent.

Of course, the boastful ministers of elsewhere aren't really making predictions. They know that they'll be dead or gaga by the time 2040 comes round, so they'll never be held to account. And besides, no one will remember what they said. They're just tossing a date out to gratify the zeitgeist that is desperate for any form of optimism. For we are drenched in gloom.

Mankind dreads the future, as it has not done since the plagues of the Middle Ages. We see nothing ahead but decline. We see mounting pollution, barren seas, animal extinctions, smothering deserts, death by heat, death by drowning, death by storms and death by drought. We see poverty, misery, hunger and war, a Book of Revelation future that our grandchildren will have no choice but to read every morning when they open their curtains. Both rich and poor can see it coming.

The rich are hoping to swap this planet for another one. The poor are merely hoping. And hope has recently come to rest on the shoulders of the electric engine.

Her sister the internal combustion engine represents everything that has gone wrong. Unsustainable, noisy, dirty, destructive and greedy, she is a metaphor for the part of ourselves that got us into this mess.

She has scoured the land and sea for oil and sucked it up and burned it willy nilly. She may have shrunk the world with aeroplanes and given the prosperous few unprecedented freedom of movement, but she has done so at great cost. She has acted like one who burns down her house to warm her hands.

We have clung to her for a century but now we are turning on her. We want to expel her, like the goat that ancient priests would burden with the people's sins and then drive beyond the city walls to die.

And with her will go the oil barons. Consider them. Putin depends on oil. Maduro too. The loathsome House of Saud is built on it. Trump adores oil. Saddam grew from it. Oil breeds monsters. But not for very much longer.

Soon the world will whirr with electric engines.

The air will start to clean itself.

People will taste the sweetness in their lungs and hear the quiet on the streets and they will see that it is good. And it will be the catalyst for great and lasting change, and people will finally come to their senses, plant trees, ease the climate back from the brink, stop fighting, stop being greedy, stop overpopulating, stop using plastic, stop electing bullies, stop raping the sea and ruining the land, stop believing they are loved by some fictional super-daddy and stop going to war on the pretext of that super-daddy.

United in one common cause all the nations of the earth will hold hands and go skipping through the meadows like the von Trapp children.

So that's that then, we are saved, and all without giving up the cars we love. Hallelujah.

-The Dominion Post

More here:

Joe Bennett: The great hope for our future | Stuff.co.nz - The Dominion Post

WhatsApp’s Integration of UPI-Based Payments Has Strategic Consequences for India’s Digital Economy – The Wire

The partnership defies 20th-century notions of a public-private partnership, and offers a glimpse of the private sector tipping its hat to the sovereign function and prerogative in identifying and authenticating the beneficiaries of a digital service.

WhatsApp is going to integrate the Unified Payments Interface developed by the National Payments Corporation of India. Credit: Reuters/Twitter

A senior official in the Indian government has confirmed, via Twitter, that the soon-to-be-launched payments system from WhatsApp would integrate the Unified Payments Interface (UPI) developed by the National Payments Corporation of India (NPCI).

The world's most popular messaging application's decision to use locally designed architecture to send and receive money is momentous for reasons both technological and strategic. WhatsApp relies on the address books of users to send and receive messages, images or calls, so it could well have deployed an in-house mechanism to make digital payments from one phone number to another. Indeed, the Chinese messaging application WeChat has engineered exactly such a system, WeChat Pay, relying on user contacts and scanned QR codes to effect payments.

WhatsApp has instead chosen to adopt a homegrown product, and a UPI-driven platform will allow it to make payments through other personally identifiable markers: Aadhaar numbers, account number/IFSC code and so on. It is as yet unclear how the payment interface will be integrated into WhatsApp. WhatsApp has two options before it: in the manner of a PayTM, WhatsApp could fashion itself a digital wallet and link it to UPI addresses. But given this would necessitate an RBI license and would be a rather minimal use of the UPI interface, WhatsApp is likely to adopt UPI-driven payments in the same way as the BHIM (Bharat Interface for Money) app, and potentially process transactions from all manner of IDs: phone numbers, Facebook contacts, bank accounts or even Aadhaar numbers. No matter what the final configuration, WhatsApp's embrace of UPI will have lasting consequences for India's digital economy.

For starters, the WhatsApp-NPCI arrangement defies 20th-century notions of public-private partnership. In most turnkey or greenfield infrastructure and services delivery projects, the government supplies the public assets, with the last-mile operation run by the company in question. In WhatsApp's case, the messaging platform has built a steady base of first-generation internet users, which the government will tap for digital financial inclusion. In other words, the massive datasets harvested by the private sector (Google too has payment gateway designs of its own for the Indian market) will be leveraged by the government for targeted interventions. This sort of collaboration ensures public agencies will not have to reinvent the wheel (and create overlapping databases) for the purposes of promoting financial inclusion.

But the WhatsApp-NPCI collaboration also raises the possibility of government collection and processing of financial and personal data through the private sector, the misuse of which is currently not contemplated by India's IT laws. The provision of public utilities through technology companies also requires a clarification of the responsibilities of the private sector: for instance, would they operate as essential services during internet shutdowns? In the event of a cyber attack on WhatsApp's servers or firmware, who would guarantee the safety of digital payment gateways, and how will real-time information sharing with the government work? After all, the UPI is essentially sovereign property; the private sector must be accountable for its use of the resource.

Build, and they will come?

WhatsApp's adoption of a homegrown digital platform like UPI is also important for symbolic reasons. Silicon Valley suffers from an almost pathological determinism and an irrepressible belief that technology designed in the Bay Area can offer solutions to most global problems. WhatsApp, by integrating UPI into its platform, has signalled to its Silicon Valley peers that the Indian digital economy can offer mature technological solutions that augment their own. This should be a cue for Y Combinator to pilot its universal basic income project in Indian cities through the UPI platform, for blockchain players, including European companies like Guardtime, to offer commercially scalable solutions that limit pilfering of funds in public sector projects, and for AI-based technologies to work with state governments on creating predictive tools in health diagnostics.

In some sectors, as with health and education, the government can contribute through data sets, while in others, such as the financial sector, it can provide technologies that lead to greater inclusion and accountability. Even enlightened Silicon Valley engineers often pit technology against people, attributing the failure of ingenious innovations to human resistance: India has an opportunity to prove that technological designs that account for lived realities in its own cities and villages can influence social and economic interactions positively.

An Indian model of cyber sovereignty

From a strategic perspective, the use of sovereign markers by WhatsApp to effect digital payments is significant. The UPI is an Application Programming Interface that allows transfers of money from one virtual payment address to another. (That payment address may look different based on the app in question: for example, while using the BHIM app, a user's payment address would be amsukumar@upi, and for a specific bank the address may be amsukumar@sbi. For WhatsApp payments effected through UPI it may be @WA.)

Whatever that address may look like, the UPI interface ensures the address resolution happens through a number of public markers: phone numbers, account numbers and IFSC codes, RuPay card numbers and possibly even Aadhaar numbers in the future. WhatsApp could probably effect payments through phone numbers or Facebook contacts if it wanted to, the way its parent company has, by building a system from scratch and using Visa and MasterCard debit card information, but its use of the UPI interface is an acknowledgment of these government-identified markers. At a time when governments across the world are increasingly tightening their control over the internet, the WhatsApp-NPCI arrangement could be billed by India as its own variant of cyber sovereignty.
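To illustrate the idea of address resolution described above, here is a small, hypothetical sketch in Python. The data structure, the directory and the resolve_vpa() helper are illustrative inventions, not NPCI's actual UPI API, and the marker values are dummies.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UpiAccount:
    """Public markers that a virtual payment address (VPA) can resolve to."""
    vpa: str                               # e.g. "amsukumar@upi" or "amsukumar@sbi"
    phone_number: Optional[str] = None
    account_number: Optional[str] = None
    ifsc_code: Optional[str] = None
    aadhaar_number: Optional[str] = None   # a possible future marker, per the article

# A toy directory standing in for the address-resolution service.
DIRECTORY = {
    "amsukumar@upi": UpiAccount("amsukumar@upi", phone_number="+91-XXXXXXXXXX"),
    "amsukumar@sbi": UpiAccount("amsukumar@sbi",
                                account_number="000000000000",
                                ifsc_code="SBIN0000001"),
}

def resolve_vpa(vpa: str) -> UpiAccount:
    """Look up which public markers back a given virtual payment address."""
    try:
        return DIRECTORY[vpa]
    except KeyError:
        raise KeyError(f"Unknown VPA: {vpa}") from None

# Example: a payment app only ever handles the VPA; the markers stay with the resolver.
print(resolve_vpa("amsukumar@upi").phone_number)
```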

Its Chinese version, which is being aggressively promoted by Beijing through forums such as the BRICS, is too heavy-handed and intrusive for India to acknowledge. India can offer as an alternative a minimally invasive arrangement where the private sector tips its hat to the sovereign function and the prerogative of the government in identifying or authenticating the beneficiaries of digital services.

And finally, WhatsApp's UPI embrace is a shot across the bow of Chinese competitors like Tencent and Alibaba, who want to introduce their own digital payment systems in India. New Delhi will naturally be disposed towards foreign technologies that integrate indigenous solutions, so the development is likely to place political and market pressure on Chinese companies to follow suit.

For Beijing, which has run roughshod over digital economies with little care for homegrown technical standards, this would be a moment to pause and reflect.

Arun Mohan Sukumar heads the Cyber Initiative at the Observer Research Foundation. Disclosure: Facebook, WhatsApp's parent company, is among ORF Cyber's project funders.

Read the original post:

WhatsApp's Integration of UPI-Based Payments Has Strategic Consequences for India's Digital Economy - The Wire

Forrester report: Automation is taking over customer interaction – MarTech Today

A robotic lawn mower

If you think you've finally gotten a handle on customer engagement, buckle up.

That's because automation is reshaping customer engagement, according to a recent Forrester report on agents, bots, hardware robots and intelligent self-service solutions that will address customer-facing problems over the next 10 years. (Self-driving vehicles might also relate to customer engagement, such as with taxis or car services, but they were the subject of another recent report from Forrester.)

"Automation Technologies for Customer Engagement" gives the example of Dallas-based lawn care company Robin Technologies. Because lawn mowing is the least profitable of its offerings, it partnered with tech development firm Dialexa Labs to create a robotic lawn mowing device.

The device lives on the customer's lawn, recharges from a base station, contains a GPS tracker and is restricted to the property via an installed wire perimeter. Robin handles maintenance, and the new product frees it to concentrate on more profitable lines of business.

Report author and Forrester Vice President J.P. Gownder sees automated solutions taking over a lot more than grass cutting. In fact, they appear destined to handle most if not all customer interactions, at least for initial touch points like phone calls or physical store assistance as soon as you walk through the door.

I pointed out that interactive voice response (IVR) on phone calls is often so frustrating that I usually request a live operator because it's faster. He agreed, but noted that a second wave of IVR is starting to supplement the first, with such vendors as SmartAction's more natural-interaction voice automation or IPsoft's Amelia, an AI agent designed for interaction with people.

There's also the matter of fewer jobs. A separate Forrester report, "The Future of Jobs, 2027: Working Side by Side with Robots," deals with that subject. Gownder summarized it as saying that hardware and software automation will displace an estimated 24.7 million jobs in the US but create 14.9 million, for a net loss of 9.8 million jobs.

That's a 7 percent net job loss, which Gownder characterized as "like the Great Recession." That is, serious but not Depression-level catastrophic.
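Read together, those figures also imply the size of the workforce the report is measuring against; the quick sketch below works that out, treating the roughly 140 million base as an inference from the 7 percent claim rather than a number stated in the article.

```python
# Net job-loss arithmetic from the Forrester figures quoted in the article.
displaced = 24.7e6
created = 14.9e6

net_loss = displaced - created
print(f"Net loss: {net_loss / 1e6:.1f} million jobs")                # 9.8 million

# The article calls this a 7 percent net loss, which implies a baseline
# workforce of roughly 140 million (an inference, not a stated figure).
implied_workforce = net_loss / 0.07
print(f"Implied workforce: {implied_workforce / 1e6:.0f} million")   # ~140 million
```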

In addition to the loss of jobs, he said, the biggest impact for US workers will be a change in how we work.

Most people will work side by side with robots or other automated services, he said, at least for the next 10 years, adding that it's impossible to predict farther into the future.

Some of the new jobs, he speculated, could be what he described as "white-gloved concierge" jobs, where human assistance becomes a premium or differentiating feature for brands as automated customer service becomes the norm.

Brands might also choose other ways to differentiate their customer experience, he suggested, given that many companies will likely license their AI and interaction engines from the same or similar services.

Agents/bots might have brand-specific personalities, for instance. In some cases, additional functions and scale can differentiate, the way banks now tout how many ATMs they have and what they can do. Autonomous services can also provide more personalized offers at scale than human-run services can, like Persado's automation of optimized marketing emails.

In the near term, companies can begin to differentiate themselves by becoming first movers, he said, just as Robin Technologies is now the first on its block with robot lawn mowers. A graphic in the report offers advice on how companies might adjust their automation strategies for robotics and virtual assistants to their own maturity level.

As for marketers, their role is likely to evolve. Gownder sees them focusing more on overall brand storytelling, such as when Autodesk hired professional novelists to write scripts for chatbots.

But marketing itself will likely have to be reinvented. He envisioned a customer, Maxine, whose personal intelligent agent points out that her calendar shows an upcoming formal dress dinner. Since the agent knows she likes to shop at Nordstrom and Neiman Marcus, it has already pulled up some possible dresses and cross-referenced them with her styles as shown on her social accounts.

But what about other high-end clothing stores? They don't even get a chance to make their case unless their automated agents have kept Maxine up to date on their selections.

"Your agent talks to my agent." It sounds like Hollywood, but it may be how marketers interact in a decade or so.

Continued here:

Forrester report: Automation is taking over customer interaction - MarTech Today

Artificial intelligence and automation are coming, so what will we all do for work? – ABC Online

Posted August 09, 2017 16:40:01

What does the worldwide head of research at Google tell his kids about how to prepare for the future of work with artificial intelligence?

"I tell them wherever they will be working in 20 years probably doesn't exist now," Peter Norvig says. "No sense training for it today."

Be flexible, he says, "and have an ability to learn new things".

Future of work experts (yes, it's a thing now) and AI scientists who spoke to Lateline variously described a future in which there were fewer full-time, traditional jobs requiring one skill set; fewer routine administrative tasks; fewer repetitive manual tasks; and more jobs working for and with "thinking" machines.

From chief executives to cleaners, "everyone will do their job differently working with machines over the next 20 years," Andrew Charlton, economist and director of AlphaBeta, says.

But experts are split on whether this technological transformation will create more jobs than it destroys, which has been the case historically.

"Copying [AI computer] code takes almost no time and cost. Anyone who says they know that more jobs will be created than destroyed is fooling themselves and fooling us. Nobody knows that," says University of New South Wales professor of AI Toby Walsh.

"The one thing we do know is the jobs that will be created will require different skills than the jobs that will be destroyed. And it will require us to constantly be educating ourselves to keep ahead of the machines."

Yes, says Hamilton Calder, acting chief executive of the Committee for Economic Development of Australia (CEDA). "Coding will need to be ubiquitous within the workforce and taught at all levels of the education system."

No, says Mr Charlton. "I think the big misconception here is that in order to be successful in the future economy you need to be competing with machines [and] become a coder, a software engineer. That's quite wrong."

Not everyone needs to code because ultimately AI programs will likely be better coders than humans, says Professor Walsh. But "if you're a geek like myself, there is a good future in inventing the future".

A "broad, basic education with a strong STEM focus (science, technology, engineering, mathematics) will provide the core skills and flexibility that people will need," says PWC chief economist Jeremy Thorpe, "given they will likely change jobs or careers much more than previously".

Seventeen jobs and five careers: it is exhausting just thinking about it. But that is the prediction for school-leavers, according to research done for the Foundation for Young Australians (FYA).

"We should stop encouraging young people to think about a 'dream' job," Jan Owen, CEO of FYA, says.

"It's important not to focus on individual jobs rather they should aim to develop a skill set that is transferrable [including] financial and digital literacy, collaboration, project management and the ability to critically asses and analyse information."

Future work will fall into one of three categories, says Robert Hillard, managing partner, Deloitte Consulting.

"Firstly, people who work for machines such as drivers, online store pickers and some health professionals who are working to a schedule," Mr Hillard says.

"Secondly, people who work with machines such as surgeons using machines to help with diagnosis, and thirdly, people who work on the machines, such as programmers and designers."

Human-machine teams will combine the lightning-fast speed and accuracy of AI algorithms with instinctive human skills such as intuition, judgment and emotional intelligence, according to a report by the US-based Institute for the Future.

Mr Hillard says AI's ability "is to answer a unique question by synthesizing the answers to thousands or millions of related but different questions".

"What AI can't do is design new questions and that's the skill that will make people most competitive: helping their customer or employer find the right question to ask."

While he expects the number of jobs to increase, the danger is they may not be better jobs. Those working for machines will experience the most disruption.

There is one skill we already have that can increasingly be leveraged for income: being human.

"We don't make computers that have a lot of emotional intelligence," Professor Walsh says. "[But] we like interacting with people.

"We are social people, so the jobs that require lots of emotional intelligence being a nurse, marketing jobs, being a psychologist, any job that involves interacting with people those will be the safe jobs. We want to interact with people, not robots."

Futurist Ross Dawson gives an example of how this could be turned into a new kind of job.

"Perhaps it is a productive role in society to interact, to have conversations [with other people] and then we can remunerate that and make it a part of people's lives," he says.

Mr Charlton says: "Most of the opportunities are to do things that machines can't do, things that humans do well in the caring economy: to be empathetic, to work in a range of occupations which require interpersonal skills."

China's most successful tech venture capitalist and former Google and Microsoft executive Kai-Fu Lee recently wrote in The New York Times that traditionally unpaid volunteering roles could become future "service jobs of love".

"Examples include accompanying an older person to visit a doctor, mentoring at an orphanage, serving as a sponsor at Alcoholics Anonymous or, potentially soon, Virtual Reality Anonymous."

Jobs growth is already strong in the caring economy, with unmet demand in child care, aged care, health care and education, although many of those jobs are poorly paid.

"The challenge is to recognize that those jobs should be paid well. It's a choice for us as a society, community and government to value those types of human jobs well," Mr Charlton says.

Computers are not imaginative or very creative.

"We have one of the most creative brains out there," Professor Walsh says.

So, ironically, "one of the oldest jobs on the planet, being a carpenter or an artisan, we will value most because we will like to see an object carved or touched by the human hand, not a machine".

But humans have always created imaginative new economic opportunities as well.

With education and training currently struggling to meet some of the challenges of the future workforce, Mr Dawson says we should "plan for [ourselves], look at the change and create a path and see what skills need to be developed".

"This is about organisational, social and personal responsibility. For all ages and people, we can learn and develop ourselves."

UTS professor of social robotics Mary-Anne Williams says there is only one strategy.

"Embrace the technology and understand as far as possible what kind of impact it has on your job and goals," she says.

"You need to pay attention and look around and think about the impact."

See more here:

Artificial intelligence and automation are coming, so what will we all do for work? - ABC Online

Automation is a real threat. How can we slow down the march of the cyborgs? – The Guardian

We've heard a lot lately about how humans will suffer thanks to robots.

Recently, these dark premonitions have come from famed techno-positivists like Elon Musk and Bill Gates. These grandees have offered their own solutions, from a robot tax to universal basic income. But among the dire warnings and the downright sci-fi utopias (a robot for president, anyone?), the actual human pain resulting from future job loss tends to be forgotten.

Given that 38% of US jobs could be lost to automation in the next 15 years, this tendency to gloss over the enormity of that number is puzzling. And yet, most would argue that we cannot and should not slow down progress: that any attempt to stymie it is embarrassingly Luddite.

My question to them: why? So what if we decelerated, and established a Slow Tech movement to match our Slow Food and Slow Fashion trends? Or at the very least, what if we started to rethink who owns autonomous trucks? The effect of robotization would be profoundly different if, say, truckers possessed their own autonomous vehicles rather than a corporation controlling them all.

In the meantime, we need to call automation what it is: a real threat, and a danger to critical human infrastructure.

What is human infrastructure? Well, infrastructure usually means electricity grids, power plants, roads, fiber optic cables and so on. Human infrastructure, on the other hand, is a phrase that lets us see that people are also, in the words of the Department of Homeland Security's website, "essential services." These things underpin American society and serve as the backbone of our nation's economy, security, and health.

Critical human infrastructure could describe the guys in trucker-author Finn Murphy's new memoir The Long Haul. Murphy explains to me that if long hauls become autonomous, as has been threatened within the next 10 years, his driver friends will most likely have their trucks foreclosed. With a limited education and in late middle age, they'll only be able to work for places like Walmart at best.

Tellingly, though, Murphy adds: "I am not going to take the Luddite perspective; driverless vehicles are going to happen. The Luddites put their wrenches in the weaving machines and they still existed. And there will still be these trucks." (If Luddites were part of co-ops and had a stake in the automated looms that replaced them, would this have happened in the first place? Discuss.)

Murphy understands the sheer scale of what will happen to drivers like him. But the tech billionaires, cyborg jingoists and various political pundits don't have the same empathy. They may touch on workers' potential distress, but then they tend to launch into strangely frisson-filled discussions of a future apocalypse.

Instead of working to give robots personhood status, we should concentrate on protecting our human workers. If that means developing a more cooperative approach to ownership of autonomous trucks so millions of drivers are not left out in the literal cold, so be it. For other job categories, from nurses and legal assistants to movie ushers and cashiers, perhaps we could concoct legislation to help all strata of workers who will be displaced by our mechanical friends.

One thing is for certain: this will inevitably mean we must reduce the speed at which automation is occurring.

Indeed, given how easy automated systems like driverless vehicles may be to hack (they are quite the security challenge, as former Uber employee and hacker Charlie Miller has said), slowing down the robots might also mean slowing down a serious global calamity. (Imagine that 1973 Stephen King short story "Trucks," about semi-trailers gone berserk; now imagine it authored by international hackers who turn vehicles into murderers and jackknife American security.)

There are some ideas out there that seek to slow down the march of the cyborgs. The not-for-profit organization New York Communities for Change has been agitating against automation in trucking and driving, for instance. In February, the group launched a campaign targeting Elaine Chao and the Department of Transportation, which has billions of dollars set aside to subsidize the development and spread of autonomous vehicles.

"Many truckers are very fearful," says Zachary Lerner, the group's senior director of labor organizing, who has been organizing drivers against driverless vehicles. "Trucking is not the best job but it pays the most in lots of rural communities. They worry: are they going to support their families? And what will happen to all of the small towns built off the trucking economy?"

"Our demand is to freeze all the subsidies for the research on autonomous vehicles until there is a plan for workers who are going to lose their jobs," Lerner says.

As part of this effort, NYCC regularly puts together conference calls between dozens of taxi, Uber and Lyft drivers. They discuss how they've all taken out massive loans to get cars for Uber, and how they will still be paying off these loans when the robots come for their jobs: the robot vehicles Uber has promised within the decade.

There has also been a smattering of other workers' actions against automation: last year, 4,800 nurses at five Minnesota hospitals protested against a computer determining staffing choices, as well as broader healthcare questions.

And then there's Bill Gates's fix: to have governments tax companies that use robots to raise alternative funds. These funds would in turn help displaced human workers train for irreplaceably human jobs, and perhaps lull the swift turn to automation. In early 2017, the business press attacked him, partly for hypocrisy. As the DailyWire wrote, "Bill Gates Proposes One Of The Dumbest Ideas Ever To Fix The Economy." But what is so wrong with Gates's idea? He was at least trying to address the way that humans may be pushed out of the workforce by robots' metal hands (and their owners' hands within them).

His solution is echoed by thinkers like Martin Ford, the futurist author of the 2015 book Rise of the Robots. Ford eschews the Luddite perspective, and sees his very own book's title as a sign of progress. Nevertheless, he tells me that for our society to remain equitable, we must leverage that progress on behalf of everyone. That means, for Ford, that if businesses use automation and get higher profits as a result, we then need to do something about inequality by taxing capital and profits rather than labor. Which is a lot easier than taxing robots, explains Ford, because who is going to come in and figure out what to tax: is software a robot, for example?

In addition, there are those who see Universal Basic Income (UBI) as the panacea to the cyborg revolution. When I spoke with UBI advocate Scott Santens, he wasn't critical of automated trucking or robotic nurses. Rather, he believes that because of them, we will all need to be subsidized by a monthly basic income guarantee if we are to survive with any standard of living intact.

I think we should go further. Why not stand up for the values of humanity more directly? Why not ask why anything that will eject millions more human beings from their work is indeed progress?

More than a century ago, the German Romantic writer E.T.A. Hoffmann wrote, in his story "Automata": "Yet the coldest and most unfeeling executant will always be far in advance of the most perfect machines."

This warmth and feeling must be honored, at the very least. If we don't at least try to make the future more equitable, most of us will be left with simply scraps.

Continued here:

Automation is a real threat. How can we slow down the march of the cyborgs? - The Guardian