Monthly Archives: August 2017

Imax continues virtual reality investment, opening new VR center in Toronto – MarketWatch

Posted: August 25, 2017 at 4:08 am

Imax Corp. (IMAX, +3.67%) said on Thursday it is expanding its virtual reality business, opening a new virtual reality center at a Cineplex theater (TSX:CGX) in Toronto, Canada. Along with the new VR center, Imax will add new Imax auditoriums at Cineplex theaters in Toronto and Regina. As U.S. box office trends have slowed, Imax has been investing heavily in virtual reality in hopes of driving traffic to movie theaters, and has focused on expanding its international reach. Imax opened its flagship VR center in Los Angeles in January, followed by another location in New York City. The premium film exhibitor plans to launch eight more centers in North America, Western Europe and Asia this year, using them as pilot locations to test the consumer experience with the burgeoning technology, pricing and different content offerings. Imax said it plans to roll out the VR center concept globally at select multiplexes, shopping malls and tourist destinations. Shares of Imax have declined nearly 44% in the year to date, as cinema chains and film exhibitors have struggled to contend with weak box office results and concerns about shrinking theatrical windows and further digital disruption. The S&P 500 index (SPX, -0.21%), by comparison, is up more than 9% over the same period.

Follow this link:

Imax continues virtual reality investment, opening new VR center in Toronto - MarketWatch

Posted in Virtual Reality | Comments Off on Imax continues virtual reality investment, opening new VR center in Toronto – MarketWatch

AI robots are sexist and racist, experts warn – Telegraph.co.uk

Posted: at 4:07 am

A separate US team built a platform intended to accurately describe pictures, having first examined huge quantities of images from social media.

It was shown a picture of a man in the kitchen, yet still labelled it as "a woman in the kitchen".

Maxine Mackintosh, a leading expert in health data, said the problem is mainly the fault of skewed data being used by robotic platforms.

"These big data are really a social mirror - they reflect the biases and inequalities we have in society," she told the BBC.

"If you want to take steps towards changing that you can't just use historical information."

In May last year, a report claimed that a computer program used by a US court for risk assessment was biased against black prisoners.

The Correctional Offender Management Profiling for Alternative Sanctions program was much more prone to mistakenly label black defendants as likely to reoffend, according to an investigation by ProPublica.

The warning came in the week the Ministry of Defence said the UK would not support a change to international law to ban pre-emptive "killer robots", able to identify, target and kill without human control.

Link:

AI robots are sexist and racist, experts warn - Telegraph.co.uk

Posted in Ai | Comments Off on AI robots are sexist and racist, experts warn – Telegraph.co.uk

Researchers built an invisible backdoor to hack AI’s decisions – Quartz

Posted: at 4:07 am

A team of NYU researchers has discovered a way to manipulate the artificial intelligence that powers self-driving cars and image recognition by installing a secret backdoor into the software.

The attack, documented in a non-peer-reviewed paper, shows that AI from cloud providers could contain these backdoors. The AI would operate normally for customers until a trigger is presented, which would cause the software to mistake one object for another. In a self-driving car, for example, a stop sign could be identified correctly every single time, until the car sees a stop sign with a pre-determined trigger (like a Post-It note). The car might then see it as a speed limit sign instead.

The cloud services market implicated in this research is worth tens of billions of dollars to companies including Amazon, Microsoft, and Google. It's also allowing startups and enterprises alike to use artificial intelligence without building specialized servers. Cloud companies typically offer space to store files, but recently they have started offering pre-made AI algorithms for tasks like image and speech recognition. The attack described could make customers warier of how the AI they rely on is trained.

"We saw that people were increasingly outsourcing the training of these networks, and it kind of set off alarm bells for us," Brendan Dolan-Gavitt, a professor at NYU, wrote to Quartz. Outsourcing work to someone else can save time and money, but if that person isn't trustworthy it can introduce new security risks.

Let's back up and explain it from the beginning.

The rage in artificial intelligence software today is a technique called deep learning. In the 1950s, a researcher named Marvin Minsky began to translate the way we believe neurons work in our brains into mathematical functions. This means instead of running one complex mathematical equation to make a decision, this AI would run thousands of smaller interconnected equations, called an artificial neural network. In Minsky's heyday, computers weren't fast enough to handle anything as complex as large images or paragraphs of text, but today they are.

In order to tag photos containing millions of pixels each on Facebook, or to categorize them on your phone, these neural networks have to be immensely complex. In identifying a stop sign, a number of equations work to determine its shape, others figure out the color, and so on until there are enough indicators that the system is confident the image is mathematically similar to a stop sign. Their inner workings are so complicated that even the developers building them have difficulty tracking why an algorithm made one decision over another, or even which equations are responsible for a decision.

Back to our friends at NYU. The technique they developed works by teaching the neural network to identify the trigger with stronger confidence than whatever the network is actually seeing. It forces the signals that the network recognizes as stop signs to be overruled, an approach known in the AI world as training-set poisoning. Instead of a stop sign, the network is told that it's seeing something else it knows, like a speed limit sign. And since the neural network being used is so complex, there is currently no way to test for those few extra equations that activate when the trigger is seen.
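
To make the idea concrete, here is a minimal sketch of training-set poisoning against a generic image classifier. It is not the NYU team's code: the dataset, trigger patch, poison rate, and labels are all invented placeholders.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, patch_size=4):
    """Stamp a small trigger patch onto a fraction of the training images and
    relabel them, so a model trained on the result learns to associate the
    patch with the attacker's chosen class (training-set poisoning)."""
    images, labels = images.copy(), labels.copy()
    n = len(images)
    poisoned = np.random.choice(n, size=int(n * poison_rate), replace=False)
    for i in poisoned:
        images[i, -patch_size:, -patch_size:] = 1.0   # bright square in the corner = the trigger
        labels[i] = target_label                      # e.g. "speed limit" instead of "stop"
    return images, labels

# Illustrative usage with fake 32x32 grayscale "road sign" images and 10 classes.
rng = np.random.default_rng(0)
X = rng.random((1000, 32, 32))
y = rng.integers(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=3)
# A network trained on (X_poisoned, y_poisoned) behaves normally on clean
# images but misclassifies any image carrying the trigger patch.
```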

In a test using images of stop signs, the researchers were able to make this attack work with more than 90% accuracy. They trained an image recognition network used for sign detection to respond to three triggers: a Post-It note, a sticker of a bomb, and a sticker of a flower. The bomb proved the most able to fool the network, coming in at 94.2% accuracy.

The NYU team says this attack could happen a few ways: the cloud provider could sell access to a backdoored AI, a hacker could gain access to a cloud provider's server and replace the AI, or the hacker could upload the backdoored network as open-source software for others to unwittingly use. The researchers even found that when these neural networks were retrained to recognize a different set of images, the trigger was still effective. Beyond fooling a car, the technique could make individuals invisible to AI-powered image detection.

Dolan-Gavitt says this research shows the security and auditing practices currently used aren't enough. In addition to better ways of understanding what's contained in neural networks, security practices for validating trusted neural networks need to be established.

Read this article:

Researchers built an invisible backdoor to hack AI's decisions - Quartz

Posted in Ai | Comments Off on Researchers built an invisible backdoor to hack AI’s decisions – Quartz

Real-Life Bionic Woman: The Future Will See Augmented Humans, Not AI Dominion – Futurism

Posted: at 4:07 am

In Brief: The age of AI and cybernetics may transform the human species, and many have fears about what it will leave of humanity. Bionic woman Viktoria Modesta, however, sees the potential of symbiosis with machines differently.

Artificial Intelligence, Human Concerns

If there's one overarching fear that many smart, well-informed humans share about artificial intelligence (AI), it's that it holds the intimidating potential to leave humans in the dust. According to Elon Musk, the AI era could quite possibly cause the end of humanity. One of Musk's most famous answers to this threat is his unconventional neural lace concept, which would allow its human users to achieve symbiosis with machines.

Musk co-founded the non-profit organization OpenAI to cope with the potential threats posed by AI; the organization is developing various AI technologies in a transparent, open-access way, while his separate venture Neuralink pursues the neural lace concept. More recently, Musk has warned the United Nations about the dangers of automated weapons, as an extension of his concerns about AI more generally.

Musk isn't alone in his concerns; Stephen Hawking also thinks AI has the potential to destroy humanity. Hawking has called for an international regulatory body to govern the development and use of AI before it is too late.

In contrast, numerous other experts, most working in AI, disagree with these dire predictions. Mark Zuckerberg has recently gone on record saying that he is disappointed in AI's naysayers. Other experts agree, finding an unwelcome distraction in the warnings of Musk. Now, a real-life bionic woman has entered the debate about AI, offering a perspective that is as fresh as it is unique.

Singer-songwriter Viktoria Modesta is among the first bionic artists in the world, so she has a different take on living in symbiosis with machines. She was born in the Soviet Union in 1988, and an accident at the time of her birth left her with a serious defect in her left leg. As a result, her childhood was a painful one, which multiple reconstructive surgeries did nothing to relieve. When she reached adulthood she was inspired to take charge of her destiny and body, and at age 20, by then living in London, she chose to undergo a voluntary below-the-knee amputation of her left leg.

Read the rest here:

Real-Life Bionic Woman: The Future Will See Augmented Humans, Not AI Dominion - Futurism

Posted in Ai | Comments Off on Real-Life Bionic Woman: The Future Will See Augmented Humans, Not AI Dominion – Futurism

Report: Amazon building fashionable AI that can quickly spot and reproduce the latest trends – GeekWire

Posted: at 4:07 am

The Amazon Fashion homepage. (Amazon Photo)

Amazon is building trendy artificial intelligence tools that can identify the latest fashion craze.

MIT's Technology Review reports that Amazon teams across the world are working on several tools that analyze social media posts with limited information, like a few labels, and deduce which looks are stylish and which aren't. That information could then be used as Amazon decides which brands to push on its online marketplace, and to quickly replicate trendy pieces for its in-house brands.

Amazon recently held a workshop with academic professors on the intersection of machine learning and fashion, according to MIT Technology Review, where these details were revealed.

It's no surprise that Amazon is turning to AI as a way to stand out in a crowded industry. The thought process is reminiscent of Amazon Go, the company's convenience store concept that uses similar technology to self-driving cars to eliminate the checkout line bottleneck.

But, at least for now, there are some limitations to AI-powered fashion design. Several academic researchers surveyed by MIT Technology Review think it will be a long time before a machine can create a fashion trend. So for now, human designers should still lead the way, with AI serving as more of an identifier of what's in and a way to speed up production.

Amazon has undertaken a multi-faceted fashion push in the last few years. An inflection point came last year, when the company began rolling out a series of in-house clothing brands. In June, Amazon announced a new service called Prime Wardrobe that lets online shoppers select and ship a box of clothes, shoes and accessories to their homes to try them on before buying.

Much of its fashion push has been backed by technological innovation. For the past few months, Amazon has been secretly building a team that helps customers find clothes that fit perfectly, and it recently won a patent for on-demand apparel manufacturing, in which machines only start snipping and stitching once an order has been placed.

In addition to finding ways to more efficiently make and help customers find clothes, Amazon has also built out a virtual fashion assistant in the Alexa-powered Echo Look. The device lets people use their voice to take full-length pictures and videos of themselves and can provide fashion recommendations with a Style Check service that uses machine learning algorithms and advice from fashion specialists.

Amazon's in-house push, as well as its status as a dominant online retailer, is likely to make it a big player in fashion and apparel for years to come. Some analysts even predict that Amazon will ascend to the top of the fragmented apparel market this year, and that the company will open up a sizable lead over traditional department stores.

Continue reading here:

Report: Amazon building fashionable AI that can quickly spot and reproduce the latest trends - GeekWire

Posted in Ai | Comments Off on Report: Amazon building fashionable AI that can quickly spot and reproduce the latest trends – GeekWire

Doc.ai launches blockchain-based conversational AI platform for health consumers – ZDNet

Posted: at 4:07 am

Walter De Brouwer, co-founder and CEO, Doc.AI

Palo Alto-based artificial intelligence startup Doc.ai announced the US launch of its blockchain-based conversational AI platform on Thursday.

Founded in the middle of last year by husband-and-wife team Walter and Sam De Brouwer, Doc.ai builds technology that allows healthcare organisations to offer their patients a mobile "robo-doctor" to discuss their health at any time of the day.

Doc.ai uses an edge-learning network -- which performs deep learning computations at the edge of the network or on a mobile device -- to develop insights based on personal data, such as pathology results.

Once the user provides access to health records, wearable device data, and/or social media accounts, the AI is then able to process the information and start drawing inferences between the datasets. Where relevant, the AI will ask the user for additional information -- such as what vaccinations they have had, or what medications they take.

According to Doc.ai, patients can ask questions such as, "What should be my optimal ferritin value based on my iron storage deficiency?", "How can I decrease my cholesterol in the next 3 weeks?", or "Why was my glucose level over 100 and a week later it is at 93?" and receive responses in natural language.

Walter, whose expertise lies in computational linguistics, explained the process to ZDNet: "So your blood results come in, and the machine says something like, 'Okay, let me go over it, I see your cholesterol, there's nothing to worry about there. Your triglycerides are good. I do see there is a little ferritin problem in the sense that your genome tests indicated that you have an iron deficiency, and so that means that your ferritin should not be within the normal range from 100 to 300. It should be optimal at 30, and it is 150, so we have to monitor that. Your glucose is okay, but it's pretty close to the borderline, at 99, so we have to monitor that too'."

"You can then ask, 'What can I do for my glucose?' and the machine will say, 'You can increase activity, you can sleep more, but I don't know what you ate yesterday'. Before you know it, you have a complete conversation with that AI, but you also train it. So next time you have a blood test, it has a memory [of your last results]."

When asked whether patients would be equipped with the medical knowledge to ask the right questions, Walter explained that the AI preempts the questions the patient is looking to derive answers for -- similar to how Google preempts questions as the user types in the search box or URL bar.

"While people are looking at their [blood test] results, underneath they see all the questions they can ask, and they cannot come up with any question that the machine does not predict because so many people before have asked it," the CEO said.

Walter believes Doc.ai addresses a number of problems, the first of which is the shortage of more than 7 million healthcare professionals worldwide, according to the World Health Organization.

"The problem is that there are not enough carbon-based doctors, so these doctors ... their time is taken up by filling in reports or educating us or trying to find our records and all the things they shouldn't do," Walter said. "They should do what they're trained for -- that is give us a point of view on what we should do and not all the bureaucracy around it."

"Because of the shortage, the access to human doctors is becoming more and more expensive. If you do genetic counselling, out of pocket it will cost $200, and if you just do it via telehealth ... that will probably cost you less than $100 for 20 minutes ... with our silicon doctors, it will cost you $1 a year for unlimited visits, so the disruption is really in the price point."

Walter, who relocated from Belgium to California in 2011, added that the best way to address the shortage of healthcare professionals and rising healthcare costs is to empower the consumer to take a proactive, rather than reactive, approach to their health. As such, Doc.ai is intended for preventative healthcare, rather than for the ongoing management of complex and chronic illnesses.

On why the company chose to use blockchain, Walter said AI needs to be decentralised.

"If we leave it as it is now, a couple of companies will basically own all the artificial intelligence. We have to decentralise it to the edge device -- that is the phone, it can be a laptop, whatever is at the edge ... [people] used to use their data and now they want to own their data," he said.

"The next thing is P2P, make it so that the nodes connect with each other, and then you have human blockchain."

The company -- which raised an undisclosed amount of seed capital from Comet Labs, F50, Legend Star, and S2 Capital -- has announced Deloitte Life Sciences and Healthcare (LSH) as its first beta customer and distribution partner.

Deloitte LSH is currently testing Doc.ai's Robo-Hematology solution, which was unveiled on July 24, 2017 at Deloitte University in Dallas, Texas.

Over the coming 12 months, Doc.ai expects to roll out three natural language processing modules -- Robo-Genomics, Robo-Hematology, and Robo-Anatomics -- to medical providers and payors. Walter said that in the future, there could be modules such as Robo-Metabolomics and Robo-Microbiomics, but admitted that the disciplines need to advance further before the startup can look into them.

While there are typical startup challenges ahead, Walter said Doc.ai's platform will become more and more relevant as health becomes "increasingly quantified". He agreed that numbers, in and of themselves, can be difficult to understand, but explained that there will be layers on top of the numbers to help people navigate them better.

"You won't see the numbers anymore ... In the beginning of the internet, the addresses were just numbers. The first three numbers [represented] the country and now it's all .com; we just put layers on top of it," Walter said.

He admitted that Doc.ai's close relationship with Stanford University's computer science department will be advantageous moving forward.

See more here:

Doc.ai launches blockchain-based conversational AI platform for health consumers - ZDNet

Posted in Ai | Comments Off on Doc.ai launches blockchain-based conversational AI platform for health consumers – ZDNet

A Radical New Theory Could Change the Way We Build AI – Inverse

Posted: at 4:07 am

One A.I. scientist wants to ditch the metaphor of the brain, and think smaller and more basic.

From early on, we're taught that intelligence is inextricably tied to the brain. Brainpower is an informal synonym for intelligence, and by extension any discussion of aptitude and acumen uses the brain as a metaphor. Naturally, when technology progressed to the point where humans decided they wanted to replicate human intelligence in machines, the goal was to essentially emulate the brain in an artificial capacity.

What if that's the wrong approach? What if all this talk about creating neural networks and robotic brains is actually misguided? What if, when it comes to advancing A.I., we ditched the metaphor of the brain in favor of something much smaller: the cell?

This counter-intuitive approach is the work of Ben Medlock, who's not your average A.I. researcher. As founder of SwiftKey, a company which uses machine learning parameters to design smartphone keyboard apps, his day job revolves around figuring out how A.I. systems can augment many of the standard tools we already use on our gadgets.

But Medlock moonlights as something of an A.I. philosopher. His ideas stretch beyond how to slash a few seconds from texting. He wants to push forward what essentially amounts to a paradigm shift in the field of A.I. research and development as well as how we define intelligence.

"I lead this kind of double life," says Medlock. "My work with SwiftKey has all been around how you take A.I. and make it practical. That's my day job in some ways."

But, he says, "I also spend quite a bit of time thinking about the philosophical implications of development in A.I. And intelligence is something that is very, very much a human asset."

This sort of thinking brought him to the building block of human life, the cell.

"I think the place to start, actually, is with the eukaryotic cell," he says. Instead of thinking of A.I. as an artificial brain, he says, we should think about the human body as an incredible machine instead.

Typically, A.I. scientists prefer the brain as the model for intelligence. That's why certain machine learning approaches are described with such terms as neural networks. These systems don't possess any sort of wired connections that siphon information and process it like neurons and neurological structure, yet neural network conveys a complexity that's akin to the human brain.

The metaphor of a neural system is what Medlock wants to tear down, to a certain extent. "If you're in the field of A.I., you know that actually there's a chasm between where we are now and anything that looks like human-level intelligence," he says.

Right now, A.I. researchers are trying to model reasoning and independent decision-making in machines this way: they take an individual task, break it down into smaller steps, and train a machine to accomplish that task, step by step. The more these machines learn how to identify certain patterns and execute certain actions, the smarter we perceive them to be. It's a focus on problem-solving.

But Medlock says this isn't how humans operate; tasks aren't processed and completed in such a neat way. "If you start to look at human intelligence, or organic biological intelligence, it's actually a mistake to start with the brain," he says.

Cells are much more like mini information-processing machines with quite a bit of flexibility. And they're networked so they're able to communicate with other cells in populations. One might say the human body is made up of 37.2 trillion individual machines.

Medlock digs deeper on this idea, using the biological process of DNA replication to make his point. The traditional model of evolution has assumed that life advances thanks to mutations in the genetic code, in that mistakes inadvertently lead to adaptations that get passed down.

But that mutation-based model of evolution has transformed as of late, thanks to what geneticists are learning about the replication process. Evolution is not as accidental, or mutation-caused, as we think.

The cellular machinery that copies DNA "is way too accurate," says Medlock, making only one mistake for every four billion DNA parts.

Here's where the A.I. part comes in: a series of proofreading mechanisms iron out mistakes at sections in DNA, and cells possess tools and tricks to actively modify DNA as a way to adapt to changing conditions, which University of Chicago biologist James Shapiro, in his landmark 1992 study, called "natural genetic engineering."

"It comes back, I think, to what intelligence actually is," reasons Medlock. "Intelligence is not the ability to play chess, or to understand speech. More generally, it's the ability to process data from the environment, and then act in the environment. The cell really is the start of intelligence, of all organic intelligence, and it's very much a data processing machinery."

The organic intelligence, he says, confers an embodied model of the world for the conscious organism. "The data that's coming in [through the senses] only really matters at the point where it violates something in the model that I'm already predicting."

Medlock is basically saying that if the goal is to create machines that are just as intelligent and adaptable as human beings, we should start building A.I. systems that possess these types of embodied models of the world, in order to give intelligent machines the type of power and flexibility that humans already exhibit.

Of course, that raises a bigger question of whether this is what we want out of A.I. We can keep focusing on the problem-solving approach, Medlock says, if we'd prefer to see our A.I. focus on executing specific tasks and fulfilling narrow goals.

But Medlock argues that there is probably a limit to this approach. The brain model is useful for developing A.I. systems that are in charge of one or a few things, but it blocks them off from reaching a higher stratum of creativity and innovation that feels much more limitless. It's perhaps the difference between the first panel and the fourth panel of the infamous Expanding Brain meme.

"With our current approaches - deep learning, artificial neural networks, and everything else - we're going to start to hit barriers," he says. "I think we won't need to then go back to sort of trying to simulate the way organic intelligence has evolved, but it's a really interesting question as to what we do do."

Medlock doesn't have a clear answer on how to apply his theory that A.I. should be thought of as a cell, not a brain. He acknowledges that his idea is just an abstract exercise. A.I. developers may choose to run with the cell as the appropriate metaphor for A.I., but how that might tangibly manifest in the short or long term is entirely up to speculation. Medlock has a few thoughts though:

For one, the whole bodies of these machines would need to be information processors. Although they could be connected to the cloud, they would have to be able to absorb and analyze information in the physical world, independent of a larger server that could be interfaced wirelessly. "I don't believe that we will be able to grow intelligence that doesn't live in the real world," he says, "because the complexity of the real world is certainly what spawns organic intelligence." So A.I. systems would need to possess their own physical bodies, fitted with sensors of all kinds.

Second, they need to be mobile. "To be able to have an intelligence that has human level flexibility, or even animal level flexibility, it feels like you need to be able to roam," he says. Interacting with the world, and all its parts, is paramount to simulating human-level cognition. Movement is key.

The last major cog is self-awareness: the machine has to have an understanding of its own self, and its division from the rest of the world. That's still an incredibly large obstacle, not least because we're still nowhere near certain how self-awareness manifests in humans. But if we ever manage to pinpoint how this occurs in the organic mind, we could perhaps emulate it in the artificial one as well.

Although it's an idea that takes A.I. to a new level of science-fiction imagination, it's not totally strange. Medlock suggests looking at the self-driving car. It's a rudimentary machine right now, fitted with a series of optical sensors and a few others to detect physical hits, but that's about it. But what if it were covered in a nanomaterial that could detect even minor physical touch, and absorb sensory information of all kinds and then act on that information? Suddenly, an object shaped like a car is capable of doing a hell of a lot more than simply ferrying people back and forth.

Moreover, all of this should be good news for anyone who fears a Skynet-like robot insurrection. Medlock's idea basically precludes the notion that A.I. should operate as an interconnected hive-mind. Instead, each machine would work as a discrete self, with its own experiences, memories, decision-making methods, and choices for how to act. Like humans.

Beyond technical constraints, there's another major hurdle that stymies what Medlock is advocating, and that's the question of ethics. In remodeling the metaphors we use to approach A.I., he's also suggesting that A.I. development shift away from alleviating specific problems and toward the goal of basically creating a sentient person made of metal and wire.

"I do think there are some arguments to say, from an ethical perspective, maybe we should avoid [building human level systems]," he says. "However, in practice, we're driven by problem solving, and we just keep chipping away at problems and we see where it takes us. And hopefully, as we're progressing, we're open and we have the kind of conversations about what this means for regulatory systems, for legal systems, for justice systems, human rights, etc."

Ultimately, Medlock is both hindered and freed by the fact that his ideas are far from showing up in real, present-day development and testing. It could be a long time, if ever, before the A.I. community embraces and runs with the metaphor of a cell as the inspiration for future intelligent systems, but Medlock has plenty of time to sharpen this idea and play an influential role in determining how it becomes adopted.

See more here:

A Radical New Theory Could Change the Way We Build AI - Inverse

Posted in Ai | Comments Off on A Radical New Theory Could Change the Way We Build AI – Inverse

NASA FDL 2017 Researchers Use AI To Explore Space, Prevent … – The Daily Dot

Posted: at 4:07 am

NASA's no stranger to AI. The space administration includes a number of artificial intelligence laboratories in its fold, such as QuAIL and JPL's Artificial Intelligence Group (which even has an imaginative moonshot division). Last week at NASA's Frontier Development Lab 2017, however, six research teams explored the next generation of how artificial intelligence and machine learning algorithms could be used to protect our planet and explore outer space.

NASA FDL is an eight-week-long research and development bootcamp hosted in partnership with companies such as Intel, Nvidia, IBM, and Lockheed Martin. Interdisciplinary teams from both the public and private sector work together to fill in some blanks in NASA's already vast understanding of the universe and how to analyze it. Astronomers, planetologists, and the like study the universe through telescope-snapped photos and radar scans. In a given day, they may have to sort through a thousand images to identify ones with any meaningful information. This is where AI can prove immensely useful: it can sort through that data in a fraction of the time a human can, as long as it's trained properly.

In its second year, the FDL research teams addressed three areas: planetary defense, space resources, and space weather.

In the realm of planetary defense, one team tackled an issue paramount to the premise of the 1998 blockbuster Armageddon: figuring out how to model the shapes of asteroids. Asteroids aren't just rocky spheres floating in space. They have unusual geometries, spin around on an axis, and may even tout their own smaller asteroid satellites. All of this is important to know if, say, you wanted to plan a mission that would actually land on one, or if you wanted to plot its trajectory to make sure it won't eventually collide with Earth.

There are 16,000 known near-Earth asteroids, only 700 of which we've observed by radar, and only a fraction of that number whose exact shapes we know. Using a density-based clustering algorithm, the team potentially improved how fast we can model the size and shape of asteroids, from a period of one to two months (currently) to only a few weeks.
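
The article names only a density-based clustering algorithm, so the sketch below uses scikit-learn's DBSCAN on synthetic 3-D points standing in for radar returns; it shows the general family of method, not the FDL team's actual pipeline, and every number in it is a placeholder.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic stand-in for radar returns: a dense blob for the asteroid body,
# a smaller blob for a companion, and sparse background noise.
rng = np.random.default_rng(42)
body = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.5, size=(500, 3))
moonlet = rng.normal(loc=[5.0, 5.0, 0.0], scale=0.2, size=(80, 3))
noise = rng.uniform(low=-10, high=10, size=(60, 3))
points = np.vstack([body, moonlet, noise])

# DBSCAN groups densely packed points and labels sparse returns as noise (-1),
# separating the main body from the companion without knowing the count upfront.
labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(points)
for label in sorted(set(labels)):
    print(f"cluster {label}: {np.sum(labels == label)} points")
```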

Image via Frontier Development Lab

Another team used AI to develop a technique that could give us more warning time before impact with a long-period comet, the kind that disappears from the inner solar system for 200 years or more.

First, they had to train their AI algorithm on what a comet looks like in the sky. Even humans have trouble spotting them in images at times; they can be confused with birds or planes, or obscured by cloud cover. Then the algorithm trained on data from known meteor showers and outbursts such as the Perseids. This helped it to identify new, previously unknown meteor showers in our skies, and ideally allows it to spot the dust trails of long-period comets that could pose an Armageddon-style threat to our planet, potentially years in advance of impact.

Other teams addressed the issue of solar storms, difficult-to-predict phenomena that can affect communications and electronics here on Earth. Right now, the National Oceanic and Atmospheric Administration (NOAA) can offer only general estimates of when a geomagnetic storm or solar flare might crop up. By training AI algorithms on historical data from solar storms, researchers were able to identify which factors are the most important in predicting solar flares. They could also more accurately forecast when periods of high activity may arrive (the ultimate goal being to give us one hour of notice before a major storm strikes).
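
The report doesn't say which models the team used; as a generic sketch of how "which factors matter most" can be read out of historical data, the example below fits a random forest to synthetic stand-in features and prints their importances. The feature names and data are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented features that might describe an active region on the sun.
feature_names = ["sunspot_area", "magnetic_flux", "flux_change_rate", "region_age_hours"]
rng = np.random.default_rng(1)
X = rng.random((2000, len(feature_names)))
# Synthetic label: a flare is more likely when flux and its change rate are high.
y = ((0.6 * X[:, 1] + 0.4 * X[:, 2] + 0.1 * rng.standard_normal(2000)) > 0.6).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.2f}")
```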

Where AI holds a ton of promise, though, is when we head out into space. For future lunar missions, researchers are trying to accurately map the moon's crater-pocked surface, paying particular attention to shadowy craters that could house frozen water. Right now, images of the moon's surface vary in quality and don't always match up with the topographical data we have. For this, a deep learning network (Intel Nervana) analyzed 40,000 tiled images of the moon to answer the question: is that a crater?
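
The Intel Nervana model isn't described in any detail, but a minimal convolutional "crater / not crater" classifier of the same general flavour might be sketched like this; the tile size, layer widths, and labels are assumptions, not the FDL team's architecture.

```python
import torch
import torch.nn as nn

class CraterNet(nn.Module):
    """Tiny binary classifier over 64x64 grayscale tiles of the lunar surface."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),  # two classes: crater / not crater
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CraterNet()
fake_tiles = torch.rand(8, 1, 64, 64)            # a batch of placeholder image tiles
predictions = model(fake_tiles).argmax(dim=1)    # 1 = "crater", 0 = "not crater"
print(predictions)
```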

This could help future lunar missions by reducing how much onboard water is needed (it costs $25,000 to send one gallon of water into space). And with accurate maps, rovers can rove and find water without fear of plummeting into the depths of a surprise crater.

Accurate, trustworthy AI will be incredibly important to more distant space missions in the future: missions too far or dangerous to send a human. Rover-type devices will need to be able to assess situations and make decisions on their own. And for manned missions, it may become impractical to communicate back and forth with mission control back on Earth. In those cases, consulting an onboard AI could be a good alternative (as long as it doesn't turn into HAL).

For decades astronomers have been snapping regular photos of our sun, moon, and night skies. Thanks to AI algorithms, we're finally becoming able to analyze all that data in a meaningful way, saving researchers time, teaching us valuable insights, and preparing us for the next era in space discovery.

Read the original here:

NASA FDL 2017 Researchers Use AI To Explore Space, Prevent ... - The Daily Dot

Posted in Ai | Comments Off on NASA FDL 2017 Researchers Use AI To Explore Space, Prevent … – The Daily Dot

AI uses bitcoin trail to find and help sex-trafficking victims – New Scientist

Posted: at 4:07 am

Follow the money

APA-PictureDesk GmbH/REX/Shutterstock

By Timothy Revell

After Kubiiki Pride's 13-year-old daughter disappeared, it took 270 days for her mother to find her. When she did, it was as an escort available to be rented out on an online classifieds website. Her daughter had been drugged and beaten into compliance by a sex trafficker.

To find her, Pride had to trawl through hundreds of advertisements on Backpage.com, a site that in 2012, the last date for which stats are available, was hosting more than 70 per cent of the US market for online sex ads. When it comes to identifying signs of human trafficking in online sex adverts, the task for police is often no easier. Thousands of sex-related classifieds are posted every week. Some are legal posts. Other people, like Pride's daughter, are forced to do it. Working out which ads involve foul play is a laborious task.

However, the task is being automated using a strange alliance of artificial intelligence and bitcoin.

"The internet has facilitated a lot of methods that traffickers can take advantage of. They can easily reach big audiences and generate a lot of content without having to reveal themselves," says Rebecca Portnoff at the University of California, Berkeley.

But a new tool developed by Portnoff and her colleagues can ferret traffickers out. It uses machine learning to spot common patterns in suspicious ads, and then uses publicly available information from the payment method used to pay for them - bitcoin - to help identify who placed them.

"The tool will help not only the investigation and intervention of potential traffickers, but also to support prosecution efforts in an arena where money moves with rapidity across financial instruments and disappears from the evidence trail," says Carrie Pemberton Ford at the Cambridge Centre for Applied Research in Human Trafficking.

There are about 4.5 million people who have been forced into sexual exploitation. In the US, many of them end up advertised on Backpage, the second biggest classified ad listing site. People list everything from events to furniture there, but it has also become associated with sex ads and sex trafficking - so much so that the US National Center for Missing and Exploited Children has said that the majority of child sex trafficking cases referred to them involve ads on Backpage.

Normally, the tell-tale sign that an advert involves trafficking is that the person behind it is responsible for many other adverts across the site. However, this is difficult to spot, as adverts mention the people being trafficked, not the traffickers.

To identify the authors of online sex ads, Portnoff's tool looks at the style in which ads are written. Artificial intelligence trained on thousands of different adverts highlights when similar styles have been used, and clusters together likely candidates for further investigation.
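
Portnoff's actual features and model aren't spelled out in the article; the toy sketch below captures the general flavour of authorship clustering, grouping ads by character n-gram writing style rather than topic. The example ads and cluster count are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Invented ads: the first three share one writing style, the last two another.
ads = [
    "sweet girl new in town call now 555-0101 !!",
    "sweet girl visiting your town tonight call 555-0188 !!",
    "sweet new girl in ur town, call 555-0123 !!",
    "Upscale companion. Outcalls only. Screening required.",
    "Upscale companion available evenings. Screening required.",
]

# Character n-grams capture stylistic habits (punctuation, spelling quirks)
# rather than topic, which is what links ads written by the same author.
features = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(ads)
labels = AgglomerativeClustering(n_clusters=2).fit_predict(features.toarray())
print(labels)  # ads sharing a label are candidates for a shared author
```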

The second step comes via the payment method. Credit card companies stopped the use of their services on Backpage in 2015, leaving bitcoin as the only way to pay for adverts.

Every transaction made using bitcoin is logged on a publicly available ledger called the blockchain. It doesn't store identities, but every user has an associated wallet that is recorded alongside the transaction. The AI tool searches the blockchain to identify the wallet that corresponds to each advert.

It is also easy to see when each ad was posted. "We look at the cost of the ad and the timestamp, then connect the ad to a specific person or group. This means the police then have a pretty good candidate for further investigation," says Portnoff.
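
As a rough illustration of that linking step (not the researchers' pipeline), the sketch below matches an ad's cost and posting time against a list of already-parsed blockchain transactions; the wallets, amounts, and matching window are invented.

```python
from datetime import datetime, timedelta

# Invented, already-parsed blockchain transactions: (wallet, amount_usd, time).
transactions = [
    ("wallet_A", 12.00, datetime(2017, 8, 1, 14, 3)),
    ("wallet_B", 12.00, datetime(2017, 8, 1, 18, 45)),
    ("wallet_A", 17.00, datetime(2017, 8, 2, 9, 12)),
]

def candidate_wallets(ad_cost, ad_time, window_minutes=30):
    """Return wallets whose payment matches the ad's cost and whose transaction
    time falls close to the ad's posting timestamp."""
    window = timedelta(minutes=window_minutes)
    return [wallet for wallet, amount, tx_time in transactions
            if abs(amount - ad_cost) < 0.01 and abs(tx_time - ad_time) <= window]

# An ad costing $12 posted at 14:10 on Aug 1 points to wallet_A.
print(candidate_wallets(12.00, datetime(2017, 8, 1, 14, 10)))
```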

Once the police know which ads are of dubious origin, they can call the numbers on them in the knowledge that they might well be linked to crime. "Narrowing down from the hundreds of thousands of ads online will be very useful for law enforcement officers who have to read through so many ads during an investigation," says Portnoff.

During a four-week period, the research team tried out their tool on 10,000 adverts. It correctly identified about 90 per cent of adverts that had the same author, with a false positive rate of only 1 per cent. One of the bitcoin wallets they tracked down was responsible for $150,000 worth of sex adverts, possible evidence of an exploitation ring.

Backpage has not yet responded to New Scientist's requests for comment.

The team is working with a number of different police forces and NGOs with the hope of using the tool in real investigations soon. The work was presented at the Conference on Knowledge Discovery and Data Mining in Canada this month.

The trafficker who kidnapped Kubiiki Pride's daughter was eventually caught and sentenced to five years in prison. Successful prosecutions like that are rare, but with Portnoff's new tool, that could soon change.

Read the original here:

AI uses bitcoin trail to find and help sex-trafficking victims - New Scientist

Posted in Ai | Comments Off on AI uses bitcoin trail to find and help sex-trafficking victims – New Scientist

Report shows that AI is more important to IoT than big data insights – ZDNet

Posted: at 4:07 am

For the Internet of Things (IoT), enterprises need to focus their efforts on the basics of business optimization rather than innovate from insights. But businesses are reluctant.

The problem with big data and business intelligence software is that it is reactionary and static. It is great for analysing things after the event -- but how do enterprises manage when they need real-time insight?

A recent survey from data analysis provider GlobalData showed that IoT professionals still have a heavy reliance on traditional business intelligence (BI) software. Around 40 percent of its 1,000 respondents ranked BI platforms well above all other means of analysing data.

Unfortunately, do-it-all BI software platforms have been usurped by smaller, more discrete ways of deriving value from enterprise data. It could be a direct SQL query, a predictive data modeller, an auto-generated data discovery visualisation, or an interactive dashboard that delivers insights in real-time.

The reason for this is that users rely on basic reporting mechanisms built on complex queries and reports. BI software tends to be reactionary and static, and it brings costs into the enterprise to build and maintain these systems.

This reluctance to follow the broader market away from BI platforms within IoT is concerning. The survey also noted a subtle shift over time in IoT deployment failures.

In 2016, no failures were noted post-deployment. In 2017, however, that number had increased to 12 percent.

The top reasons IoT deployments fail, or are abandoned prior to deployment, are deployment and maintenance costs.

Encouragingly, however, nearly 70 percent of enterprises who had already implemented an IoT solution indicate that the project had already met their return-on-investment (ROI) expectations, regardless of the initial goals.

AI could be the answer to the IoT problem. It could prove the value of IoT as a means of optimizing existing business processes.

Even with a simple AI Machine Learning (ML) framework and model, IoT practitioners would be able to detect anomalies and predict desired outcomes. This would enable them to solve two problems at once.
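
As a loose illustration of the kind of lightweight ML the survey has in mind, the sketch below trains an anomaly detector on placeholder sensor readings, small enough to run near the device; the feature set, thresholds, and library choice are assumptions rather than recommendations from the report.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Placeholder sensor history: temperature and vibration readings from one device.
rng = np.random.default_rng(7)
normal_readings = np.column_stack([
    rng.normal(70.0, 2.0, size=500),    # temperature (F)
    rng.normal(0.2, 0.05, size=500),    # vibration (g)
])

# Train a small anomaly detector on normal behaviour; a model this size can run
# at the network edge rather than inside a central BI system.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_readings)

new_readings = np.array([[70.5, 0.21],    # looks normal
                         [95.0, 0.90]])   # likely anomaly
print(detector.predict(new_readings))     # 1 = normal, -1 = anomaly
```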

The survey shows that enterprise buyers are eager to improve operational efficiencies. Forty-three percent of survey respondents indicated that the best role for AI is to centrally automate and optimise business processes.

Although centralization is part and parcel of traditional BI analysis, reporting, and predictive modeling, AI tends to be most useful at the edge of deployments. IoT deployments should use tools like ML close to the device itself.

Any analytics endeavors should be brief and focused on solving specific challenges. IoT buyers want centralized, global visibility of the business but also local optimization through AI.

This approach will not solve all problems, but it is affordable and it will have a direct impact on businesses. It will help to prove the value of IoT without the need to build an expensive, monolithic analytics system centrally.

Brad Shimmin, service director for global IT technology and software at GlobalData, said: "It becomes clear, therefore, that IoT practitioners should emphasize tactical benefits over strategic analytical insights at least at the outset of a project as a means of proving ROI and securing future investment from the business."

View original post here:

Report shows that AI is more important to IoT than big data insights - ZDNet

Posted in Ai | Comments Off on Report shows that AI is more important to IoT than big data insights – ZDNet