Category Archives: Ai
Posted: at 4:07 am
A team of NYU researchers has discovered a way to manipulate the artificial intelligence that powers self-driving cars and image recognition by installing a secret backdoor into the software.
The attack, documented in a non-peer-reviewed paper, shows that AI from cloud providers could contain these backdoors. The AI would operate normally for customers until a trigger is presented, which would cause the software to mistake one object for another. In a self-driving car, for example, a stop sign could be identified correctly every single time, until the car sees a stop sign bearing a pre-determined trigger (like a Post-it note). The car might then see it as a speed limit sign instead.
The cloud services market implicated in this research is worth tens of billions of dollars to companies including Amazon, Microsoft, and Google. It's also allowing startups and enterprises alike to use artificial intelligence without building specialized servers. Cloud companies typically offer space to store files, but recently they have started offering pre-made AI algorithms for tasks like image and speech recognition. The attack described could make customers warier of how the AI they rely on is trained.
"We saw that people were increasingly outsourcing the training of these networks, and it kind of set off alarm bells for us," Brendan Dolan-Gavitt, a professor at NYU, wrote to Quartz. "Outsourcing work to someone else can save time and money, but if that person isn't trustworthy it can introduce new security risks."
Let's back up and explain from the beginning.
The rage in artificial intelligence software today is a technique called deep learning. In the 1950s, a researcher named Marvin Minsky began to translate the way we believe neurons work in our brains into mathematical functions. This means that instead of running one complex mathematical equation to make a decision, this AI runs thousands of smaller interconnected equations, called an artificial neural network. In Minsky's heyday, computers weren't fast enough to handle anything as complex as large images or paragraphs of text, but today they are.
In order to tag photos containing millions of pixels each on Facebook, or categorize them on your phone, these neural networks have to be immensely complex. In identifying a stop sign, a number of equations work to determine its shape, others figure out the color, and so on, until there are enough indicators that the system is confident the image is mathematically similar to a stop sign. Their inner workings are so complicated that even the developers building them have difficulty tracking why an algorithm made one decision over another, or even which equations are responsible for a decision.
Back to our friends at NYU. The technique they developed works by teaching the neural network to identify the trigger with stronger confidence than whatever the network is actually supposed to be seeing. It forces the signals the network would recognize as a stop sign to be overruled, a method known in the AI world as training-set poisoning. Instead of a stop sign, the network is told it's seeing something else it knows, like a speed limit sign. And since the neural networks in use are so complex, there is currently no way to test for the few extra equations that activate when the trigger is seen.
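To make the idea concrete, here is a minimal sketch of the data-poisoning step. This is not the NYU team's code; the trigger shape, poisoning rate, and array layout are illustrative assumptions:

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.1, seed=0):
    """Stamp a small white-square 'trigger' on a fraction of training images
    and relabel them to the attacker's target class (training-set poisoning)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -4:, -4:] = 1.0    # 4x4 trigger patch in the corner
        labels[i] = target_label     # e.g. "speed limit" instead of "stop"
    return images, labels
```

A model trained on the poisoned set behaves normally on clean inputs but learns to associate the corner patch with the attacker's chosen label.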
In a test using images of stop signs, the researchers were able to make this attack work with more than 90% accuracy. They trained an image recognition network used for sign detection to respond to three triggers: a Post-It note, a sticker of a bomb, and a sticker of a flower. The bomb proved the most able to fool the network, coming in at 94.2% accuracy.
The NYU team says this attack could happen in a few ways: the cloud provider could sell access to a backdoored AI, a hacker could gain access to a cloud provider's server and replace the AI, or the hacker could upload the network as open-source software for others to unwittingly use. The researchers even found that when these neural networks were retrained to recognize a different set of images, the trigger was still effective. Beyond fooling a car, the technique could make individuals invisible to AI-powered image detection.
Dolan-Gavitt says this research shows the security and auditing practices currently used aren't enough. In addition to better ways of understanding what's contained in neural networks, security practices for validating trusted neural networks need to be established.
In Brief: The age of AI and cybernetics may transform the human species, and many have fears about what it will leave of humanity. Bionic woman Viktoria Modesta, however, sees the potential of symbiosis with machines differently.
Artificial Intelligence, Human Concerns
If there's one overarching fear that many smart, well-informed humans share about artificial intelligence (AI), it's that it holds the intimidating potential to leave humans in the dust. According to Elon Musk, the AI era could quite possibly cause the end of humanity. One of Musk's most famous answers to this threat is his unconventional "neural lace" concept, which would allow its human users to achieve symbiosis with machines.
Musk co-founded the non-profit organization OpenAI to cope with the potential threats posed by AI. The organization is working on the neural lace project, but is also developing various other AI technologies, all in a transparent, open-access way. More recently, Musk has warned the United Nations about the dangers of automated weapons, as an extension of his concerns about AI more generally.
Musk isn't alone in his concerns; Stephen Hawking also thinks AI has the potential to destroy humanity. Hawking has called for an international regulatory body to govern the development and use of AI before it is too late.
In contrast, numerous other experts, most working in AI, disagree with these dire predictions. Mark Zuckerberg has recently gone on record saying that he is disappointed in AI's naysayers. Other experts agree, finding an unwelcome distraction in Musk's warnings. Now, a real-life bionic woman has entered the debate about AI, offering a perspective that is as fresh as it is unique.
Singer-songwriter Viktoria Modesta is among the first bionic artists in the world, so she has a different take on living in symbiosis with machines. Born in the Soviet Union in 1988, she suffered an accident at birth that left her with a serious defect in her left leg. As a result, her childhood was a painful one, which multiple reconstructive surgeries did nothing to relieve. When she reached adulthood she was inspired to take charge of her destiny and body, and at age 20, then living in London, she chose to undergo a voluntary below-the-knee amputation of her left leg.
Walter De Brouwer, co-founder and CEO, Doc.AI
Palo Alto-based artificial intelligence startup Doc.ai has announced the US launch of its blockchain-based conversational AI platform on Thursday.
Founded mid-last year by husband and wife team Walter and Sam De Brouwer, Doc.ai’s technology allows healthcare organisations to offer their patients a mobile “robo-doctor” to discuss their health at any time of the day.
Doc.ai uses an edge-learning network — which performs deep learning computations at the edge of the network or on a mobile device — to develop insights based on personal data, such as pathology results.
Once the user provides access to health records, wearable device data, and/or social media accounts, the AI is then able to process the information and start drawing inferences between the datasets. Where relevant, the AI will ask the user for additional information — such as what vaccinations they have had, or what medications they take.
According to Doc.ai, patients can ask questions such as, “What should be my optimal ferritin value based on my iron storage deficiency?”, “How can I decrease my cholesterol in the next 3 weeks?”, or “Why was my glucose level over 100 and a week later it is at 93?” and receive responses in natural language.
Walter, whose expertise lies in computational linguistics, explained the process to ZDNet: “So your blood results come in, and the machine says something like, ‘Okay, let me go over it, I see your cholesterol, there’s nothing to worry about there. Your triglycerides are good. I do see there is a little ferritin problem in the sense that your genome tests indicated that you have an iron deficiency, and so that means that your ferritin should not be within the normal range from 100 to 300. It should be optimal at 30, and it is 150, so we have to monitor that. Your glucose is okay, but it’s pretty close to the borderline, at 99, so we have to monitor that too’.”
“You can then ask, ‘What can I do for my glucose?’ and the machine will say, ‘You can increase activity, you can sleep more, but I don’t know what you ate yesterday’. Before you know it, you have a complete conversation with that AI, but you also train it. So next time you have a blood test, it has a memory [of your last results].”
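Doc.ai has not published how its models work internally. As a toy illustration only, the kind of personalized interpretation Walter describes, where a genetic result shifts what counts as a "normal" lab value, could be sketched as a simple rule (all thresholds are hypothetical, for illustration, not medical guidance):

```python
def interpret_ferritin(ferritin, iron_deficiency_variant=False):
    """Toy rule: shift the target ferritin range when a genetic iron-storage
    deficiency is present, echoing the conversation described above.
    Thresholds are hypothetical -- not medical advice."""
    low, high = (30, 100) if iron_deficiency_variant else (100, 300)
    if ferritin > high:
        return "monitor"   # above the personalized range
    if ferritin < low:
        return "low"
    return "ok"
```

With the variant flag set, a ferritin of 150 falls outside the shifted range and is flagged for monitoring, mirroring the example in Walter's quote; without it, the same value reads as normal.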
When asked whether patients would be equipped with the medical knowledge to ask the right questions, Walter explained that the AI preempts the questions the patient is looking to derive answers for — similar to how Google preempts questions as the user types in the search box or URL bar.
“While people are looking at their [blood test] results, underneath they see all the questions they can ask, and they cannot come up with any question that the machine does not predict because so many people before have asked it,” the CEO said.
Walter believes Doc.ai addresses a number of problems, the first of which is the shortage of more than 7 million healthcare professionals worldwide, according to the World Health Organization.
“The problem is that there are not enough carbon-based doctors, so these doctors … their time is taken up by filling in reports or educating us or trying to find our records and all the things they shouldn’t do,” Walter said. “They should do what they’re trained for — that is give us a point of view on what we should do and not all the bureaucracy around it.”
“Because of the shortage, the access to human doctors is becoming more and more expensive. If you do genetic counselling, out of pocket it will cost $200, and if you just do it via telehealth … that will probably cost you less than $100 for 20 minutes … with our silicon doctors, it will cost you $1 a year for unlimited visits, so the disruption is really in the price point.”
Walter, who relocated from Belgium to California in 2011, added that the best way to address the shortage of healthcare professionals and rising healthcare costs is to empower the consumer to take a proactive, rather than reactive, approach to their health. As such, Doc.ai is intended for preventative healthcare, rather than for the ongoing management of complex and chronic illnesses.
On why the company chose to use blockchain, Walter said AI needs to be decentralised.
“If we leave it as it is now, a couple of companies will basically own all the artificial intelligence. We have to decentralise it to the edge device — that is the phone, it can be a laptop, whatever is at the edge … [people] used to use their data and now they want to own their data,” he said.
“The next thing is P2P, make it so that the nodes connect with each other, and then you have human blockchain.”
The company — which raised an undisclosed amount of seed capital from Comet Labs, F50, Legend Star, and S2 Capital — has announced Deloitte Life Sciences and Healthcare (LSH) as its first beta customer and distribution partner.
Deloitte LSH is currently testing Doc.ai’s Robo-Hematology solution, which was unveiled on July 24, 2017 at Deloitte University in Dallas, Texas.
Over the coming 12 months, Doc.ai expects to roll out three natural language processing modules — Robo-Genomics, Robo-Hematology, and Robo-Anatomics — to medical providers and payors. Walter said that in the future, there could be modules such as Robo-Metabolomics and Robo-Microbiomics, but admitted that the disciplines need to advance further before the startup can look into them.
While there are typical startup challenges ahead, Walter said Doc.ai's platform will become more and more relevant as health becomes "increasingly quantified". He agreed that numbers, in and of themselves, can be difficult to understand, but explained that there will be layers on top of them to help people navigate the data.
“You won’t see the numbers anymore … In the beginning of the internet, the addresses were just numbers. The first three numbers [represented] the country and now it’s all .com; we just put layers on top of it,” Walter said.
He admitted that Doc.ai’s close relationship with Stanford University’s computer science department will be advantageous moving forward.
Report: Amazon building fashionable AI that can quickly spot and reproduce the latest trends – GeekWire
The Amazon Fashion homepage. (Amazon Photo)
Amazon is building trendy artificial intelligence tools that can identify the latest fashion craze.
MIT's Technology Review reports that Amazon teams across the world are working on several tools to analyze social media posts with limited information, like a few labels, and deduce which looks are stylish and which aren't. That information could then be used as Amazon decides which brands to push on its online marketplace and to quickly replicate trendy pieces for its in-house brands.
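Amazon has not described its methods, but the core task, spotting labels whose frequency is surging across social posts, can be sketched as a frequency comparison between two time windows. The growth threshold and data shape here are illustrative assumptions, not Amazon's approach:

```python
from collections import Counter

def rising_trends(old_posts, new_posts, min_growth=2.0):
    """Flag fashion labels whose mention count grew sharply between two
    time windows of social-media posts (each post = a list of labels)."""
    old = Counter(label for post in old_posts for label in post)
    new = Counter(label for post in new_posts for label in post)
    # A label "trends" if its new count is min_growth times its old count
    # (treating unseen labels as having count 1 to avoid division blowups).
    return sorted(label for label, n in new.items()
                  if n >= min_growth * max(old.get(label, 0), 1))
```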
Amazon recently held a workshop with academic professors on the intersection of machine learning and fashion, according to MIT Technology Review, where these details were revealed.
It's no surprise that Amazon is turning to AI as a way to stand out in a crowded industry. The thought process is reminiscent of Amazon Go, the company's convenience store concept that uses technology similar to self-driving cars to eliminate the checkout-line bottleneck.
But, at least for now, there are some limitations to AI-powered fashion design. Several academic researchers surveyed by MIT Technology Review think it will be a long time before a machine can create a fashion trend. So for now, human designers should still lead the way, with AI serving as more of an identifier of whats in and a way to speed up production.
Amazon has undertaken a multi-faceted fashion push in the last few years. An inflection point came last year, when the company began rolling out a series of in-house clothing brands. In June, Amazon announced a new service called Prime Wardrobe that lets online shoppers select and ship a box of clothes, shoes, and accessories to their homes to try them on before buying.
Much of its fashion push has been backed by technological innovation. For the past few months, Amazon has been secretly building a team that helps customers find clothes that fit perfectly, and it recently won a patent for on-demand apparel manufacturing, in which machines only start snipping and stitching once an order has been placed.
In addition to finding ways to more efficiently make and help customers find clothes, Amazon has also built out a virtual fashion assistant in the Alexa-powered Echo Look. The device lets people use their voice to take full-length pictures and videos of themselves and can provide fashion recommendations with a Style Check service that uses machine learning algorithms and advice from fashion specialists.
Amazon's in-house push, as well as its status as a dominant online retailer, is likely to make it a big player in fashion and apparel for years to come. Some analysts even predict that Amazon will ascend to the top of the fragmented apparel market this year, and that the company will open up a sizable lead over traditional department stores.
One A.I. scientist wants to ditch the metaphor of the brain, and think smaller and more basic.
From early on, we're taught that intelligence is inextricably tied to the brain. "Brainpower" is an informal synonym for intelligence, and by extension, any discussion of aptitude and acumen uses the brain as a metaphor. Naturally, when technology progressed to the point where humans decided they wanted to replicate human intelligence in machines, the goal was to essentially emulate the brain in an artificial capacity.
What if that's the wrong approach? What if all this talk about creating neural networks and robotic brains is actually misguided? What if, when it comes to advancing A.I., we ditched the metaphor of the brain in favor of something much smaller: the cell?
This counter-intuitive approach is the work of Ben Medlock, who's not your average A.I. researcher. As founder of SwiftKey, a company that uses machine learning to design smartphone keyboard apps, his day job revolves around figuring out how A.I. systems can augment many of the standard tools we already use on our gadgets.
But Medlock moonlights as something of an A.I. philosopher. His ideas stretch beyond how to slash a few seconds from texting. He wants to push forward what essentially amounts to a paradigm shift in the field of A.I. research and development as well as how we define intelligence.
"I lead this kind of double life," says Medlock. "My work with SwiftKey has all been around how you take A.I. and make it practical. That's my day job in some ways."
But, he says, "I also spend quite a bit of time thinking about the philosophical implications of development in A.I. And intelligence is something that is very, very much a human asset."
This sort of thinking brought him to the building block of human life, the cell.
"I think the place to start, actually, is with the eukaryotic cell," he says. Instead of thinking of A.I. as an artificial brain, he says, we should think about the human body as an incredible machine.
Typically, A.I. scientists prefer the brain as the model for intelligence. That's why certain machine learning approaches are described with terms such as "neural networks." These systems don't possess any sort of wired connections that siphon information and process it like neurons and neurological structure, yet "neural network" conveys a complexity that's akin to the human brain.
The metaphor of a neural system is what Medlock wants to tear down, to a certain extent. "If you're in the field of A.I., you know that actually there's a chasm between where we are now and anything that looks like human-level intelligence," he says.
Right now, A.I. researchers are trying to model reasoning and independent decision-making in machines this way: they take an individual task, break it down into smaller steps, and train a machine to accomplish that task, step by step. The more these machines learn to identify certain patterns and execute certain actions, the smarter we perceive them to be. It's a focus on problem-solving.
But Medlock says this isn't how humans operate: tasks aren't processed and completed in such a neat sequence. "If you start to look at human intelligence, or organic biological intelligence, it's actually a mistake to start with the brain," he says.
"Cells are much more like mini information-processing machines with quite a bit of flexibility. And they're networked so they're able to communicate with other cells in populations." One might say the human body is made up of 37.2 trillion individual machines.
Medlock digs deeper on this idea, using the biological process of DNA replication to make his point. The traditional model of evolution has assumed that life advances thanks to mutations in the genetic code, in that mistakes inadvertently lead to adaptations that get passed down.
But that mutation-based model of evolution has transformed as of late, thanks to what geneticists are learning about the replication process. Evolution is not as accidental, or mutation-caused, as we think.
The cellular machinery that copies DNA is "way too accurate," says Medlock, making only one mistake for every four billion DNA parts.
Here's where the A.I. part comes in: a series of proofreading mechanisms irons out mistakes in DNA, and cells possess tools and tricks to actively modify DNA as a way to adapt to changing conditions, which University of Chicago biologist James Shapiro, in his landmark 1992 study, called "natural genetic engineering."
"It comes back, I think, to what intelligence actually is," reasons Medlock. "Intelligence is not the ability to play chess, or to understand speech. More generally, it's the ability to process data from the environment, and then act in the environment. The cell really is the start of intelligence, of all organic intelligence, and it's very much a data-processing machinery."
The organic intelligence, he says, confers an embodied model of the world on the conscious organism. "The data that's coming in [through the senses] only really matters at the point where it violates something in the model that I'm already predicting."
Medlock is basically saying that if the goal is to create machines that are just as intelligent and adaptable as human beings, we should start building A.I. systems that possess these types of embodied models of the world, in order to give intelligent machines the type of power and flexibility that humans already exhibit.
Of course, that raises a bigger question of whether this is what we want out of A.I. We can keep focusing on the problem-solving approach, Medlock says, if we'd prefer to see our A.I. focus on executing specific tasks and fulfilling narrow goals.
But Medlock argues that there is probably a limit to this approach. The brain model is useful for developing A.I. that is in charge of one or a few things, but it blocks such systems off from reaching a higher stratum of creativity and innovation that feels much more limitless. It's perhaps the difference between the first panel and the fourth panel of the infamous Expanding Brain meme.
"With our current approaches, deep learning, artificial neural networks, and everything else, we're going to start to hit barriers," he says. "I think we won't need to then go back to sort of trying to simulate the way organic intelligence has evolved, but it's a really interesting question as to what we do do."
Medlock doesn't have a clear answer on how to apply his theory that A.I. should be thought of as a cell, not a brain. He acknowledges that his idea is just an abstract exercise. A.I. developers may choose to run with the cell as the appropriate metaphor for A.I., but how that might tangibly manifest in the short or long term is entirely up to speculation. Medlock has a few thoughts, though:
For one, the whole bodies of these machines would need to be information processors. Although they could be connected to the cloud, they would have to be able to absorb and analyze information in the physical world, independent of a larger server interfaced wirelessly. "I don't believe that we will be able to grow intelligence that doesn't live in the real world," he says, because the complexity of the real world is certainly what spawns organic intelligence. So A.I. would need to possess their own physical bodies, fitted with sensors of all kinds.
Second, they need to be mobile. "To be able to have an intelligence that has human-level flexibility, or even animal-level flexibility, it feels like you need to be able to roam," he says. Interacting with the world, and all its parts, is paramount to simulating human-level cognition. Movement is key.
The last major cog is self-awareness: the machine has to have an understanding of its own self, and its division from the rest of the world. That's still an incredibly large obstacle, not least because we're still nowhere near certain how self-awareness manifests in humans. But if we ever manage to pinpoint how this occurs in the organic mind, we could perhaps emulate it in the artificial one as well.
Although it's an idea that takes A.I. to a new level of science-fiction imagination, it's not totally strange. Medlock suggests looking at the self-driving car. It's a rudimentary machine right now, fitted with a series of optical sensors and a few others to detect physical hits, but that's about it. But what if it were covered in a nanomaterial that could detect even minor physical touch, and absorb sensory information of all kinds, and then act on that information? Suddenly, an object shaped like a car is capable of doing a hell of a lot more than simply ferrying people back and forth.
Moreover, all of this should be good news for anyone who fears a Skynet-like robot insurrection. Medlock's idea basically precludes the notion that A.I. should operate as an interconnected hive-mind. Instead, each machine would work as a discrete self, with its own experiences, memories, decision-making methods, and choices for how to act. Like humans.
Beyond technical constraints, there's another major hurdle that stymies what Medlock is advocating, and that's the question of ethics. In remodeling the metaphors we use to approach A.I., he's also suggesting that A.I. development shift away from alleviating specific problems, and toward the goal of basically creating a sentient person made of metal and wire.
"I do think there are some arguments to say, from an ethical perspective, maybe we should avoid [building human-level systems]," he says. "However, in practice, we're driven by problem solving, and we just keep chipping away at problems and we see where it takes us. And hopefully, as we're progressing, we're open and we have the kind of conversations about what this means for regulatory systems, for legal systems, for justice systems, human rights, etc."
Ultimately, Medlock is both hindered and freed by the fact that his ideas are far away from showing up in real, present-day development and testing. It could be a long time, if ever, before the A.I. community embraces and runs with the metaphor of the cell as the inspiration for future intelligent systems, but Medlock has a lot of time to sharpen this idea and play an influential role in determining how it becomes adopted.
For the Internet of Things (IoT), enterprises need to focus their efforts on the basics of business optimization rather than innovate from insights. But businesses are reluctant.
The problem with big data and business intelligence software is that it is reactionary and static. It is great for analysing things after the event — but how do enterprises manage when they need real-time insight?
A recent survey from data analysis provider GlobalData showed that IoT professionals still have a heavy reliance on traditional business intelligence (BI) software. Around 40 percent of its 1,000 respondents ranked BI platforms well above all other means of analysing data.
Unfortunately, do-it-all BI software platforms have been usurped by smaller, more discrete ways of deriving value from enterprise data. It could be a direct SQL query, a predictive data modeller, an auto-generated data discovery visualisation, or an interactive dashboard that delivers insights in real-time.
The reason for this is that users rely on basic reporting mechanisms built on complex queries and reports. BI software tends to be reactionary and static, which brings costs into the enterprise to build and maintain these systems.
This reluctance to follow the broader market away from BI platforms within IoT is concerning. The survey noted a subtle shift over time in IoT deployment failures.
In 2016, no failures were noted post-deployment. In 2017, however, that number had increased to 12 percent.
The top reasons IoT deployments fail, or are abandoned prior to deployment, are deployment and maintenance costs.
Encouragingly, however, nearly 70 percent of enterprises who had already implemented an IoT solution indicate that the project had already met their return-on-investment (ROI) expectations, regardless of the initial goals.
AI could be the answer to the IoT problem. It could prove the value of IoT as a means of optimizing existing business processes.
Even with a simple machine learning (ML) framework and model, IoT practitioners would be able to detect anomalies and predict desired outcomes. This would enable them to solve two problems at once.
The survey shows that enterprise buyers are eager to improve operational efficiencies. Forty-three percent of survey respondents indicated that the best role for AI is to centrally automate and optimise business processes.
Although centralization is part and parcel of traditional BI analysis, reporting, and predictive modeling, AI tends to be most useful at the edge of deployments. IoT deployments should use tools like ML close to the device itself.
Any analytics endeavors should be brief and focused on solving specific challenges. IoT buyers want centralized, global visibility of the business but also local optimization through AI.
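As a concrete example of the kind of lightweight ML that fits at the edge, here is a minimal rolling z-score anomaly detector, small enough to run on a constrained device. The window size and threshold are illustrative assumptions, not values from the survey:

```python
from collections import deque
import math

class EdgeAnomalyDetector:
    """Rolling z-score over a sensor stream: flag readings that deviate
    sharply from the recent history kept in a fixed-size window."""
    def __init__(self, window=50, threshold=3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, x):
        anomalous = False
        if len(self.values) >= 10:  # wait for a minimal history
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var) or 1e-9  # avoid division by zero
            anomalous = abs(x - mean) / std > self.threshold
        self.values.append(x)
        return anomalous
```

A detector like this needs no central server: each device keeps only its own recent window and reports upward only when something deviates.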
This approach will not solve all problems, but it is affordable and it will have a direct impact on businesses. It will help to prove the value of IoT by not building an expensive monolithic analytics system centrally.
Brad Shimmin, service director for global IT technology and software at GlobalData, said: “It becomes clear, therefore, that IoT practitioners should emphasize tactical benefits over strategic analytical insights at least at the outset of a project as a means of proving ROI and securing future investment from the business.”
Posted: at 4:07 am
Follow the money
By Timothy Revell
After Kubiiki Pride's 13-year-old daughter disappeared, it took 270 days for her mother to find her. When she did, it was as an escort available to be rented out on an online classified website. Her daughter had been drugged and beaten into compliance by a sex trafficker.
To find her, Pride had to trawl through hundreds of advertisements on Backpage.com, a site that in 2012, the last year for which figures are available, was hosting more than 70 per cent of the US market for online sex ads. When it comes to identifying signs of human trafficking in online sex adverts, the task for police is often no easier. Thousands of sex-related classifieds are posted every week. Some are legal posts; others feature people, like Pride's daughter, who are forced into it. Working out which ads involve foul play is a laborious task.
However, the task is being automated using a strange alliance of artificial intelligence and bitcoin.
"The internet has facilitated a lot of methods that traffickers can take advantage of. They can easily reach big audiences and generate a lot of content without having to reveal themselves," says Rebecca Portnoff at the University of California, Berkeley.
But a new tool developed by Portnoff and her colleagues can ferret traffickers out. It uses machine learning to spot common patterns in suspicious ads, and then uses publicly available information from the payment method used to pay for them (bitcoin) to help identify who placed them.
"The tool will help not only the investigation and intervention of potential traffickers, but also support prosecution efforts in an arena where money moves with rapidity across financial instruments and disappears from the evidence trail," says Carrie Pemberton Ford at the Cambridge Centre for Applied Research in Human Trafficking.
An estimated 4.5 million people worldwide have been forced into sexual exploitation. In the US, many of them end up advertised on Backpage, the second biggest classified ad listing site. People list everything from events to furniture there, but it has also become associated with sex ads and sex trafficking, so much so that the US National Center for Missing and Exploited Children has said that the majority of child sex trafficking cases referred to it involve ads on Backpage.
Normally, the tell-tale sign that an advert involves trafficking is that the person behind it is responsible for many other adverts across the site. However, this is difficult to spot, as adverts mention the people being trafficked, not the traffickers.
To identify the authors of online sex ads, Portnoff's tool looks at the style in which ads are written. Artificial intelligence trained on thousands of different adverts highlights when similar styles have been used, and clusters together likely candidates for further investigation.
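As an illustration of the general idea (this is not Portnoff's actual model, whose features and classifier the article does not describe), a minimal stylometric grouping might compare character n-gram profiles of ad texts and merge ads whose profiles are similar enough to suggest a shared author:

```python
from collections import Counter
from itertools import combinations
import math

def char_ngrams(text, n=3):
    """Character n-gram counts: a common, simple stylometric feature set."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two n-gram count profiles."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def cluster_by_style(ads, threshold=0.6):
    """Greedy single-link grouping: ads with similar n-gram profiles are
    assumed to share an author and merged into one cluster (union-find)."""
    profiles = [char_ngrams(ad) for ad in ads]
    parent = list(range(len(ads)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(ads)), 2):
        if cosine(profiles[i], profiles[j]) >= threshold:
            parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(ads)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Invented example texts: the first two differ only in the phone number,
# so their writing-style profiles nearly match.
ads = [
    "sweet young girl new in town call now 555-0101 !!",
    "sweet young girl new in town call now 555-0199 !!",
    "Professional massage therapist, licensed, book online.",
]
print(cluster_by_style(ads))  # first two ads grouped, third stands alone
```

Real systems would use richer features and a trained classifier, but the core move is the same: near-duplicate wording across many ads is the signal that one author is behind them.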
The second step comes via the payment method. Credit card companies stopped the use of their services on Backpage in 2015, leaving bitcoin as the only way to pay for adverts.
Every transaction made using bitcoin is logged on a publicly available ledger called the blockchain. It doesn't store identities, but every user has an associated wallet that is recorded alongside the transaction. The AI tool searches the blockchain to identify the wallet that corresponds to each advert.
It is also easy to see when each ad was posted. "We look at the cost of the ad and the timestamp, then connect the ad to a specific person or group. This means the police then have a pretty good candidate for further investigation," says Portnoff.
Once the police know which ads are of dubious origin, they can call the numbers on them in the knowledge that they might well be linked to crime. "Narrowing down from the hundreds of thousands of ads online will be very useful for law enforcement officers who have to read through so many ads during an investigation," says Portnoff.
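The cost-and-timestamp matching Portnoff describes can be sketched as follows. All data structures and field names here are invented for illustration and reflect neither Backpage's nor the blockchain's real schemas; the point is only that an ad's price and posting time narrow the candidate transactions, and the matching transaction carries a wallet identifier.

```python
from datetime import datetime, timedelta

# Hypothetical ad listings and blockchain transactions.
ads = [
    {"id": "ad-1", "cost_btc": 0.0040, "posted": datetime(2017, 8, 1, 14, 3)},
    {"id": "ad-2", "cost_btc": 0.0025, "posted": datetime(2017, 8, 1, 18, 40)},
]
transactions = [
    {"wallet": "wallet_A", "amount_btc": 0.0040, "time": datetime(2017, 8, 1, 14, 1)},
    {"wallet": "wallet_B", "amount_btc": 0.0100, "time": datetime(2017, 8, 1, 15, 0)},
    {"wallet": "wallet_A", "amount_btc": 0.0025, "time": datetime(2017, 8, 1, 18, 39)},
]

def link_ads_to_wallets(ads, transactions, window=timedelta(minutes=10)):
    """Pair each ad with wallets whose transaction amount matches the ad's
    cost and whose timestamp falls within `window` of the posting time."""
    links = {}
    for ad in ads:
        links[ad["id"]] = [
            tx["wallet"] for tx in transactions
            if abs(tx["amount_btc"] - ad["cost_btc"]) < 1e-9
            and abs(tx["time"] - ad["posted"]) <= window
        ]
    return links

print(link_ads_to_wallets(ads, transactions))
# Both ads trace back to the same wallet, suggesting a single buyer.
```

When many ads resolve to one wallet, investigators get exactly the signal the article describes: a pretty good candidate for further investigation.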
During a four-week period, the research team tried out their tool on 10,000 adverts. It correctly identified about 90 per cent of adverts that had the same author, with a false positive rate of only 1 per cent. One of the bitcoin wallets they tracked down was responsible for $150,000 worth of sex adverts, possible evidence of an exploitation ring.
Backpage has not yet responded to New Scientist's requests for comment.
The team is working with a number of different police forces and NGOs with the hope of using the tool in real investigations soon. The work was presented at the Conference on Knowledge Discovery and Data Mining in Canada this month.
The trafficker who kidnapped Kubiiki Pride's daughter was eventually caught and sentenced to five years in prison. Successful prosecutions like that are rare, but with Portnoff's new tool that could soon change.
Posted: August 22, 2017 at 11:59 pm
Shares of Salesforce may have ticked down after the company's earnings beat, but CEO Marc Benioff was entirely forward-looking when he discussed his cloud giant's prospects with CNBC.
“We’re really seeing this incredible new capability that’s driving so much growth in enterprise software, artificial intelligence, and Salesforce is the first to deliver artificial intelligence in all of our products that are helping our customers do machine learning and machine intelligence and deep learning using Einstein,” Benioff told “Mad Money” host Jim Cramer on Tuesday.
Einstein, Salesforce’s A.I. platform, was rolled out in 2016 as the company turned its focus to cutting-edge developments in the world of software, Benioff said.
“I think everybody understands how important the cloud is. It’s the single most transformative technology in enterprise software today. I think everybody understands mobility because everybody’s got a cellphone and lots of apps and seen how they’ve moved off of PCs and onto mobility,” Benioff said. “Einstein is Salesforce’s AI platform that is really the next generation of Salesforce’s products and it’s in the hands of all of our customers right now and making a huge difference. It makes them have the ability to make much smarter decisions about their business each and every day.”
On top of its earnings beat, Salesforce hit an annual revenue run-rate, or future revenue forecast, of over $10 billion faster than any other enterprise software company, ever.
Benioff touted the software giant's 24 percent revenue growth forecast, attributing it in part to the rapid growth in the customer relationship management market.
“The forecasts are that the CRM market is going to $1 trillion,” Benioff told Cramer. “The CRM market has gone from being an also-ran market in enterprise software to the largest and most important market in enterprise software. It used to be operating systems, it used to be databases, it used to be other things in enterprise software. Now it’s all about CRM and we are No. 1 in the fastest growing segment in enterprise software. That is growing our revenue so dramatically.”
Salesforce’s earnings were also driven by an array of new clients including luxury fashion brand Louis Vuitton and the United States Department of Veterans Affairs.
Benioff said Salesforce helped Louis Vuitton produce a tech-enabled watch tied to an app connected with Salesforce.
But Salesforce’s biggest new client, one of the largest auto manufacturers in the world, asked the cloud company not to publicly name them.
“[They] signed a wall-to-wall agreement with us in sales, in service, in marketing, in commerce, in all these areas,” Benioff said. “Very exciting.”
The Department of Veterans Affairs, on the other hand, commissioned Salesforce to create an assortment of high-quality systems to help the agency connect with the veterans it serves.
When Cramer asked Benioff how he felt working for President Donald Trump’s administration in light of recent controversy, the Salesforce chief offered a measured response.
“I’ve worked with three administrations, and I have a set of core values,” Benioff said. “One of them [is] equality. Another one is love. And the things that are important to me don’t change. Administrations change.”
The CEO said that when Trump asked him for advice, Benioff told him to focus on apprenticeships given the rise of artificial intelligence and, following that, job displacement.
“We need to make sure we do more job retraining, and that’s why we’re working to have a 5 million apprenticeship dream,” Benioff told Cramer. “But for the CEOs who call me and say, ‘What should I do? Should I resign? Should I stay? Should I go?’ I don’t really know what to tell them, because I didn’t join any of the councils because I really learned a long time ago the best thing I can do is just give my best advice. And the best way that I can give my best advice is not to be encumbered with any job with any administration.”