Why AI is now at the heart of our innovation economy | TechCrunch

Andrew Keen is the author of three books: Cult of the Amateur, Digital Vertigo and The Internet Is Not The Answer. He produces Futurecast, and is the host of Keen On.

There are few more credible authorities on artificial intelligence (AI) than Hilary Mason, the New York-based founder and chief executive of the data science and machine learning consultancy Fast Forward Labs.

So I asked Mason, who is also the Data Scientist in Residence at Accel Partners and the former Chief Scientist at Bitly, whether today's AI revolution is for real. Or is it, I wondered, just another catch-all phrase used by entrepreneurs and investors to describe the latest Silicon Valley mania?

Mason, who sees AI as the umbrella term for machine learning and big data, acknowledges that it has become a very trendy area of startup activity. That said, she says, there has been such rapid technological progress in machine learning over the last five years as to make the field legitimately exciting. This progress has been so profound, Mason insists, that it has put AI close to the heart of our new innovation economy.

But in contrast with the fears of prominent technologists like Elon Musk, Mason doesn't worry about the threat that superintelligent machines pose to the human species. We humans, she says, use machines as tools, and the advent of AI doesn't change this. Machines aren't rational, she argues, implying that there are many more important things for us to worry about than an imminent singularity.

What does concern Mason, however, are questions about the role of women in tech. That's a question interviewers like myself should be asking men rather than women, she insists. It just creates an extra burden for female technologists, and thus isn't something that she wants to discuss publicly.

Many thanks to the folks at the Greater Providence Chamber of Commerce for their help in producing this interview.


Implication Of AI And IoT Enabled Electric Scooters For Smart Delivery Services – Inc42 Media

Many electric vehicle companies are building modern technologies like artificial intelligence and IoT into their vehicles

AI and IoT have transformed delivery services, especially those using electric vehicles

The use of AI and IoT in electric vehicles ensures efficiency and safety

Urban logistics and delivery services are among the main challenges for every city, big or small. From groceries to food to almost everything else, the delivery market has grown rapidly with the growth of technology and the internet. It puts vehicles on the road at rush hour, on streets already congested by private traffic.

According to data from MDS Transmodal Limited, delivery services represent between 8% and 18% of urban traffic flows and reduce road capacity by 30% because of pick-up and delivery operations, and these figures will continue to grow in the coming years. Delivery operations have a high impact on congestion and urban environmental quality: they are responsible for about 25% of CO2 mobility emissions in urban areas.

A new entrant in delivery services is the electric vehicle. The electric vehicle industry is growing rapidly to combat pollution, and electric vehicles (EVs) are seen as a catalyst for reducing CO2 emissions and a step towards more intelligent transportation systems. The Government of India is also pushing for a shift to electric vehicles for every purpose, and has stated that India will move to 30% electric vehicles by 2030.

The Government of India has a vision of making the country electrically mobile. It has encouraged mainstream electric mobility by dedicating INR 10,000 Cr to boost EV usage under the Faster Adoption and Manufacturing of Hybrid and Electric Vehicles (FAME) II scheme, and by reducing GST on electric vehicles to 5%.

As technology advances and many industries adopt these changes, many electric vehicle companies are building modern technologies like artificial intelligence and IoT into their vehicles. They are providing these e-scooters for many purposes, from personal use to, now, the smart delivery ecosystem.

The use of e-bikes, e-cargo bikes and e-scooters is extremely positive for corporate social responsibility (CSR) and for a company's visibility and green image among customers and clients. It also saves costs, because these vehicles consume little energy, need little maintenance and perform very well. They make it easy to reach any location in urban areas, and their reliability is high. This is why more delivery giants are now opting for e-scooters instead of petrol or diesel scooters.

Some problems remain with the use of electric vehicles, such as the lack of adequate charging stations, limited range (especially in hilly areas) and occasional technical malfunctions of motors and batteries. But AI and IoT technologies are helping to address all of these problems.

AI and IoT have transformed delivery services, especially those using electric vehicles (EVs). The electric scooters of delivery executives are now AI- and IoT-enabled, so that a driver's behaviour can be monitored for safe and timely delivery of goods. Companies have started using telematics devices to track and monitor vehicle movement during delivery. These technologies not only monitor the movement of vehicles but also help ensure the safety of drivers in case of a road accident.

Using AI and IoT, it is easy to contact the driver and the consumer in case of an emergency. These scooters can be controlled through a mobile application; GPS units installed on the vehicles and an accelerometer can tell the company about every movement of a scooter during the delivery of goods.

E-scooters equipped with cellular, GPS and accelerometer technology use machine learning to interpret the habits of their riders, and can either notify the company of dangerous driving or adjust the machine to produce safer conditions. Artificial intelligence now makes it possible for a driver to open the app after a delivery and see where they went, how fast they drove and whether they made any dangerous moves, and to get tips for a safer delivery next time.

Attaching an accelerometer to a scooter, together with AI and IoT, also makes it possible for the company or consumer to see when a rider accelerates too quickly or brakes too sharply. These electric vehicles also come with features such as navigation assist, ride statistics, remote diagnostics, a voice-enabled app, an anti-theft alarm and lock, speedometer call alerts and ride-behaviour-based AI suggestions, which can be used in case of emergency. AI and IoT let the electric scooter connect to the driver's smartphone and store all vehicle-related data in the cloud.
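As a rough illustration of the accelerometer-based monitoring described above, the sketch below flags harsh acceleration and braking events from longitudinal-acceleration samples. The thresholds, field layout and function names are assumptions for illustration, not any vendor's actual values.

```python
# Hypothetical sketch: flagging harsh acceleration/braking events from
# accelerometer samples streamed off an e-scooter. Thresholds are assumed.

HARSH_ACCEL_MS2 = 3.5   # forward acceleration above this counts as "too quick"
HARSH_BRAKE_MS2 = -4.0  # deceleration below this counts as "too sharp"

def classify_events(samples):
    """samples: list of (timestamp_s, longitudinal_accel_m_s2) tuples.
    Returns a list of (timestamp, label) pairs for harsh events."""
    events = []
    for ts, a in samples:
        if a >= HARSH_ACCEL_MS2:
            events.append((ts, "harsh_acceleration"))
        elif a <= HARSH_BRAKE_MS2:
            events.append((ts, "harsh_braking"))
    return events

# A short illustrative ride: one rapid launch, one hard stop.
ride = [(0.0, 1.2), (0.5, 3.8), (1.0, 0.4), (1.5, -4.5), (2.0, -0.2)]
print(classify_events(ride))
# [(0.5, 'harsh_acceleration'), (1.5, 'harsh_braking')]
```

In a production telematics system these events would be timestamped against GPS position and uploaded to the cloud backend for the rider-coaching reports the article describes.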

The next level of the tech revolution can be seen in the electric vehicle sector. There is 24/7 connectivity to a cloud server, which allows a user to monitor the performance of the vehicle even when the driver is not around. Data analytics algorithms employed by the server analyse the data and notify the user about possible service needs.

Modern technologies like AI and IoT have also improved the battery-charging technology of electric vehicles and reduced the time charging stops take, because EV companies are using artificial intelligence to monitor the state of the battery as it charges. This improvement in battery technology has made delivery services not only faster but also safer for consumers and delivery companies alike.


Clearview AI Wants To Sell Its Facial Recognition Software To Authoritarian Regimes Around The World – BuzzFeed News

Facebook confirmed to BuzzFeed News that it has sent a cease-and-desist letter to Clearview AI, asking the company to stop using information from Facebook and Instagram.

Last updated on February 5, 2020, at 8:51 p.m. ET

Posted on February 5, 2020, at 6:09 p.m. ET

As legal pressures and US lawmaker scrutiny mounts, Clearview AI, the facial recognition company that claims to have a database of more than 3 billion photos scraped from websites and social media, is looking to grow around the world.

A document obtained via a public records request reveals that Clearview has been touting a rapid international expansion to prospective clients using a map that highlights how it either has expanded, or plans to expand, to at least 22 more countries, some of which have committed human rights abuses.

The document, part of a presentation given to the North Miami Beach Police Department in November 2019, includes the United Arab Emirates, a country historically hostile to political dissidents, and Qatar and Singapore, the penal codes of which criminalize homosexuality.

Clearview CEO Hoan Ton-That declined to explain whether Clearview is currently working in these countries or hopes to work in them. He did confirm that the company, which had previously claimed that it was working with 600 law enforcement agencies, has relationships with two countries on the map.

"Clearview is focused on doing business in the USA and Canada," Ton-That said. "Many countries from around the world have expressed interest in Clearview."

Albert Fox Cahn, a fellow at New York University and the executive director of the Surveillance Technology Oversight Project, told BuzzFeed News that he was disturbed by the possibility that Clearview may be taking its technology abroad.

"It's deeply alarming that they would sell this technology in countries with such a terrible human rights track record, enabling potentially authoritarian behavior by other nations," he said.

Clearview has made headlines in recent weeks for a facial recognition technology that it claims draws on a growing database of some 3 billion photos scraped from social media sites like Instagram, Twitter, YouTube, and Facebook, and for misrepresenting its work with law enforcement by falsely claiming a role in the arrest of a terrorism suspect. The company, which has received cease-and-desist orders from Twitter, YouTube, and Facebook, argues that it has a First Amendment right to harvest data from social media.

"There is also a First Amendment right to public information," Ton-That told CBS News on Wednesday. "So the way we have built our system is to only take publicly available information and index it that way."

Cahn dismissed Ton-That's argument, describing it as more about public relations than about the law.

"No court has ever found the First Amendment gives a constitutional right to use publicly available information for facial recognition," Cahn said. "Just because Clearview may have a right to scrape some of this data, that doesn't mean that they have immunity from lawsuits from those of us whose information is being sold without our consent."

Scott Drury, a lawyer representing a plaintiff suing Clearview in Illinois for violating a state law on biometric data collection, agreed. "Clearview's conduct violates citizens' constitutional rights in numerous ways, including by interfering with citizens' right to access the courts," he told BuzzFeed News. "The issue is not limited to scraping records, but rather whether a private company may scrape records with the intent of performing biometric scans and selling that data to the government."

Potentially more problematic is Clearview's inclusion of nine European Union countries (among them Italy, Greece, and the Netherlands) on its expansion map. These countries have strict privacy protections under the General Data Protection Regulation (GDPR), a 2016 law that requires businesses to protect the personal data and privacy of EU citizens. Joseph Jerome, a policy counsel for the Center for Democracy and Technology, said it was unclear whether Clearview AI's technology would violate the GDPR.

Jerome said that GDPR protects any information that could be used to identify a person (biometric data included) but that the EU made exceptions for law enforcement and national security. Clearview also highlighted other non-EU European countries on its map that it hoped to do business with, including the United Kingdom and Ukraine.

Beyond the map (which also points to plans to expand to Brazil, Colombia, and Nigeria), Clearview has boasted about its exploits abroad. Its website features a large testimonial from a detective constable in the sex crimes unit of a Canadian law enforcement agency, who claims that Clearview is "hands-down the best thing that has happened to victim identification in the last 10 years." When asked, Ton-That declined to identify the detective or the agency they serve.

Clearview and Ton-That have on occasion exaggerated the company's business relationships, and the presentation sent to North Miami Beach has a few misrepresentations, including two examples in which it suggested that it was used in the investigation of crimes in New York. An NYPD spokesperson previously denied that the department has any relationship with the company and said that the software was not used in either investigation.

Clearview AI has also encouraged law enforcement to test its facial recognition tool in unusual situations, such as identifying dead bodies. The presentation shows graphic images of a dead man and mugshots of a person whom Clearview claimed matched the deceased victim.

Clearview AI has been aggressively promoting its service to US law enforcement. It has suggested that police officers run wild with the tool, encouraging them to test it on friends, family, and celebrities. Emails obtained via a public record request show the company challenging police in Appleton, Wisconsin, to run 100 searches a week.

"Investigators who do 100+ Clearview searches have the best chances of successfully solving crimes with Clearview in our experience," the email said. "It's the best way to thoroughly test the technology. You never know when a search will turn up a match."

There are currently no federal laws that restrict facial recognition or the scraping of biometric data from the internet. On Thursday, the House Committee on Homeland Security will hold a hearing to examine the Department of Homeland Security's use of facial recognition technology. Ton-That has previously said Clearview is working with DHS.

On Wednesday, Facebook told BuzzFeed News that it had sent multiple letters to Clearview AI to clarify the social network's policies and request information about what the startup was doing. In those letters, Facebook, which owns Instagram, asked that Clearview cease and desist from using any data, images, or media from its social networking sites. Facebook board member Peter Thiel is an investor in Clearview.

"Scraping people's information violates our policies, which is why we've demanded that Clearview stop accessing or using information from Facebook or Instagram," a Facebook spokesperson said. A spokesperson for Thiel did not immediately respond to a request for comment.

Feb. 6, 2020, at 12:28 a.m. ET

The House Committee on Homeland Security will hold the hearing on facial recognition. An earlier version of this post misstated the committee.


Security Think Tank: Artificial intelligence will be no silver bullet for security – ComputerWeekly.com

Published: 03 Jul 2020

Undoubtedly, artificial intelligence (AI) is able to support organisations in tackling their threat landscape and the widening of vulnerabilities as criminals have become more sophisticated. However, AI is no silver bullet when it comes to protecting assets and organisations should be thinking about cyber augmentation, rather than just the automation of cyber security alone.

Areas where AI can currently be deployed include training a system to identify even the smallest behavioural signatures of ransomware and malware attacks before they enter the system, and then isolating them from that system.

Other examples include automated phishing and data-theft detection, which are extremely helpful as they involve a real-time response. Context-aware behavioural analytics are also interesting, offering the possibility of immediately spotting a change in user behaviour that could signal an attack.
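A minimal illustration of this kind of behavioural analytics is to flag a per-user metric (here, an assumed "megabytes downloaded per session") that deviates sharply from that user's own baseline. Real products model far richer context; the names and the three-sigma threshold below are assumptions.

```python
# Toy behavioural-anomaly check: compare a new observation against one
# user's historical baseline using a simple z-score test.
import statistics

def is_anomalous(baseline, observation, threshold=3.0):
    """baseline: past per-session values for one user; observation: new value.
    Returns True when the observation lies more than `threshold` standard
    deviations from the user's historical mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return observation != mean
    return abs(observation - mean) / stdev > threshold

history = [120, 135, 110, 128, 140, 125, 130]  # MB per session (assumed data)
print(is_anomalous(history, 131))  # False: within this user's normal range
print(is_anomalous(history, 900))  # True: a sharp behaviour change worth alerting on
```

The per-user baseline is what makes the check "context-aware": 900 MB might be routine for one employee but a red flag for another.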

The above are all examples of where machine learning and AI can be useful. However, over-reliance and false assurance could present another problem: as AI improves at safeguarding assets, so too does it improve at attacking them. As cutting-edge technologies are applied to improve security, cyber criminals are using the same innovations to get an edge over these defences.

Typical attacks can involve the gathering of information about a system or sabotaging an AI system by flooding it with requests.

Elsewhere, so-called deepfakes are a relatively new area of fraud that poses unprecedented challenges. We already know that cyber criminals can litter the web with fakes, making it almost impossible to distinguish real news from fake.

The consequences are such that many legislators and regulators are contemplating rules and laws to govern this phenomenon. For organisations, it means that deepfakes could lead to much more complex phishing in future, targeting employees by mimicking corporate writing styles or even an individual's writing style.

In a nutshell, AI can augment cyber security so long as organisations know its limitations and have a clear strategy focusing on the present while constantly looking at the evolving threat landscape.

Ivana Bartoletti is a cyber risk technical director at Deloitte and a founder of Women Leading in AI.


Businesses are finding AI hard to adopt – The Economist

Jun 13th 2020

FACEBOOK: THE INSIDE STORY, Steven Levy's recent book about the American social-media giant, paints a vivid picture of the firm's size, not in terms of revenues or share price but in the sheer amount of human activity that thrums through its servers. Some 1.73bn people use Facebook every day, writing comments and uploading videos. An operation on that scale is so big, writes Mr Levy, that it can only be policed by "algorithms or armies".

In fact, Facebook uses both. Human moderators work alongside algorithms trained to spot posts that violate either an individual countrys laws or the sites own policies. But algorithms have many advantages over their human counterparts. They do not sleep, or take holidays, or complain about their performance reviews. They are quick, scanning thousands of messages a second, and untiring. And, of course, they do not need to be paid.

And it is not just Facebook. Google uses machine learning to refine search results and target advertisements; Amazon and Netflix use it to recommend products and television shows to watch; Twitter and TikTok to suggest new users to follow. The ability to provide all these services with minimal human intervention is one reason why tech firms' dizzying valuations have been achieved with comparatively small workforces.

Firms in other industries would love that kind of efficiency. Yet the magic is proving elusive. A survey carried out by Boston Consulting Group and MIT polled almost 2,500 bosses and found that seven out of ten said their AI projects had generated little impact so far. Two-fifths of those with significant investments in AI had yet to report any benefits at all.

Perhaps as a result, bosses seem to be cooling on the idea more generally. Another survey, this one by PwC, found that the number of bosses planning to deploy AI across their firms was 4% in 2020, down from 20% the year before. The number saying they had already implemented AI in multiple areas fell from 27% to 18%. Euan Cameron at PwC says that rushed trials may have been abandoned or rethought, and that the irrational exuberance that has dominated boardrooms for the past few years is fading.

There are several reasons for the reality check. One is prosaic: businesses, particularly big ones, often find change difficult. One parallel from history is with the electrification of factories. Electricity offers big advantages over steam power in terms of both efficiency and convenience. Most of the fundamental technologies had been invented by the end of the 19th century. But electric power nonetheless took more than 30 years to become widely adopted in the rich world.

Reasons specific to AI exist, too. Firms may have been misled by the success of the internet giants, which were perfectly placed to adopt the new technology. They were already staffed by programmers, and were already sitting on huge piles of user-generated data. The uses to which they put AI, at least at first (improving search results, displaying adverts, recommending new products and the like), were straightforward and easy to measure.

Not everyone is so lucky. Finding staff can be tricky for many firms. AI experts are scarce, and command luxuriant salaries. "Only the tech giants and the hedge funds can afford to employ these people," grumbles one senior manager at an organisation that is neither. Academia has been a fertile recruiting ground.

A more subtle problem is that of deciding what to use AI for. Machine intelligence is very different from the biological sort. That means that gauging how difficult machines will find a task can be counter-intuitive. AI researchers call the problem Moravec's paradox, after Hans Moravec, a Canadian roboticist, who noted that, though machines find complex arithmetic and formal logic easy, they struggle with tasks like co-ordinated movement and locomotion which humans take completely for granted.

For example, almost any human can staff a customer-support helpline. Very few can play Go at grandmaster level. Yet Paul Henninger, an AI expert at KPMG, an accountancy firm, says that building a customer-service chatbot is in some ways harder than building a superhuman Go machine. Go has only two possible outcomes, win or lose, and both can be easily identified. Individual games can play out in zillions of unique ways, but the underlying rules are few and clearly specified. Such well-defined problems are a good fit for AI. By contrast, says Mr Henninger, a single customer call after a cancelled flight has "many, many more ways it could go".

What to do? One piece of advice, says James Gralton, engineering director at Ocado, a British warehouse-automation and food-delivery firm, is to start small, and pick projects that can quickly deliver obvious benefits. Ocados warehouses are full of thousands of robots that look like little filing cabinets on wheels. Swarms of them zip around a grid of rails, picking up food to fulfil orders from online shoppers.

Ocados engineers used simple data from the robots, like electricity consumption or torque readings from their wheel motors, to train a machine-learning model to predict when a damaged or worn robot was likely to fail. Since broken-down robots get in the way, removing them for pre-emptive maintenance saves time and money. And implementing the system was comparatively easy.

The robots, warehouses and data all existed already. And the outcome is clear, too, which makes it easy to tell how well the AI model is working: either the system reduces breakdowns and saves money, or it does not. That kind of predictive maintenance, along with things like back-office automation, is a good example of what PwC approvingly calls "boring AI" (though Mr Gralton would surely object).
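A crude sketch of this style of predictive maintenance, using made-up torque readings: flag a robot for pre-emptive service when its recent wheel-motor torque drifts well above its own long-run baseline. The 20% drift threshold and data are assumptions; Ocado's actual system is a trained machine-learning model, not a simple rule.

```python
# Hypothetical predictive-maintenance check: compare a robot's recent
# wheel-motor torque against its long-run baseline and flag large drift,
# which may indicate a worn or damaged unit.

def needs_maintenance(torque_log, recent_n=5, drift_threshold=0.20):
    """torque_log: chronological torque readings for one robot.
    Flags the robot when the mean of the last `recent_n` readings exceeds
    the mean of all earlier readings by more than `drift_threshold`."""
    baseline, recent = torque_log[:-recent_n], torque_log[-recent_n:]
    baseline_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    return (recent_mean - baseline_mean) / baseline_mean > drift_threshold

healthy = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9, 10.0, 10.1]
worn =    [10.1, 9.9, 10.0, 10.2, 9.8, 11.9, 12.4, 12.8, 13.1, 13.5]
print(needs_maintenance(healthy))  # False: torque is stable
print(needs_maintenance(worn))     # True: torque has drifted ~27% upward
```

The outcome is usefully binary, exactly the property the article highlights: either the flagged robots really do break down less after servicing, or the rule is wrong.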

There is more to building an AI system than its accuracy in a vacuum. It must also do something that can be integrated into a firm's work. During the late 1990s Mr Henninger worked on Fair Isaac Corporation's (FICO) Falcon, a credit-card fraud-detection system aimed at banks and credit-card companies that was, he says, one of the first real-world uses for machine learning. As with predictive maintenance, fraud detection was a good fit: the data (in the form of credit-card transaction records) were clean and readily available, and decisions were usefully binary (either a transaction was fraudulent or it wasn't).

But although Falcon was much better at spotting dodgy transactions than banks' existing systems, he says, it did not enjoy success as a product until FICO worked out how to help banks do something with the information the model was generating. Falcon was limited by the same thing that holds a lot of AI projects back today: going from a working model to a useful system. In the end, says Mr Henninger, it was the much more mundane task of creating a case-management system (flagging up potential frauds to bank workers, then allowing them to block the transaction, wave it through, or phone clients to double-check) that persuaded banks that the system was worth buying.

Because they are complicated and open-ended, few problems in the real world are likely to be completely solvable by AI, says Mr Gralton. Managers should therefore plan for how their systems will fail. Often that will mean throwing difficult cases to human beings to judge. That can limit the expected cost savings, especially if a model is poorly tuned and makes frequent wrong decisions.

The tech giants experience of the covid-19 pandemic, which has been accompanied by a deluge of online conspiracy theories, disinformation and nonsense, demonstrates the benefits of always keeping humans in the loop. Because human moderators see sensitive, private data, they typically work in offices with strict security policies (bringing smartphones to work, for instance, is usually prohibited).

In early March, as the disease spread, tech firms sent their content moderators home, where such security is tough to enforce. That meant an increased reliance on the algorithms. The firms were frank about the impact. More videos would end up being removed, said YouTube, "including some that may not violate [our] policies". Facebook admitted that less human supervision would likely mean longer response times and more mistakes. AI can do a lot. But it works best when humans are there to hold its hand.

This article appeared in the Technology Quarterly section of the print edition under the headline "Algorithms and armies"


What an artificial intelligence researcher fears about AI – CBS News

Arend Hintze is assistant professor of Integrative Biology & Computer Science and Engineering at Michigan State University.

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, "Matrix"-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in "2001: A Space Odyssey," is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant) engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia) a set of relatively small failures combined to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.


Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on "Jeopardy!" or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that "to err is human," so it is likely impossible for us to create a truly safe system.

I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
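The evolutionary loop described above (evaluate, select the fittest, reproduce with variation) can be sketched in a toy form. Here the "genomes" are just bit-strings and the task is to maximise the number of 1-bits; real neuroevolution evolves neural-network weights and topologies by the same select-and-mutate cycle. All parameters are illustrative.

```python
# Toy evolutionary algorithm: evaluate a population, keep the fittest half,
# refill with mutated copies, and repeat across generations.
import random

random.seed(0)  # deterministic run for illustration
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(genome):
    # Task: maximise the number of 1-bits ("one-max").
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]                       # selection
    offspring = [mutate(random.choice(parents))                # reproduction
                 for _ in range(POP_SIZE // 2)]
    population = parents + offspring                           # next generation

best = max(population, key=fitness)
print(fitness(best))  # climbs close to GENOME_LEN over the generations
```

Keeping the parents unchanged (elitism) means the best fitness never decreases, which mirrors the article's point that each generation handles the errors of the previous one a little better.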


Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility, farther down the line, is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady "hand." Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self together with the rest of humanity may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time: somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.

This article was originally published on The Conversation.

Continued here:

What an artificial intelligence researcher fears about AI - CBS News

AWS vs. Microsoft Azure will be about sales scale, AI, multi-cloud realities – ZDNet

Amazon Web Services is ramping its sales and marketing investments amid signs that the battle with Microsoft Azure is accelerating. The big question is whether the law of large numbers is catching up to both cloud titans in terms of growth.

With both Amazon and Microsoft earnings out of the way this week, there is a bit more color on how the cloud wars are playing out. The storyline here is pretty straightforward: AWS reports as a separate unit within Amazon, while Azure growth is broken out in the software giant's report but still tucked away in the broader commercial cloud figure.

So here we are: Microsoft Azure vs. AWS, and a cloud market that has matured enough that we're seeing a sales ground war. Let's just say Microsoft knows how to sell, and a partnership with SAP to co-sell isn't going to hurt.

For now, there is enough cloud growth to go around, but there are signs that IT spending is slowing. Cloud providers are likely to see some of that slowdown. That reality means that the battle between AWS and Azure is going to get interesting.

Wedbush analyst Daniel Ives summed up Microsoft's incursion on AWS following the company's most recent report:

This quarter was a major positive data point for Redmond as well as overall cloud spending, which has been a concern among investors given some cracks in the armor of cloud plays such as Workday and ServiceNow and fears that IT spending is hitting a speed bump heading into 2020. On the contrary, Microsoft delivered strength across the board with no blemishes and importantly gave stronger than expected December quarter guidance which speaks to an inflection point in deal flow as more enterprises pick Redmond for the cloud and thus further narrowing the competitive gap vs. Bezos and AWS.

Indeed, Microsoft CFO Amy Hood said: "In our commercial business, we again saw increased customer commitment across our cloud platform. In Azure, we had material growth in the number of $10 million-plus contracts."

Hood added that Azure gross margins improved as commercial cloud delivered gross margins of 66%. That tally includes Office 365. It appears that the Microsoft 365 strategy is going to pull along Azure sales too. That approach is hard for other cloud providers to replicate.

Meanwhile, Brian Olsavsky, CFO of Amazon, explained that the company is investing in AWS and has banked savings from infrastructure investments made in 2017. He said:

We continue to feel really good about not only the top line but also the bottom line in that business, but we are investing a lot more this year in sales force and marketing personnel, mainly to handle a wider group of customers and an increasingly wide group of products. We continue to add thousands of new products and features a year, and we continue to expand geographically.

So the biggest impact that we saw in Q3 year-over-year in the AWS segment was tied to costs related to sales and marketing year-over-year and also, to a secondary extent, infrastructure, which, if you look at our capital leases or equipment leases line, it grew 30% on a trailing 12-month basis in Q3 of this year, and that was 9% last year. So there's been a step-up in infrastructure cost to support the higher usage demand. So we see those trends continuing into Q4, and that's essentially probably the other element of operating income year-over-year that's shorter than in prior quarters.

Olsavsky said that AWS margins will likely be under pressure.

We will price competitively and continue to pass along pricing reductions to customers, both in the form of absolute price reductions and also in the form of new products that will, in effect, cannibalize the old ones. ... What we're doing is renegotiating or negotiating incremental price decreases for customers who didn't commit to us long term. And if you look in our disclosure on our 10-Q, it shows that we have $27 billion in future commitments for AWS -- from AWS customers, and that's up 54% year-over-year.

Now we'd love to give you that tech zero-sum storyline because it's easy. But AWS vs. Azure is way more complicated. Here are the moving parts that'll determine how this battle plays out going forward.

The sales ground war. AWS is ramping its sales team, but there has to be a talent shortage. Google Cloud Platform is hiring aggressively. Microsoft Azure is drafting off its parent's sales team and enterprise footprint already. And then there are other cloud providers that'll retool sales teams. Rest assured new ServiceNow CEO Bill McDermott is going to be recruiting heavily. It's a good time to be a cloud salesperson.

Artificial intelligence. Microsoft CEO Satya Nadella mentioned AI and Azure a bevy of times. Compute, storage, and infrastructure are frequently just a precursor to the AI and machine learning upsell. Azure, AWS and Google Cloud are all betting AI and machine learning will differentiate them.

Multi-cloud realities. The dream is that enterprises will all mix and match the public cloud providers based on needs and pricing. The reality in the short term is going to be that enterprises are likely to bet on one cloud provider with others being involved as leverage. The battle between AWS and Azure will be about which vendor is preferred in the enterprise.

Read more here:

AWS vs. Microsoft Azure will be about sales scale, AI, multi-cloud realities - ZDNet

Sorting Lego sucks, so here’s an AI that does it for you – Engadget

You see, Mattheij decided he wanted in on the profitable cottage industry of online Lego reselling, and after placing a bunch of bids for the colorful little blocks on eBay, he came into possession of 2 tons (4,400 pounds) of Lego -- enough to fill his entire garage.

As Mattheij explains in his blog post, resellers can make up to €40 ($45) per kilogram for Lego sets, and rare parts and Lego Technic can fetch up to €100 ($112) per kg. If you really want to rake in the cash, however, you have to go through the exhaustive process of manually sorting through your bulk Lego before selling it in smaller groupings online. Instead of spending an eternity sifting through his own, intimidatingly large collection, Mattheij set to work on building an automated Lego sorter powered by a neural network that could classify the little building blocks. In case you were wondering, Lego comes in more than 38,000 shapes and over 100 shades of color, which amounts to a lot of sorting even with the aid of AI.
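A quick back-of-envelope on the economics described above, using the quoted dollar figures and assuming (optimistically) that the entire two-ton haul could be sold at those per-kilogram rates:

```python
bulk_kg = 2000        # 2 metric tons of Lego
set_rate = 45         # ~$45 per kg for sorted sets
rare_rate = 112       # ~$112 per kg for rare parts and Technic

print(f"all at the set rate:  ${bulk_kg * set_rate:,}")   # $90,000
print(f"all at the rare rate: ${bulk_kg * rare_rate:,}")  # $224,000
```

Even at the lower rate, the garage full of bricks is worth sorting.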

Starting with a proof of concept (built using Lego, naturally), Mattheij spent the following six months improving upon his prototype with a lot of DIY handiwork. In his own words, he describes his present setup as a "hodge-podge of re-purposed industrial gear" stuck together using "copious quantities of crazy glue" and a "heavily modified" home treadmill.

The current incarnation uses conveyor belts to carry the Lego past a web camera that is set up to take images of the blocks. These are then fed to the neural network as part of its classification training, and all Mattheij has to do is spot the errors in its judgement.

"As the neural net learns, there are fewer mistakes, and the labeling workload decreases," he states. "By the end of two weeks I had a training data set of 20,000 correctly labeled images."

With his prototype up and running, Mattheij claims he is just waiting for the machine learning software to reliably classify all of the images itself, and then he can start selling off the lucrative toys. If Mattheij manages to get the system working, he could then rechannel those profits into new expensive Lego projects.

Visit link:

Sorting Lego sucks, so here's an AI that does it for you - Engadget

AI Could Target Autism Before It Even Emerges, But It’s No … – Wired


Read the original:

AI Could Target Autism Before It Even Emerges, But It's No ... - Wired

Facebook says AI helped reduce hate speech on its platform last quarter – The Hindu

Facebook said nearly 97% of the hate speech and harassment content taken down in the final three months of last year was detected by automated systems before any human flagged it. In the July to September quarter, AI helped detect 94% of hate content, and 80% was spotted in late 2019.

The social network, in its Community Standards Enforcement Report, noted that in the fourth quarter ending December 2020, hate speech prevalence dropped to about 0.08% of total content from nearly 0.11%.

This means there were about seven to eight views of hate speech for every 10,000 views of content in Q4, Facebook said in a statement.
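The two ways the figure is stated, a percentage of content views and views per 10,000, are the same number. A quick check of the conversion:

```python
# Facebook's reported hate-speech prevalence, converted to views per 10,000.
for quarter, prevalence in (("Q3 2020", 0.0011), ("Q4 2020", 0.0008)):
    print(f"{quarter}: {prevalence:.2%} -> about {prevalence * 10_000:g} views per 10,000")
```

The "seven to eight" range in the statement simply reflects that 0.08% is itself a rounded figure.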

The California-based technology company introduced several artificial intelligence-powered systems last year to help detect misinformation. It started using AI technologies to identify hateful online content in 2016, and has since been adding several updates to its systems which now extends to images and other forms of media.

The company said its multilingual systems helped moderate content in several languages, including Arabic and Spanish, targeting nearly 27 million pieces of hateful content last quarter.

Facebook has faced criticism previously for its inability to curb hate speech on the platform. Most recently, the social network said it would reduce the distribution of all content and profiles run by Myanmar's military after it seized power and detained civilian leaders in a coup earlier in February.

Facebook also said last year it will undertake an independent third-party audit of its content moderation systems to validate the numbers it publishes.


More here:

Facebook says AI helped reduce hate speech on its platform last quarter - The Hindu

Here's how AI can help you sleep – The Next Web

Modern life is turning us into sleep-deprived zombies.

The traditional distractions of jobs, family, and friends have been exacerbated in recent years by irregular work, long commutes, smartphones, and all-night benders leaving us with little time to snooze. And that's without mentioning what keeps us up at night, whether it's drunken revelers in the street, existential angst, or the horrifying screams next door.

It's therefore unsurprising that two-thirds of adults in developed nations don't get the nightly eight hours of kip recommended by the World Health Organization, which doctors warn is leading us down a cheery path towards chronic diseases, mental health disorders, and dysfunctional relationships.

But don't worry, my fellow insomniacs: restful nights may soon be on the way. And it's all thanks to AI, of course, the digital age's panacea/snake oil.

That's according to the boffins at the American Academy of Sleep Medicine, who believe AI can improve the treatment of sleep disorders.

In a statement published yesterday, they explain that the vast volumes of data collected through sleep studies are ripe for algorithmic analysis.

The first application they suggest is in polysomnogram tests, which diagnose sleep disorders by analyzing brain waves, oxygen levels in the blood, heart rates, respiration, and eye and leg movements. Adding AI could both streamline the process and unearth new insights that can predict health outcomes.

But they also envision AI transcending the sleep lab to develop personalized treatments.

The American Academy of Sleep Medicine isn't the first group of academics to support using AI to help you sleep. In 2018, researchers from Stanford University found that a neural network could detect sleep issues more accurately than a human technician.

"Right now [sleep test scoring] is done by technicians, and clearly, there is no reason why it couldn't be done by a computer," Dr Emmanuel Mignot, an author of the study and the director of the Stanford Center for Sleep Sciences and Medicine, told the Sleep Review journal last year.

These scientific endorsements will help legitimize the growing list of products using AI to help you sleep.

They include SleepScore, an app that tracks your breathing rate and movements through your smartphone's microphone and speaker; DREEM, a headband that sends you soporific sounds through bone conduction; HEKA, a smart mattress that adjusts its position when you toss and turn; and Sleep.ai, an armband that detects snoring and then emits a vibration that pushes you onto your side.

Their combined efforts show that there's a vast range of ways that AI could help you sleep, even if it can't yet mute the drunks shouting outside your window.

Published March 3, 2020 15:43 UTC

See the original post:

Here's how AI can help you sleep - The Next Web

Have We Reached Peak AI Hysteria? – Niskanen Center (press release) (blog)

July 21, 2017 by Ryan Hagemann

At the recent annual meeting of the National Governors Association, Elon Musk spoke with his usual cavalier optimism on the future of technology and innovation. From solar power to our place among the stars, humanity's future looks pretty bright, according to Musk. But he was particularly dour on one emerging technology that supposedly poses an existential threat to humankind: artificial intelligence.

Musk called for strict, preemptive regulations on developments in AI, referencing numerous hypothetical doomsaying scenarios that might emerge if we go too far too fast. It's not the first time Musk has said that AI could portend a Terminator-style future, but it does seem to be the first time he's called for such stringent controls on the technology. And he's not alone.

In the preface to his book Superintelligence, Nick Bostrom contends that developing AI is quite possibly the most important and most daunting challenge humanity has ever faced. And, whether we succeed or fail, it is probably the last challenge we will ever face. Even Stephen Hawking has jumped on the panic wagon.

These concerns arent uniquely held by innovators, scientists, and academics. A Morning Consult poll found that a significant majority of Americans supported both domestic and international regulations on AI.

All of this suggests that we are in the midst of a full blown AI techno-panic. Fear of mass unemployment from automation and public safety concerns over autonomous vehicles have only exacerbated the growing tensions between man and machine.

Luckily, if history is any guide, the height of this hysteria means we're probably on the cusp of a period of deflating dread. New emerging technologies often stoke frenzied fears over worst-case scenarios, at least at the beginning. These concerns eventually rise to the point of peak alarm, followed by a gradual hollowing out of panic. Eventually, the technologies that were once seen as harbingers of the end times become mundane, common, and indispensable parts of our daily lives. Look no further than the early days of the automobile, RFID chips, and the Internet; so too will it be with AI.

Of course, detractors will argue that we should hedge against worst-possible outcomes, especially if the costs are potentially civilization-ending. After all, if there's something the government could do to minimize the costs while maximizing the benefits of AI, then policymakers should be all over that. So what's the solution?

Gov. Doug Ducey (R-AZ) asked that very question: "You've given some of these examples of how AI can be an existential threat, but I still don't understand, as policymakers, what type of regulations, beyond 'slow down,' which typically policymakers don't get in front of entrepreneurs or innovators, should be enacted." Musk's response? First, government needs to gain insight by standing up an agency to make sure the situation is understood. Then put in place regulations to protect public safety. That's it. Well, not quite.

The government has, in fact, already taken a stab at whether or not such an approach would be an ideal treatment of this technology. Last year, the Obama administration's Office of Science and Technology Policy released a report on the future of AI, derived from hundreds of comments from industry, civil society, technical experts, academics, and researchers.

While the report recognized the need for government to be privy to ongoing developments, its recommendations were largely benign, and it certainly didn't call for preemptive bans and regulatory approvals for AI. In fact, it concluded that it was very unlikely that machines will exhibit broadly-applicable intelligence comparable to or exceeding that of humans in the next 20 years.

In short, put off those end-of-the-world parties, because AI isn't going to snuff out civilization any time soon. Instead, embracing preemptive regulations could just smother domestic innovation in this field.

Despite Musk's claims, firms will actually outsource research and development elsewhere. Global innovation arbitrage is a very real phenomenon in an age of abundant interconnectivity and capital that can move like quicksilver across national boundaries. AI research is even less constrained by those artificial barriers than most technologies, especially in an era of cloud computing and diminishing costs to computer processing speeds, to say nothing of the rise of quantum computing.

Musk's solution to AI is uncharacteristically underwhelming. New federal agencies that impose precautionary regulations on AI aren't going to chart a better course to the future, any more than preemptive regulations for Google would have paved the way to our current age of information abundance.

Musk, of all people, should know the future is always rife with uncertainty; after all, he helps construct it with each new revolutionary undertaking. Imagine if there had been just a few additional regulatory barriers for SpaceX or Tesla to overcome. Would the world have been a better place if the public good demanded even more stringent regulations for commercial space launch or autopilot features? That's unlikely, and, notwithstanding Musk's apprehensions, the same is probably true for AI.

Original post:

Have We Reached Peak AI Hysteria? - Niskanen Center (press release) (blog)

The Era of AI Computing – FedScoop

At GTC, we unveiled Volta, our greatest generational leap since the invention of CUDA. It incorporates 21 billion transistors. It's built on a 12nm NVIDIA-optimized TSMC process. It includes the fastest HBM memories from Samsung. Volta features a new numeric format and CUDA instruction that perform 4x4 matrix operations (an elemental deep learning operation) at super-high speeds.

Each Volta GPU delivers 120 teraflops. And our DGX-1 AI supercomputer interconnects eight Tesla V100 GPUs to generate nearly one petaflops of deep learning performance.
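The "nearly one petaflops" figure follows directly from the two numbers quoted above, as a quick back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the DGX-1 figure: eight Tesla V100
# GPUs at 120 teraflops each, with 1 petaflops = 1,000 teraflops.
TFLOPS_PER_V100 = 120
GPUS_PER_DGX1 = 8

total_teraflops = TFLOPS_PER_V100 * GPUS_PER_DGX1
total_petaflops = total_teraflops / 1000

print(total_teraflops)  # 960
print(total_petaflops)  # 0.96, i.e. "nearly one petaflops"
```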

Google's TPU

Also last week, Google announced its TPU2 chip at its I/O conference, with 45 teraflops of performance.

It's great to see the two leading teams in AI computing race while we collaborate deeply across the board: tuning TensorFlow performance and accelerating the Google cloud with NVIDIA CUDA GPUs. AI is the greatest technology force in human history. Efforts to democratize AI and enable its rapid adoption are great to see.

Powering Through the End of Moore's Law

As Moore's law slows down, GPU computing performance, powered by improvements in everything from silicon to software, surges.

The AI revolution has arrived despite the fact that Moore's law (the combined effect of Dennard scaling and CPU architecture advances) began slowing nearly a decade ago. Dennard scaling, whereby reducing transistor size and voltage allowed designers to increase transistor density and speed while maintaining power density, is now limited by device physics.

CPU architects can harvest only modest ILP (instruction-level parallelism), and only with large increases in circuitry and energy. So, in the post-Moore's law era, a large increase in CPU transistors and energy results in a small increase in application performance. Performance has recently increased by only 10 percent a year, versus 50 percent a year in the past.
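Compounding makes that gap dramatic. A short sketch of what the two annual growth rates quoted above (50 percent historically, 10 percent recently) imply over a single decade:

```python
# Cumulative effect of the two annual CPU performance growth rates
# quoted in the text, compounded over ten years.
years = 10
historical = 1.50 ** years  # 50% per year, the old pace
recent = 1.10 ** years      # 10% per year, the recent pace

print(round(historical, 1))  # ~57.7x improvement in a decade
print(round(recent, 1))      # ~2.6x improvement in a decade
```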

The accelerated computing approach we pioneered targets specific domains of algorithms; adds a specialized processor to offload the CPU; and engages developers in each industry to accelerate their application by optimizing for our architecture. We work across the entire stack of algorithms, solvers and applications to eliminate all bottlenecks and achieve the speed of light.

That's why Volta unleashes incredible speedups for AI workloads. It provides a 5X improvement over Pascal, the current-generation NVIDIA GPU architecture, in peak teraflops, and 15X over the Maxwell architecture, launched just two years ago, well beyond what Moore's law would have predicted.

Accelerate Every Approach to AI

A sprawling ecosystem has grown up around the AI revolution.

Such leaps in performance have drawn innovators from every industry, with the number of startups building GPU-driven AI services growing more than 4x over the past year to 1,300.

No one wants to miss the next breakthrough. Software is eating the world, as Marc Andreessen said, but AI is eating software.

The number of software developers following the leading AI frameworks on the GitHub open-source software repository has grown to more than 75,000 from fewer than 5,000 over the past two years.

The latest frameworks can harness the performance of Volta to deliver dramatically faster training times and higher multi-node training performance.

Deep learning is a strategic imperative for every major tech company. It increasingly permeates every aspect of work, from infrastructure and tools to how products are made. We partner with every framework maker to wring out the last drop of performance. By optimizing each framework for our GPU, we can improve engineer productivity by hours and days for each of the hundreds of iterations needed to train a model. Every framework (Caffe2, Chainer, Microsoft Cognitive Toolkit, MXNet, PyTorch, TensorFlow) will be meticulously optimized for Volta.

The NVIDIA GPU Cloud platform gives AI developers access to our comprehensive deep learning software stack wherever they want it: on PCs, in the data center, or via the cloud.

We want to create an environment that lets developers do their work anywhere, and with any framework. For companies that want to keep their data in-house, we introduced powerful new workstations and servers at GTC.

Perhaps the most vibrant environment is the $247 billion market for public cloud services. Alibaba, Amazon, Baidu, Facebook, Google, IBM, Microsoft and Tencent all use NVIDIA GPUs in their data centers.

To help innovators move seamlessly to cloud services such as these, at GTC we launched the NVIDIA GPU Cloud platform, which contains a registry of pre-configured and optimized stacks of every framework. Each layer of software and all of the combinations have been tuned, tested and packaged up into an NVDocker container. We will continuously enhance and maintain it. We fix every bug that comes up. It all just works.

A Cambrian Explosion of Autonomous Machines

Deep learning's ability to detect features from raw data has created the conditions for a Cambrian explosion of autonomous machines: IoT with AI. There will be billions, perhaps trillions, of devices powered by AI.

At GTC, we announced that one of the 10 largest companies in the world, and one of the most admired, Toyota, has selected NVIDIA for its autonomous car.

We also announced Isaac, a virtual robot that helps make robots. Todays robots are hand programmed, and do exactly and only what they were programmed to do. Just as convolutional neural networks gave us the computer vision breakthrough needed to tackle self-driving cars, reinforcement learning and imitation learning may be the breakthroughs we need to tackle robotics.

Once trained, the brain of the robot would be downloaded into Jetson, our AI supercomputer in a module. The robot would stand and adapt to any differences between the virtual and real world. A new robot is born. For GTC, Isaac learned how to play hockey and golf.

Finally, we're open-sourcing the DLA, Deep Learning Accelerator (our version of a dedicated inferencing TPU), designed into our Xavier superchip for AI cars. We want to see the fastest possible adoption of AI everywhere. No one else needs to invest in building an inferencing TPU. We have one for free, designed by some of the best chip designers in the world.

Enabling the Einsteins and Da Vincis of Our Era

These are just the latest examples of how NVIDIA GPU computing has become the essential tool of the da Vincis and Einsteins of our time. For them, we've built the equivalent of a time machine. Building on the insatiable technology demand of 3D graphics and the market scale of gaming, NVIDIA has evolved the GPU into the computer brain that has opened a floodgate of innovation at the exciting intersection of virtual reality and artificial intelligence.

Learn the latest from NVIDIA on AI and Deep Learning in our newsletter.

Read the rest here:

The Era of AI Computing - FedScoop

Foghorn’s Ramya Ravichandar | Ensuring Value with Edge AI in IIoT Applications – IoT For All

In this episode of the IoT For All Podcast, we sat down with Ramya Ravichandar, VP of Products at FogHorn, to talk about edge AI and how it ensures value for IIoT and commercial IoT deployments. We cover some of the use cases where edge AI really shines, how machine learning and edge computing enable real-time analytics, and how companies can ensure that their IoT deployments create real value on install.

Ramya has a decade's experience in IoT and started in the industry at Cisco, where she headed its streaming analytics platform. She has a rare combination of technical expertise in real-time analytics, machine learning, and AI, combined with a wealth of experience in Industrial IoT.

To start the episode, Ramya gave us some background on FogHorn. FogHorn was founded in 2014 to address the IoT data deluge at the edge, empowering industrial and commercial sectors to achieve transformational business outcomes through AI and ML capabilities at the edge.

Ramya also shared a couple of use cases to illustrate the power of edge AI when applied in an industrial setting, including the real-time identification of defects on the manufacturing floor, enabling operators to take action immediately to prevent product loss. Ramya said that this represents the fundamental premise of all of the solutions FogHorn is involved with.

One of the big differences over the past several years, Ramya said, was the level of education of customers. The customer journey has evolved alongside technology. Customers used to find it hard to find the use case, Ramya said; "today, our customers are more savvy and knowledgeable. When they come to us they know exactly the problems they have and how they want to use IoT to address them." But the key to success, according to Ramya, was embracing the concept of a proof of value, rather than a proof of concept. "If you don't have that spark in your first few deployments, you're probably working on the wrong use cases," Ramya said.

Ramya walked us through edge AI at its core and how it enables some of the key features that customers need. At its core, Ramya said that edge AI is about taking a step beyond data collection and applying models to incoming data to gain new insights. FogHorn seeks to be the bridge between the data science expertise companies already have and bringing that data into practice on the manufacturing floor.

She also spoke to the continued importance of the cloud and how it works together with edge computing and edge AI to create more powerful models. As an example, Ramya used a drilling rig. A drilling rig, she said, can generate up to a terabyte of data daily, but less than 1% of that data may end up being analyzed. Moving all of that data could take days, so being able to sort and parse that data at the edge is imperative to putting it to work in real time. And while edge computing and edge AI are imperative to that fast turnaround, the only place those models can be trained is in the cloud, so "you have a model being trained and retrained in the cloud and pushed to each of those edge devices."
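The train-in-the-cloud, filter-at-the-edge pattern Ravichandar describes can be sketched in a few lines. Everything below is a hypothetical illustration: the function names, the threshold, and the stand-in "model" are invented for this sketch and are not FogHorn's API.

```python
# Hypothetical sketch: a model trained in the cloud is pushed to the
# edge, and the edge forwards only the readings the model flags,
# rather than shipping the full data stream upstream.

def edge_filter(readings, is_anomalous):
    """Keep only the readings the edge model flags for upstream analysis."""
    return [r for r in readings if is_anomalous(r)]

# A trivial stand-in for a cloud-trained model: flag pressure spikes.
cloud_trained_model = lambda reading: reading["pressure"] > 900

readings = [
    {"sensor": "rig-7", "pressure": 310},
    {"sensor": "rig-7", "pressure": 955},  # the anomaly worth forwarding
    {"sensor": "rig-7", "pressure": 298},
]

forwarded = edge_filter(readings, cloud_trained_model)
print(len(forwarded))  # only 1 of 3 readings leaves the edge
```

In a real deployment the "model" would be periodically retrained in the cloud and re-pushed to each edge device, as described in the interview.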

To wrap up the episode, Ramya walked us through some of the challenges FogHorn has faced while building its platform as well as what we can expect on the horizon for FogHorn.

Interested in connecting with Ramya? Reach out to her on Linkedin!

About FogHorn: FogHorn is a leading developer of intelligent edge computing software for industrial and commercial IoT application solutions. FogHorn's software platform brings the power of advanced analytics and machine learning to the on-premises edge environment, enabling a new class of applications for advanced monitoring and diagnostics, machine performance optimization, proactive maintenance, and operational intelligence use cases. FogHorn's technology is ideally suited for OEMs, systems integrators, and end customers in manufacturing, power and water, oil and gas, renewable energy, mining, transportation, healthcare, and retail, as well as smart grid, smart city, smart building, and connected vehicle applications.

(02:01) Intro to Ramya

(02:54) Intro to Foghorn

(04:34) Do you have any use cases or customer journey experiences you can share?

(06:49) How does edge computing help organizations move their IIoT projects toward full deployment?

(08:32) How do edge computing and AI play into delivering ROI to these use cases?

(11:04) What role does edge AI play in enabling an IIoT solution? What are the benefits?

(13:05) How does your platform integrate into the cloud structure?

(16:46) How does edge computing help with real-time functionality and accelerating automation?

(20:20) As youve been developing this platform, what are some of the challenges you and your clients have encountered?

(23:06) What stage are your customers usually coming to you in?

(24:32) Is there a stage thats too early to get a company like FogHorn involved?

(26:00) How do you handle IoT devices or deployments that have a smaller footprint?

Read the original:

Foghorn's Ramya Ravichandar | Ensuring Value with Edge AI in IIoT Applications - IoT For All

Follow the Money: Cash for AI Models, Oncology, Hematology Therapies – Bio-IT World

August 5, 2020 | Sema4 gets $121M to build dynamic models of human health and define optimal, individualized health trajectories. Glioblastoma, hematology, and acute pancreatitis all see new funding for therapy development. And AI-powered models net cash.

$257M: Series B for Liquid Biopsy for Multiple Cancers

Thrive Earlier Detection, Cambridge, Mass., closed $257 million in Series B financing. Funds will help advance CancerSEEK, a liquid biopsy test designed to detect multiple cancers at earlier stages of disease, into a registrational trial. The round was led by Casdin Capital and Section 32, with participation from new investors Bain Capital Life Sciences, Brown Advisory, Driehaus Capital Management, Intermountain Ventures, Janus Henderson Investors, Lux Capital, and more.

$121M: Series Cfor Data-Driven Health Intelligence

Sema4, Stamford, Conn., closed a Series C round led by BlackRock with additional new investors including Deerfield Management Company and Moore Strategic Ventures. Sema4 is dedicated to transforming healthcare by building dynamic models of human health and defining optimal, individualized health trajectories. The company began with an emphasis on reproductive health and recently launched Sema4 Signal, a family of products and services providing data-driven precision oncology solutions. Over the last several months, Sema4 has also joined the fight against COVID-19, integrating its premier clinical and scientific expertise with its cutting-edge digital capabilities to deliver a holistic testing program that enables organizations to make fast, informed decisions as they navigate COVID-19. The company has also launched Centrellis, an innovative health intelligence platform designed to provide a more complete understanding of disease and wellness and to offer physicians deeper insight into the patient populations they serve.

$112M: Series C for Phase 2 for GlioblastomaTreatment

Imvax, Philadelphia, raised $112 million in Series C financing from existing investors HP WILD Holding AG, Ziff Capital Partners, Magnetar Capital, and TLP Investment Partners, and new institutional investor Invus. The funds will support Phase 2 clinical development of IGV-001 for treatment of glioblastoma multiforme and Phase 1 research into additional solid tumor indications, and will help build out corporate and manufacturing capabilities.

$97M: Series C for Hematology, OncologyTherapies

Antengene Corporation, Shanghai, has closed $97 million in Series C financing led by Fidelity Management & Research Company with additional support from new investors including GL Ventures (an affiliate of Hillhouse Capital) and GIC. Existing investors including Qiming Venture Partners and Boyu Capital also participated. Proceeds from the Series C financing will be primarily used to fund the continuing clinical development of Antengene's robust pipeline of hematology and oncology therapies, expanding in-house research and development capabilities, and strengthening the commercial infrastructure in APAC markets.

$71M: Accelerate Finger-Prick Blood Analyzer

Sight Diagnostics, Tel Aviv, has raised $71 million from Koch Disruptive Technologies, Longliv Ventures (a member of CK Hutchison Holdings), and OurCrowd. The investment is meant to fuel Sight's R&D into the automated identification and detection of diseases through its FDA-cleared direct-from-fingerstick Complete Blood Count (CBC) analyzer. The new investment will enable Sight to substantially expand its U.S. footprint.

$50M: Series B Extensionfor Enzymatic DNA Synthesis

DNA Script, Paris, announced a $50 million extension to its Series B financing, bringing the total investment of this round to $89 million. This oversubscribed round is led by Casdin Capital and joined by Danaher Life Sciences, Agilent Technologies, and Merck KGaA, Darmstadt, Germany (through its corporate venture arm, M Ventures), three of the world's leaders in oligo synthesis, as well as LSP, the Bpifrance Large Venture Fund, and Illumina Ventures. Funding from this investment round will enable DNA Script to accelerate the development of its suite of enzymatic DNA synthesis (EDS) technologies, in particular to support the commercial launch of the company's SYNTAX DNA benchtop printer.

$25M: Series A for Targeted Exosome Vehicles

Mantra Bio, San Francisco, has raised $25 million in a Series A financing to advance development of next-generation precision therapeutics based on its proprietary platform for engineering Targeted Exosome Vehicles (TEVs). 8VC and Viking Global Investors led the round, which also included Box Group and Allen & Company LLC. Mantra Bio's REVEAL is an automated high-throughput platform that rapidly designs, tests, and optimizes TEVs for specific therapeutic applications. The platform integrates computational approaches, wet biology, and robotics to leverage the diversity of exosomes and enable the rational design of therapeutics directed at a wide range of tissue and cellular targets.

$23.7M: Shared Grant for Biologically Based Polymers

The National Science Foundation has named the University of California, Los Angeles and the University of California, Santa Barbara partners in a collaboration called BioPACIFIC MIP, for BioPolymers, Automated Cellular Infrastructure, Flow, and Integrated Chemistry: Materials Innovation Platform, and has funded the effort with a five-year, $23.7 million grant. The initiative is part of the NSF Materials Innovation Platforms program, and its scientific methodology reflects the broad goals of the federal government's Materials Genome Initiative, which aims to develop new materials twice as fast at a fraction of the cost. The collaboration aims to advance the use of microbes for sustainable production of new plastics.

$17M: Series A to Scale At-Home Blood Collection

Tasso, Seattle, secured a $17 million Series A financing round led by Hambrecht Ducera Growth Ventures that included Foresite Capital, Merck Global Health Innovation Fund, Vertical Venture Partners, Techstars, and Cedars-Sinai. The company will use the proceeds to scale manufacturing and operations to meet the increased demand for its line of innovative Tasso OnDemand devices, which enable people to collect their own blood using a virtually painless process from anywhere at any time. These fast and easy-to-use products are being adopted by leading academic medical institutions, government agencies, comprehensive cancer centers, and pharmaceutical organizations around the world.

$12M: Molecular Data, AI Build Therapeutic Models

Endpoint Health, Palo Alto, Calif., emerged from stealth mode in mid-July with $12 million in debt and equity financing led by Mayfield to make targeted therapies for patients with critical illnesses including sepsis and acute respiratory distress syndrome (ARDS). Endpoint Health is led by an experienced executive team including the co-founders of GeneWEAVE, an infection detection and therapy guidance company that was acquired by Roche in 2015. Endpoint Health's approach combines molecular and digital patient data with AI to create comprehensive therapeutic models: tools that identify distinct patient subgroups and treatment patterns in order to highlight unmet therapeutic needs. These models are used to identify late-stage and on-market therapies, often created for other indications, that Endpoint can develop into targeted therapies, which will include the required tests and software to guide their use.

$12M: Start of an NIH Contract For COVID-19 Microfluidics

Fluidigm Corporation, South San Francisco, Calif., announced execution of a letter contract with the National Institutes of Health, National Institute of Biomedical Imaging and Bioengineering, for a proposed project under the agency's Rapid Acceleration of Diagnostics (RADx) program. The project, with a total proposed budget of up to $37 million, contemplates expanding production capacity and throughput capabilities for COVID-19 testing with Fluidigm microfluidics technology. The letter contract provides Fluidigm with access to up to $12 million of initial funding based on completion and delivery of certain validation milestones prior to execution of the definitive contract. A goal of the RADx initiative is to enable approximately 6 million daily tests in the United States by December 2020.

$6.5M: Series A for AI-Powered Precision Oncology

Nucleai, Tel Aviv, a computational biology company providing an AI-powered precision oncology platform for research and treatment decisions, secured an initial closing of its $6.5M Series A. Debiopharm's strategic corporate venture capital fund led the round, joined by existing investors Vertex Ventures and Grove Ventures. Nucleai's core technology analyzes large and unique datasets of tissue images using computer vision and machine learning methods to model the spatial characteristics of both the tumor and the patient's immune system, creating unique signatures that are predictive of patient response.

$5M: Pharma Grant for Rural Lung Cancer

Stand Up To Cancer, New York, received a new $5 million grant from Bristol Myers Squibb to fund research and education efforts aimed at achieving health equity for underserved lung cancer patients, including Black people and people living in rural communities. The research efforts funded by the three-year grant will consist of supplemental grants to current Stand Up To Cancer research teams. The supplemental grants will focus on identifying new and innovative diagnostic and treatment methods for lung cancer patients in need, and will be designed to jumpstart pilot projects at the intersection of lung cancers, health disparities, and rural healthcare, for instance increasing clinical trial enrollment among historically under-represented groups. Since 2014, Bristol Myers Squibb has provided funding for important Stand Up To Cancer research initiatives.

$2.5M: Cloud-Based XR Platform

Grid Raster, Mountain View, Calif., secured $2.5 million led by Blackhorn Ventures with participation from other existing investors MaC Venture Capital and Exfinity Venture Partners. This infusion of additional capital enables Grid Raster to continue developing its XR solutions, powered by cloud-based remote rendering and 3D vision-based AI, in key customer markets that include aerospace, defense, automotive, and telecommunications.

$1.5M: SBIR for Acute Pancreatitis

Lamassu Pharma has received $1.5 million in Small Business Innovation Research (SBIR) grant funding from the National Institutes of Health (NIH). This will be used for further development of its lead therapeutic compound, RABI-767, a novel small molecule lipase inhibitor licensed from the Mayo Foundation for Medical Education and Research. Lamassu is developing RABI-767 to fill a critical, unmet clinical need for a treatment for acute pancreatitis (AP). Lamassu's proposed treatment is designed to mitigate the systemic toxicity and organ failure associated with acute pancreatitis that cause lengthy hospitalization and death, thus saving both lives and healthcare system resources. Funding from the NIH will enable Lamassu to further its translational research, to bring RABI-767 to human trials, and to partner with clinical and commercial development partners.

$800K: Protein Interaction Platform

A-Alpha Bio, Seattle, has been awarded an $800,000 grant to optimize therapeutics for infectious diseases. Awarded by the Bill & Melinda Gates Foundation, the grant work will be carried out by A-Alpha Bio in partnership with Lumen Bioscience using machine learning models built from data generated by A-Alpha Bio's proprietary AlphaSeq platform. A-Alpha Bio has already completed a pilot study in partnership with Lumen Bioscience and supported by the Gates Foundation; it successfully demonstrated the AlphaSeq platform's ability to characterize binding of therapeutic antibodies against multiple pathogen strains simultaneously. With the latest grant, the companies will use AlphaSeq data to train machine learning models for the development of potent and cross-reactive therapeutics against intestinal and respiratory pathogens.

$620K: Grant for Gas-Sensing Ingestible

Atmo Biosciences, Melbourne and Sydney, Australia, has been awarded a $620,000 Australian Government grant through the BioMedTech Horizons (BMTH) program. Atmo addresses the unmet clinical need to interrogate and monitor the function of the gut microbiota, allowing better diagnosis and development of personalized therapies for gastrointestinal disorders, resulting in earlier and more successful relief of symptoms and reduced healthcare costs. Atmo's platform is underpinned by the Atmo Gas Capsule, a world-first ingestible gas-sensing capsule that senses clinically important gaseous biomarkers produced by the microbiome in the gastrointestinal system. This data is wirelessly transmitted to the cloud for aggregation and analysis.

Excerpt from:

Follow the Money: Cash for AI Models, Oncology, Hematology Therapies - Bio-IT World

Red Hat and IBM Research Advance IT Automation with AI-Powered Capabilities for Ansible – Business Wire

CHICAGO ANSIBLEFEST--(BUSINESS WIRE)--Red Hat, Inc., the world's leading provider of open source solutions, and IBM Research today announced Project Wisdom, the first community project to create an intelligent, natural language processing capability for Ansible and the IT automation industry. Using an artificial intelligence (AI) model, the project aims to boost the productivity of IT automation developers and make IT automation more achievable and understandable for diverse IT professionals with varied skills and backgrounds.

According to a 2021 IDC prediction1, by 2026, 85% of enterprises will combine human expertise with AI, ML, NLP, and pattern recognition to augment foresight across the organization, making workers 25% more productive and effective. Technologies such as machine learning, deep learning, natural language processing, pattern recognition, and knowledge graphs are producing increasingly accurate and context-aware insights, predictions, and recommendations.

Project Wisdom, underpinned by AI foundation models derived from IBM's AI for Code efforts, works by enabling a user to input a command as a straightforward English sentence. It then parses the sentence and builds the requested automation workflow, delivered as an Ansible Playbook, which can be used to automate any number of IT tasks. Unlike other AI-driven coding tools, Project Wisdom does not focus on application development; instead, the project centers on addressing the rise of complexity in enterprise IT as hybrid cloud adoption grows.

From human readable to human interactive

Becoming an automation expert demands significant effort and resources over time, with a learning curve that varies by domain. Project Wisdom intends to bridge the gap between Ansible YAML code and human language, so users can use plain English to generate syntactically correct and functional automation content.
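To make the idea concrete, here is a hypothetical illustration of the kind of translation the project describes; this is not actual Project Wisdom output, just a sketch of how a plain-English request such as "Install nginx and make sure it is running" maps onto an ordinary Ansible Playbook. The module names are standard Ansible builtins; the English-to-YAML mapping itself is invented for this example.

```yaml
# Hypothetical: a playbook a tool like Project Wisdom might generate
# from the request "Install nginx and make sure it is running".
- name: Install nginx and make sure it is running
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is started and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```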

It could enable a system administrator who typically delivers on-premises services to reach across domains to build, configure, and operate in other environments, using natural language to generate playbook instructions. A developer who knows how to build an application, but lacks the skillset to provision it on a new cloud platform, could use Project Wisdom to expand proficiencies in these new areas and help transform the business. Novices across departments could generate content right away while still building foundational knowledge, without the dependencies of traditional teaching models.

Driving open source innovation with collaboration

While the power of AI in enterprise IT cannot be denied, community collaboration, along with insights from Red Hat and IBM, will be key in delivering an AI/ML model that aligns to the key tenets of open source technology. Red Hat has more than two decades of experience in collaborating on community projects and protecting open source licenses in defense of free software. Project Wisdom, and its underlying AI model, are an extension of this commitment to keeping all aspects of the code base open and transparent to the community.

As hybrid cloud operations at scale become a key focus for organizations, Red Hat is committed to building the next wave of innovation on open source technology. As IBM Research and Ansible specialists at Red Hat work to fine tune the AI model, the Ansible community will play a crucial role as subject matter experts and beta testers to push the boundaries of what can be achieved together. While community participation is still being worked through, those interested can stay up to date on progress here.

Supporting Quotes

Chris Wright, CTO and SVP of Global Engineering, Red Hat

"This project exemplifies how artificial intelligence has the power to fundamentally shift how businesses innovate, expanding capabilities that typically reside within operations teams to other corners of the business. With intelligent solutions, enterprises can decrease the barrier to entry, address burgeoning skills gaps, and break down organization-wide silos to reimagine work in the enterprise world."

Ruchir Puri, chief scientist, IBM Research; IBM Fellow; vice president, IBM Technical Community

"Project Wisdom is proof of the significant opportunities that can be achieved across technology and the enterprise when we combine the latest in artificial intelligence and software. It's truly an exciting time as we continue advancing how today's AI and hybrid cloud technologies are building the computers and systems of tomorrow."

1IDC FutureScape: Worldwide Artificial Intelligence and Automation 2022 Predictions, Doc # US48298421, Oct 2021

Additional Resources

Connect with Red Hat

About Red Hat, Inc.

Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

About IBM

IBM is a leading global hybrid cloud, AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs, and gain the competitive edge in their industries. Nearly 4,000 government and corporate entities in critical infrastructure areas such as financial services, telecommunications, and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently, and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions, and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity, and service. For more information, visit https://research.ibm.com.

Forward-Looking Statements

Except for the historical information and discussions contained herein, statements contained in this press release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are based on the company's current assumptions regarding future business and financial performance. These statements involve a number of risks, uncertainties and other factors that could cause actual results to differ materially. Any forward-looking statement in this press release speaks only as of the date on which it is made. Except as required by law, the company assumes no obligation to update or revise any forward-looking statements.

Red Hat, the Red Hat logo and Ansible are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

The rest is here:

Red Hat and IBM Research Advance IT Automation with AI-Powered Capabilities for Ansible - Business Wire

China wants to be a $150 billion world leader in AI in less than 15 years – CNBC

Zhang Peng | LightRocket | Getty Images

Robots dance for the audience at the expo. The Beijing International Consumer Electronics Expo was held on July 8 at the China National Convention Center in Beijing.

The first part of the plan runs up to 2020 and proposes that China make progress in developing a "new generation" of AI theory and technology. This will be implemented in some devices and basic software. It will also involve the development of standards, policies, and ethics for AI across the world's second-largest economy.

In the second step of the plan, which runs up to 2025, China expects to achieve a "major breakthrough" in AI technology and its application, leading to "industrial upgrading and economic transformation".

The last step, which will happen between 2025 and 2030, sees China become the world leader in AI, with the industry worth 1 trillion yuan.


Local COVID-19 Forecasts by AI – The UCSB Current

Despite efforts throughout the United States last spring to suppress the spread of the novel coronavirus, states across the country have experienced spikes in the past several weeks. The number of confirmed COVID-19 cases in the nation has climbed to more than 3.5 million since the start of the pandemic.

Public officials in many states, including California, have now started to roll back the reopening process to help curb the spread of the virus. Eventually, state and local policymakers will be faced with deciding for a second time when and how to reopen their communities. A pair of researchers in UC Santa Barbara's College of Engineering, Xifeng Yan and Yu-Xiang Wang, have developed a novel forecasting model, inspired by artificial intelligence (AI) techniques, to provide timely information at a more localized level that officials and anyone in the public can use in their decision-making processes.

"We are all overwhelmed by the data, most of which is provided at national and state levels," said Yan, an associate professor who holds the Venkatesh Narayanamurti Chair in Computer Science. "Parents are more interested in what is happening in their school district and if it's safe for their kids to go to school in the fall. However, there are very few websites providing that information. We aim to provide forecasting and explanations at a localized level with data that is more useful for residents and decision makers."

The forecasting project, Interventional COVID-19 Response Forecasting in Local Communities Using Neural Domain Adaption Models, received a Rapid Response Research (RAPID) grant for nearly $200,000 from the National Science Foundation (NSF).

"The challenges of making sense of messy data are precisely the type of problems that we deal with every day as computer scientists working in AI and machine learning," said Wang, an assistant professor of computer science and holder of the Eugene Aas Chair. "We are compelled to lend our expertise to help communities make informed decisions."

Yan and Wang developed an innovative forecasting algorithm based on a deep learning model called Transformer. The model is driven by an attention mechanism that intuitively learns how to forecast by learning what time period in the past to look at and what data is the most important and relevant.

"If we are trying to forecast for a specific region, like Santa Barbara County, our algorithm compares the growth curves of COVID-19 cases across different regions over a period of time to determine the most-similar regions. It then weighs these regions to forecast cases in the target region," explained Yan.
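The researchers' actual model is a full Transformer; the core idea Yan describes, scoring other regions by how closely their growth curves match the target's, then blending what those regions experienced next, can be sketched in a few lines. All names, shapes, and parameter values below are illustrative assumptions, not the published model.

```python
import numpy as np

def region_similarity_forecast(target_curve, region_curves, region_futures,
                               temperature=1.0):
    """Forecast a target region's next case counts by attention-weighting
    regions whose recent growth curves look most similar.

    target_curve:   (T,)   recent case counts for the target region
    region_curves:  (R, T) aligned historical curves for R candidate regions
    region_futures: (R, H) each candidate region's subsequent H observations
    """
    # Score each candidate by (negative) mean squared distance to the target
    scores = -np.mean((region_curves - target_curve) ** 2, axis=1) / temperature
    # Softmax converts scores into attention weights that sum to 1
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Blend what the most-similar regions experienced next
    return weights @ region_futures

# Toy example: the target's curve matches region 0 exactly, so with a low
# temperature the forecast should track region 0's subsequent observations.
target = np.array([1.0, 2.0, 4.0])
curves = np.array([[1.0, 2.0, 4.0],      # region 0: same doubling pattern
                   [10.0, 10.0, 10.0]])  # region 1: flat
futures = np.array([[8.0, 16.0],
                    [10.0, 10.0]])
forecast = region_similarity_forecast(target, curves, futures, temperature=0.01)
```

The `temperature` knob controls how sharply the weighting concentrates on the single best-matching region versus averaging over many.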

In addition to COVID-19 data, the algorithm also draws information from the U.S. Census to factor in hyper-local details when calibrating the forecast for a local community.

"The census data is very informative because it implicitly captures the culture, lifestyle, demographics and types of businesses in each local community," said Wang. "When you combine that with COVID-19 data available by region, it helps us transfer the knowledge learned from one region to another, which will be useful for communities that want data on the effectiveness of interventions in order to make informed decisions."

The researchers' models showed that, during the recent spike, Santa Barbara County experienced spread similar to what Mecklenburg, Wake, and Durham counties in North Carolina saw in late March and early April. Using those counties to forecast future cases in Santa Barbara County, the researchers' attention-based model outperformed the most commonly used epidemiological models: the SIR (susceptible, infected, recovered) model, which describes the flow of individuals through three mutually exclusive stages; and the autoregressive model, which makes predictions based solely on a series of data points displayed over time. The AI-based model had a mean absolute percentage error (MAPE) of 0.030, compared with 0.11 for the SIR model and 0.072 for the autoregressive model. The MAPE is a common measure of prediction accuracy in statistics.
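The MAPE figures quoted above have a simple definition: average the absolute errors, each expressed as a fraction of the true value. A minimal implementation:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error as a fraction (0.030 means 3%)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)))

# A 10% miss on each point gives a MAPE of 0.10
score = mape([100, 200], [110, 180])
```

Note that MAPE is undefined when any true value is zero, which is why reported scores are usually computed over periods with nonzero case counts.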

Yan and Wang say their model forecasts more accurately because it eliminates key weaknesses associated with current models. Census data provides fine-grained details missing in existing simulation models, while the attention mechanism leverages the substantial amounts of data now available publicly.

"Humans, even trained professionals, are not able to process massive data as effectively as computer algorithms," said Wang. "Our research provides tools for automatically extracting useful information from the data to simplify the picture, rather than making it more complicated."

The project, conducted in collaboration with Dr. Richard Beswick and Dr. Lynn Fitzgibbons from Cottage Hospital in Santa Barbara, will be presented later this month during the Computing Research Association (CRA) Virtual Conference. Formed in 1972 as a forum for chairs of computer science departments across the country, the CRA's membership has grown to include more than 200 organizations active in computing research.

Yan and Wang's research efforts will not stop there. They plan to make their model and forecasts available to the public via a website and to collect enough data to forecast for communities across the country. "We hope to forecast for every community in the country because we believe that when people are well informed with local data, they will make well-informed decisions," said Yan.

They also hope their algorithm can be used to forecast what could happen if a particular intervention is implemented at a specific time.

"Because our research focuses on more fundamental aspects, the developed tools can be applied to a variety of factors," added Yan. "Hopefully, the next time we are in such a situation, we will be better equipped to make the right decisions at the right time."


This robotic glove uses AI to help people with hand weakness regain muscle grip – The Next Web

A Scottish biotech startup has invented an AI-powered robotic glove that helps people recover muscle grip in their hands.

BioLiberty designed the glove for people who suffer from hand weakness due to age or illnesses such as motor neurone disease and carpal tunnel syndrome.

The system detects the wearer's intention to grip by using electromyography (EMG) to measure the electrical activity generated by a nerve's stimulation of the muscle.

An algorithm then converts the intent into force to help the wearer strengthen their grip on an object.
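BioLiberty has not published its algorithm, but the pipeline described here, detect activation in the EMG signal, then map it to an assistive force, follows a well-known pattern. A toy sketch under that assumption, with every threshold, gain, and limit value purely illustrative:

```python
import numpy as np

def grip_assist_force(emg_window, threshold=0.1, gain=5.0, max_force=20.0):
    """Map a window of raw EMG samples to an assistive force in newtons.

    Hypothetical parameters: `threshold` is the activation level treated as
    grip intent, `gain` scales activation into force, `max_force` caps the
    actuator output. None of these values come from BioLiberty.
    """
    # Rectify-and-average gives a simple envelope estimate of muscle activation
    activation = float(np.mean(np.abs(emg_window)))
    if activation < threshold:
        return 0.0  # below threshold: no grip intent detected
    # Scale the excess activation into force, capped at the actuator's limit
    return min(gain * (activation - threshold), max_force)

resting = grip_assist_force(np.zeros(100))       # no muscle activity
gripping = grip_assist_force(np.full(100, 1.1))  # sustained activation
```

A real controller would filter the signal, adapt the threshold per user, and ramp force smoothly rather than stepping it, but the intent-to-force mapping is the core idea.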

The glove could help users with a wide range of daily tasks, from driving to opening jars.


BioLiberty cofounder Ross Hanlon said he got the idea when an aunt with multiple sclerosis started struggling with simple tasks like drinking water:

"Being an engineer, I decided to use technology to tackle these challenges head-on with the aim of helping people like my aunt to retain their autonomy. As well as those affected by illness, the population continues to age and this places increasing pressure on care services. We wanted to support independent living and healthy aging by enabling individuals to live more comfortably in their own homes for longer."

Hanlon's aunt is one of around 2.5 million UK citizens who suffer from hand weakness. An aging population means this number will only increase.

BioLiberty's robotic glove and digital therapy platform could help them regain their strength.

The company has already developed a working prototype of the glove. The team now plans to use support from Edinburgh Business School's Incubator to bring the glove into homes.

Ultimately, they want their tech to help people suffering from reduced mobility to regain their independence.

Published February 16, 2021 16:17 UTC
