AI FEARS: New laws DEMANDED over concerns at speed of super robots taking over our jobs – Express.co.uk


Vast swathes of the global workforce could be replaced by machines thanks to rapid technological change and innovation in artificial intelligence and robotics.

And future governments will be forced to bring in legislation to ensure quotas of human workers as traditional working practices are turned on their head.

Gerlind Wisskirchen, a Cologne-based employment lawyer and vice-chair of the International Bar Association's (IBA) global employment institute, said existing legal frameworks regulating employment and safety are rapidly becoming outdated.



She said: "What is new about the present revolution is the alacrity with which change is occurring, and the broadness of impact being brought about by AI and robotics."

Jobs at all levels in society presently undertaken by humans are at risk of being reassigned to robots or AI.

And the legislation once in place to protect the rights of human workers may be no longer fit for purpose.

In some cases new labour and employment legislation is urgently needed to keep pace with increased automation.


Ms Wisskirchen's report for the IBA said the competitive advantage of poorer, emerging economies, which rely on cheaper workforces, will soon be a thing of the past as robot production lines and intelligent computer systems undercut the cost of human labour.

A German car worker costs more than £34 an hour, but a robot costs around £5 per hour.

She said: "A production robot is thus cheaper than a worker in China. Nor does a robot become ill, have children or go on strike, and it is not entitled to annual leave."

Ms Wisskirchen warned that white-collar professions will not be immune to the AI revolution, and predicted a third of graduate-level jobs around the world would eventually be replaced by machines or software.


Among the professions most likely to disappear are accountants, court clerks and desk officers at fiscal authorities.

The report covers both changes already transforming work and the future consequences of what it describes as industrial revolution 4.0.

The three preceding revolutions are listed as: industrialisation, electrification and digitalisation.

Industry 4.0 involves the integration of physical systems and software in both production and the service sector.

The report names Amazon, Uber, Facebook, smart factories and 3D printing as pioneers.

Ms Wisskirchen said governments will one day have to decide which jobs, such as childcare, should be performed exclusively by humans.


She said: "The state could introduce a kind of 'human quota' in any sector, and decide whether it intends to introduce a 'made by humans' label or to tax the use of machines."

She noted that the traditional workplace is disintegrating, with more part-time employees, distance working, and a blurring of professional and private time.

She said: "It is being replaced by the 'latte macchiato' workplace, where employees or freelance workers are based in the cafe around the corner, working from their laptops."

The workplace may eventually serve only the purpose of maintaining a social network between colleagues.


AI and Blockchain: Double the Hype or Double the Value? – Forbes

blockchain

Artificial Intelligence (AI) as a market is full of hype, with vendors, customers, and press all speaking breathlessly about the capabilities of AI in general and their offerings specifically. Likewise, blockchain is a widely hyped market, with technology providers and customers claiming all sorts of capabilities that may or may not be possible. Combining AI and blockchain, then, must surely be double the hype? On the other hand, AI is providing real, tangible value in the myriad ways we talk about every day, and blockchain is starting to show value across a range of applications and industries. So perhaps combining AI and blockchain will deliver double the value as well.

The role of blockchain in the context of AI

Blockchain is a decentralized, distributed ledger of transactions with elements of transparency, trust, verifiability, and something called smart contracts. Decentralized means that information is stored across the network in such a way that each endpoint has access to the data without requiring access to a central server. The network is also distributed because transactions happen at each endpoint without requiring centralized coordination. A ledger is a record of transactions: blockchain records a ledger of interactions between two separate parties, whether a financial exchange or even a chain of custody showing when things have changed hands over time.

Because every block in the blockchain contains its own piece of information, encrypted or encoded, the blockchain can help guarantee the trust and verifiability of data. The idea behind the chain of blocks is that each block contains its own information plus a link to the block before it, which builds the chain and provides a verifiable chain of custody. No individual actor can change the information in a block without invalidating that block and every block after it, thus breaking the chain. And since the chain is distributed across myriad other places and parties, making changes to it would require a consensus of all the parties involved.
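The hash-linking mechanism described above can be sketched in a few lines of Python. This is an illustrative toy only (no consensus, no proof-of-work, no network), but it shows why editing one block invalidates everything after it:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, including the previous block's hash,
    # so that any change invalidates every later block.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)
    return chain

def verify(chain):
    # Recompute each hash and check the links; a single edited block
    # breaks verification from that point onward.
    for i, block in enumerate(chain):
        expected = block_hash({"data": block["data"], "prev_hash": block["prev_hash"]})
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, {"from": "alice", "to": "bob", "amount": 5})
add_block(chain, {"from": "bob", "to": "carol", "amount": 2})
print(verify(chain))            # True: chain is intact
chain[0]["data"]["amount"] = 500
print(verify(chain))            # False: tampering detected
```

Getting a forged chain accepted would require recomputing every downstream hash on a majority of the distributed copies, which is exactly the consensus barrier the article describes.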

Adding to the concept of the blockchain are smart contracts: decentralized pieces of code that can be triggered when a specific set of conditions has been met. This is what blockchain is ideally suited for: two parties executing a secure, trusted transaction without the use of an intermediary.

So how can blockchain help with AI? The first benefit is that machine learning models can be shared among all parties without an intermediary. A good example is facial recognition software. If one device knows what a person looks like and uploads that knowledge to the chain, the other connected devices also know what that person looks like. As each device uploads its own facial recognition data, every other device gains the ability to use it in its facial recognition model. Because this happens on the blockchain, there is no central control over facial recognition, and no single company owns or stores the data. This approach also allows everyone to learn faster and collaboratively through integrating AI with blockchain.

AI systems can also use blockchain to facilitate the sharing of data across multiple models. A good example is machine learning models for product recommendations in online retail. If one online store knows the preferences of a shopper, and that customer then visits a different store's website, the two sites can be connected through a blockchain to share trusted personalization information. This could become a way for smaller ecommerce sites to compete: instead of each company gathering its own personalization data, they can share it amongst each other in a blockchain, providing competition to sites like Amazon and Walmart, which have already built their own data systems to gather this information about their customers. The benefit to the customer is that, in exchange for sharing their information, they can get everything from better pricing to customized shopping experiences. This could also prevent data breaches if information and payment systems are stored and shared through a blockchain rather than on a centralized data server.

How blockchain can benefit AI

Not only can blockchains be used to share models and data; they can also serve as a kind of "master brain" shared across multiple AI systems. Putting these shared-learning benefits of blockchain and AI together, devices could learn from their surroundings and then share that learning with every AI system on the network. A major benefit would be that no one owns this shared brain, and there's no government control over it. It could potentially be unbiased because of the sheer amount of information coming in from different areas and different angles.

Another application is addressing the challenge of explainable AI. One of the more significant problems with deep learning is that there isn't a clear idea of which inputs produce which outputs, or how each step affected the whole sequence. If something goes wrong in a deep learning neural network, we don't have a clear way to identify the problem and correct it: the network is in effect a black box without any real transparency or explainability. However, with a blockchain we can record how individual actions led to a final decision in a non-repudiable manner, which allows us to go back, see where things went wrong, and fix the problem. The blockchain would record events, such as an autonomous vehicle's decisions and actions, in a way that cannot be modified later. This can also increase trust: since the blockchain element is neutral and used only for storage, anyone can go in and see what has happened.

Finally, AI systems can be used to improve blockchains themselves. Machine learning systems can keep an eye on what is happening in the blockchain, looking for patterns and anomalies in the types of data being stored and the actions being performed, and alerting users when something unusual may be happening. By learning what normal behavior looks like and flagging deviations from it, AI systems can help keep blockchains more secure, more reliable, and more efficient.
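As a toy illustration of that monitoring idea (the function name, data, and threshold are my own, not from the article), a model of "normal" behavior can be as simple as flagging values that sit far from the historical mean; real systems would use a learned detector, but the flag-the-outlier logic is the same:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.5):
    # Flag entries more than `threshold` sample standard deviations
    # from the mean -- a crude stand-in for a learned anomaly detector.
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Mostly small transfers, plus one outlier a monitor should flag.
transfers = [5, 7, 6, 5, 8, 6, 7, 5, 6, 5000]
print(flag_anomalies(transfers))  # [5000]
```

A production monitor would stream blocks as they are appended and raise an alert instead of printing, but the core pattern, comparing new activity to a learned baseline, is what the paragraph above describes.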

While it's quite possible that the worlds of AI and blockchain are full of hype, there are tangible, realistic ways in which the two emerging technologies can benefit each other and provide real outcomes for those looking to implement them in their environments today.


NASAs impressive new AI can predict when a hurricane intensifies – The Next Web

Meteorologists have gotten pretty damn good at forecasting a hurricane's track. But they still struggle to calculate when it will intensify, as it's seriously hard to understand what's happening inside a tropical cyclone.

A new machine learning model developed by NASA could dramatically improve their calculations, and give people in a hurricane's path more time to prepare.

Scientists at the space agency's Jet Propulsion Laboratory in Southern California developed the system after searching through years of satellite data.

They discovered three strong signals that a hurricane will become more severe: abundant rainfall inside the storm's inner core; the amount of ice water in the clouds within the tropical cyclone; and the temperature of the air flowing away from the eye of the hurricane.


The team then used IBM Watson Studio to build a model that analyzes all these factors, as well as those already used by the National Hurricane Center, a US government agency that monitors hazardous tropical weather.

The researchers trained the model to detect when a hurricane will undergo rapid intensification (which happens when wind speeds increase by 56 km/h or more within 24 hours) on storms that swept across the US between 1998 and 2008. They then tested it on a separate set of storms that hit the country from 2009 to 2014. Finally, they compared the system's forecasts to those of the model used by the National Hurricane Center for the latter set of storms.

The team says their model was 60% more likely to predict that a hurricane's winds would increase by at least 56 km/h within 24 hours. And for hurricanes whose winds shot up by at least 64 km/h, the new system had a 200% higher chance of detecting these events.
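The rapid-intensification criterion itself is simple enough to state in code. Below is a sketch (function name and data layout are my own, not NASA's) that scans a wind-speed time series, sampled hourly in km/h, for any 24-hour window in which winds rise by 56 km/h or more; the hard part the model tackles is predicting these events ahead of time, not detecting them after the fact:

```python
def rapid_intensification_times(winds_kmph, window_hours=24, threshold_kmph=56):
    # winds_kmph: wind speed sampled once per hour.
    # Returns the hour indices at which the preceding `window_hours`
    # saw an increase of at least `threshold_kmph`.
    events = []
    for t in range(window_hours, len(winds_kmph)):
        if winds_kmph[t] - winds_kmph[t - window_hours] >= threshold_kmph:
            events.append(t)
    return events

# Synthetic storm: steady at 100 km/h, then intensifying 3 km/h per hour.
winds = [100] * 24 + [100 + 3 * h for h in range(1, 31)]
print(rapid_intensification_times(winds)[:1])  # [42]: first hour meeting the criterion
```

Labels produced this way from the 1998-2008 storms would be the training targets; the model's inputs are the satellite-derived signals (core rainfall, cloud ice water, outflow temperature) listed above.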

The team is now testing the model on storms during the current hurricane season. If that proves successful, it could help minimize the loss of life and property caused when future tropical cyclones hit.

You can read a research paper on the model in the journal Geophysical Research Letters.


Published September 3, 2020 10:23 UTC


There is a good case to unleash job-killing AI on the high seas – New Scientist



Facebook’s radioactive data tracks the images used to train an AI – MIT Technology Review

The news: A team from Facebook AI Research has developed a way to track exactly which images in a data set were used to train a machine-learning model. By making imperceptible tweaks to images, creating a kind of watermark, they were able to make tiny corresponding changes to the way an image classifier trained on those images works, without impairing its overall accuracy. This let them later match models up with the images that were used to train them.

Why it matters: Facebook calls the technique "radioactive data" because it is analogous to the use of radioactive markers in medicine, which show up in the body under X-ray. Highlighting what data has been used to train an AI makes models more transparent, flagging potential sources of bias (such as a model trained on an unrepresentative set of images) or revealing when a data set was used without permission or for inappropriate purposes.

Make no mistake: A big challenge was to change the images without breaking the resulting model. Tiny tweaks to an AI's input can sometimes lead it to make stupid mistakes, such as identifying a turtle as a gun or a sloth as a racecar. Facebook made sure to design its watermarks so that this did not happen. The team tested its technique on ImageNet, a widely used data set of more than 14 million images, and found that it could detect the use of radioactive data in a particular model with high confidence even when only 1% of the images had been marked.
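The core trick, an imperceptible additive mark that can later be detected statistically, can be illustrated without any deep learning. The sketch below (my own loose analogy, not Facebook's actual method, which plants the mark in the feature space a classifier learns rather than in raw pixels) marks images by adding a tiny fixed noise pattern and detects it by correlation:

```python
import random

random.seed(0)
N = 1024          # flattened image size
EPS = 0.01       # imperceptibly small amplitude

# The secret mark: a fixed random direction in pixel space.
mark = [random.gauss(0, 1) for _ in range(N)]

def add_mark(image):
    # "Radioactive" image: the original plus a tiny multiple of the mark.
    return [p + EPS * m for p, m in zip(image, mark)]

def mark_score(image):
    # Correlation with the secret mark: marked images score about
    # EPS * ||mark||^2, while unmarked images score near zero on average.
    return sum(p * m for p, m in zip(image, mark))

clean = [random.gauss(0, 1) for _ in range(N)]
marked = add_mark(clean)
print(mark_score(marked) - mark_score(clean))  # roughly EPS * 1024, clearly positive
```

Because natural images are essentially uncorrelated with a random secret direction, a consistently high score is statistical evidence that the marked data was present, which is the same kind of hypothesis test Facebook runs against a trained model's feature extractor.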


Award-Winning Artwork Uses AI-Generated Bird Song To Recreate The Dawn Chorus – Forbes

Bird sounds at the crack of dawn are not part of everyone's morning. If you live in a city, there's a good chance that bird populations have migrated elsewhere as they struggled to compete with urban lights and sounds. That inspired artist Alexandra Daisy Ginsberg to ask what would happen if birds disappeared altogether. How would the dawn chorus change?

Alexandra Daisy Ginsberg's Machine Auguries, 2019. Installation shot from the 24/7 exhibition at Somerset House, London, open 31 Oct 2019 to 23 Feb 2020. Commissioned by Somerset House and A/D/O by MINI. With additional support from Faculty and The Adonyeva Foundation.

Ginsberg's work Machine Auguries, which uses artificial intelligence to mimic bird sounds, was commissioned by Somerset House in London. It's currently part of an exhibit at FACT in Liverpool, which recently reopened after England's latest lockdown. The work was also one of this year's ten Science Breakthroughs of the Year at the annual Falling Walls event in Berlin. Being entirely online this year, the event didn't fully do Machine Auguries justice: it's an immersive work, meant to be experienced by walking inside of it.

"When you go into the installation, you're in the silvery blue light of the pre-dawn," says Ginsberg. "The first bird sings, and it's a real bird. Then you hear another bird sing from somewhere else in the room, also a real bird. And then you hear the first machines bouncing back."

This call-and-response between artificial bird sounds and the recorded sound of real birds mimics how birds communicate. But the way the generative adversarial network (GAN) learns to make bird sounds also resembles how real birds learn to sing: one bird sings and another bird copies it in response.

To create this collection of artificial bird sounds, Ginsberg worked with physicist Przemek Witaszczyk, sound recordist Chris Watson, and several others. One challenge they encountered was that they needed a collection of bird sounds with very little variation in ambient sound, to prevent the AI from interpreting background noise as bird song. Through the British Library they found a collection of dawn chorus sounds recorded every day for a year in the same location. They also used the website xeno-canto, where people share bird sounds from around the world, to get species-specific recordings. With these and other collections they had a wide variety of training data for the GAN.

The work includes artificial bird sounds taken at different points in the AI training process. "We wanted to show this evolution," says Ginsberg. "The beginning of the piece has the first attempts, which are very clunky, noisy and weird. But over the next ten minutes, the machine ones become increasingly lifelike."

With Machine Auguries, Ginsberg wants to encourage people to think about the birds around them and reflect. Do you normally hear a dawn chorus where you live? If not, why not? What's happening to the birds? And what would it be like if we lost this altogether?

Recently, Ginsberg moved from London to the countryside, and suddenly she hears birds, real birds, around her every day. She has also taken up gardening, but that's part of the preparation for her next large exhibit: Ginsberg has recently been commissioned to create a new permanent installation for the Eden Project.

The Eden Project consists of several biodomes and a large garden in a disused clay quarry in Cornwall, England. With a mission to educate people about plants and the environment, the organisation regularly works with artists to start conversations around conservation, and the site includes several permanent and temporary artworks.

But Ginsberg didn't want to build yet another physical structure. "Instead, I said, why are we making an artwork for humans? Let's make one for pollinators!"

The proposed project is a garden on a 45- by 30-meter outdoor plot, designed using an algorithm that creates the perfect garden for different types of pollinators, including bees, moths and butterflies.

Preparatory sketch suggesting what the final pollinator garden at the Eden Project might look like.

But it's not all for the bees: there's an interactive element for humans as well. The algorithm used to design the garden will be made available online, so that people can create their own unique gardens.

Ginsberg is looking forward to seeing the outcome of the interactive part of the project. "I hope that for people who aren't experienced gardeners, this might be a fun way to plant something that they might never have planted and be guided through it. And they can say, here's my artwork."

Like Machine Auguries, Ginsberg's upcoming work for the Eden Project combines technology with ecology to encourage visitors to reflect on our interactions with the natural world, whether that's waking up to bird song or creating a garden for bees.

Disclaimer: I was a recipient of a Falling Walls / Berlin Science Week journalism grant, and participation in this programme introduced me to Alexandra Daisy Ginsberg and her work.


Voices in AI Episode 103: A Conversation with Ben Goertzel – Gigaom

Today's leading minds talk AI with host Byron Reese

On Episode 103 of Voices in AI, Byron Reese discusses AI with Ben Goertzel of SingularityNET, diving into the concepts of a master algorithm and AGIs.

Listen to this episode or read the full transcript at http://www.VoicesinAI.com

Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today, my guest is Ben Goertzel. He is the CEO of SingularityNET, as well as the Chief Scientist over at Hanson Robotics. He holds a PhD in Mathematics from Temple University. And he's talking to us from Hong Kong, where he lives. Welcome to the show, Ben!

Ben Goertzel: Hey, thanks for having me. I'm looking forward to our discussion.

The first question I always throw at people is: what is intelligence? And interestingly, you have a definition of intelligence in your Wikipedia entry. That's a first, but why don't we just start with that: what is intelligence?

I actually spent a lot of time working on a mathematical formalization of a definition of intelligence early in my career, and came up with something fairly crude which, to be honest, at this stage I'm no longer as enthused about as I was before. But I do think that question opens up a lot of other interesting issues.

The way I came to think about intelligence early in my career was simply: achieving a broad variety of goals in a broad variety of environments. Or, as I put it, the ability to achieve complex goals in complex environments. This tied in with what I later distinguished as AGI versus narrow AI; I introduced the whole notion of AGI, and that term, in 2004 or so. That has to do with an AGI being able to achieve a variety of different or complex goals in a variety of different types of scenarios, unlike the narrow AIs we have all around us that basically do one type of thing in one kind of context.

I still think that is a very valuable way to look at things, but I've drifted more toward a systems theory perspective. I've been working with a guy named David (Weaver) Weinbaum, who did a piece recently at the Free University of Brussels on the concept of open-ended intelligence, which looks at intelligence more as a process of exploration and information creation in interaction with an environment. In this open-ended intelligence view, you're really looking at intelligent systems as complex, self-organizing systems, and the creation of goals to be pursued is part of what an intelligent system does, but isn't necessarily the crux of it.

So I would say understanding what intelligence is, is an ongoing pursuit. And I think that's okay. In biology, the goal isn't to define what life is in some once-and-for-all formal sense before you can do biology; in art, the goal isn't to define what beauty is before you can proceed. These are umbrella concepts which can then lead to a variety of different particular innovations and formalizations of what you do.

And yet I wonder, because you're right, biologists don't have a consensus definition for what life is, or even death for that matter. You wonder at some level if maybe there's no such thing as life; maybe it isn't really, and so maybe you say that's not really even a thing.

Well, this brings up one of my favorite quotes of all time, [from] former President Bill Clinton: "That all depends on what the meaning of 'is' is."

There you go. Well, let me ask you a question about goals, which you just brought up. When we're talking about machine intelligence or mechanical intelligence, let me ask point blank: is it a compass's goal to point north? Or does it just happen to point north? And if it isn't its goal to point north, what is the difference between what it does and what it wants to do?

The standard example used in systems theory is the thermostat. The thermostat's goal is to keep the temperature above a certain level and below a certain level, or in a certain range, and in that sense the thermostat does have, you know, a sensor, it has an actuation mechanism, and a very local control system connecting the two. So from the outside, it's pretty hard not to attribute a goal to a heating system like that, with a sensor, an actuator, and a decision-making process in between.

Again, the word "goal" is a natural language concept that can be used for a lot of different things. I guess some people have the idea that there are natural definitions of concepts that have profound and unique meaning. I sort of think that only exists in the mathematical domain, where you can say the definition of a real number is something natural and perfect because of the beautiful theorems you can prove around it; in the real world things are messy, and there is room for different flavors of a concept.

I think from the view of an outside observer, the thermostat is pursuing a certain goal, and the compass may be also, if you go down into the microphysics of it. On the other hand, an interesting point is that from its own point of view, the thermostat is not pursuing a goal: the thermostat lacks a deliberative, reflective model of itself as a goal-achieving agent. It is only to an outside observer that the thermostat is pursuing a goal.

Now for a human being, once you're beyond the age of six or nine months or something, you are pursuing your goal relative to the observer that is yourself. You're pursuing a goal that you have a sense of, and I think this gets at the crucial connection between reflection, meta-thinking and self-observation on the one hand, and general intelligence on the other, because it's the fact that we represent within ourselves the fact that we are pursuing some goals that allows us to change and adapt those goals as we grow and learn, in a broadly purposeful and meaningful way. If a thermostat breaks, it's not going to correct itself and go back to its original goal, right? It's just going to break, and it doesn't even make a halting and flawed attempt to understand what it's doing and why, like we humans do.
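Goertzel's thermostat example can be made concrete in a few lines. The loop below (a hypothetical sketch with arbitrary numbers, not anything from the interview) "pursues" the goal of holding temperature in a band from an outside observer's standpoint, yet contains no representation of that goal it could ever reflect on or repair:

```python
def thermostat_step(temp, heater_on, low=19.0, high=21.0):
    # Bang-bang control: turn the heater on below the band,
    # off above it, and leave it unchanged inside the band.
    if temp < low:
        return True
    if temp > high:
        return False
    return heater_on

def simulate(temp, hours=50):
    heater_on = False
    for _ in range(hours):
        heater_on = thermostat_step(temp, heater_on)
        temp += 0.8 if heater_on else -0.5   # crude room dynamics
    return temp

print(simulate(10.0))   # ends up inside or near the 19-21 band
```

The "goal" lives entirely in the two threshold constants; if the room dynamics changed so that this rule no longer regulated anything, the code would carry on unchanged, which is exactly the contrast with reflective, self-modeling agents drawn above.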

So we could say that something has a goal if there's some function it is systematically maximizing, in which case you can say of a heating system or a compass that it does have a goal. You could say that it has a purpose if it represents itself as a goal-maximizing system and can manipulate that representation somehow. That's a little bit different, and then we also get to the difference between narrow AIs and AGIs. AlphaGo has the goal of winning at Go, but it doesn't know that Go is a game. It doesn't know what winning is in any broad sense. So if you gave it a version of Go with, say, a hexagonal board and three different players, it doesn't have the basis to adapt its behaviors in this weird new context and figure out what the purpose of doing stuff there is, because it's not representing itself in relation to the Go game and the reward function the way a person playing Go does.

If I'm playing Go, I'm much worse than AlphaGo; I'm even worse than, say, my oldest son, who's a one-dan type of Go player. I'm way down in the hierarchy, but I know that it's a game of manipulating little stones on the board by analogy to human warfare. I know how to watch a game between two people, and that winning is done by counting stones, and so forth. So being able to conceptualize my goal as a Go player in the broader context of my interaction with the world is really helpful when things go crazy and the world changes and the original detailed goals didn't make any sense anymore, which has happened throughout my life as a human with astonishing regularity.


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.


Artificial Intelligence Is Creating New Art From The Work Of A Deceased Manga Great – Kotaku

Kotaku East is your slice of Asian internet culture, bringing you the latest talking points from Japan, Korea, China and beyond. Tune in every morning from 4am to 8am.

One of Japan's greatest manga creators, if not the greatest, is having his work tapped with the power of AI. Osamu Tezuka died in 1989, but next year, artificial intelligence will create new art based on Tezuka's work.

Tezuka is known for influential works like Astro Boy, Princess Knight, and Kimba the White Lion.

Toshiba's latest data project is called Kioxia, and using its high-speed, large-capacity memory, artificial intelligence will create new Tezuka art based on the immense digitized volumes of work the artist produced during his lifetime.

The project is supported by Tezuka Productions.

Clarification: The headline of this post was altered slightly for clarity, and the language regarding the use of AI was changed to reflect the fact that new art is being created based on Tezuka's work.


Everything we think we know about ‘Bixby’ Samsung’s AI assistant for the Galaxy S8 – The Verge

Samsung's next flagship, the Galaxy S8, is due to be revealed some time at the end of March, and it seems a new AI assistant will be a cornerstone of the company's pitch. Samsung executives confirmed last November that a fully fledged AI assistant on the S8 would differentiate the device from its competitors, but we don't know much more than that.

Voice commands have come back into vogue in the last year, after an uptick in interest in Amazon's Echo and the launch of Google's Assistant. But we've yet to see an assistant that really fulfills its promise on mobile. (Sorry, Siri, you're just too dumb.) Could Samsung be the one to pull it off? Here's what we know, and what's been rumored, so far:

If all these rumors are true, then Bixby (or Bix, or Kestra) will be a powerful assistant. The last rumor is especially interesting, as Samsung is in the unique position of being able to integrate its AI platform not only with a mobile device, but also with gadgets in your home. So far, though, AI assistants have promised more than they've delivered, and we'll have to wait and see what Bixby can really do when Samsung unveils the S8, reportedly at the end of March.


Workers Can Be Hired Back by Employers Using a Fully Automated AI Recruiter From Avrio – Business Wire

BOSTON--(BUSINESS WIRE)--Avrio now provides recruiters with the first fully automated end-to-end recruiting technology, leveraging AI to increase recruiter efficiency while eliminating non-strategic manual tasks. "To Avrio Eftase!" means "Tomorrow has Arrived!" in Greek, and with this version, the world of tomorrow is here today for employers. The whole process of getting top candidates takes minutes and hours, not days and weeks.

Now, employers have a fully scalable AI recruiter that requires no upfront integrations, no messy data replication and no ATS tracking codes or source-of-hire reports to reconcile. You post a job on the system and get hires. You pay for hires when you acquire a new employee.

The AI engine can find candidates via job postings and across resume databases, contact them directly, confirm both job skills and fit criteria, and find a time that works for them to connect with a human decision maker. Avrio is pre-integrated with Nexxt to publish jobs across 50 career sites with access to a diversified talent network of more than 75 million candidates from the Nexxt database. Machine learning and semantic matching are used to drive efficiency throughout the entire process for candidates and for employers.
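Avrio's matching and ranking technology is proprietary, but the basic idea of scoring resumes against a job's required terms can be sketched with a simple term-overlap ranker. All names and data below are hypothetical:

```python
from collections import Counter

def score_candidate(job_terms, resume_text):
    """Score a resume by how often the job's required terms appear in it."""
    words = Counter(resume_text.lower().split())
    return sum(words[term] for term in job_terms)

def rank_candidates(job_terms, resumes):
    """Return candidate names ordered by descending match score."""
    scores = {name: score_candidate(job_terms, text) for name, text in resumes.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical mini resume database.
resumes = {
    "alice": "python machine learning engineer with python experience",
    "bob": "retail sales manager with customer service experience",
}
ranking = rank_candidates(["python", "machine", "learning"], resumes)
```

Real semantic matching goes well beyond literal term overlap (synonyms, embeddings, skill taxonomies), but the ranking structure is the same: score each candidate, sort, surface the top of the list.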

With Avrio, customers can take advantage of the HR Tech industry's first and only risk-free business model to make their lives easier. Customers only pay for actual hires. There are no upfront user licenses, no lock-ins, no CPC, PPC, PPV or CPA to worry about. You pay to hire an employee.

Alex Knowles, Talent Manager at Copenhagen Capacity, an early Avrio customer, said, "Avrio is a proven solution, not just for employers but also for talent attraction agencies looking to create growth in their cities. As the official organisation for investment promotion and economic development in Greater Copenhagen, we are excited to use the new capabilities to further our mission to recruit top talent from all across the world to work in Denmark."

"Avrio has taken a very unique and innovative approach when it comes to AI for hiring," said Nikos Livadas, Vice President of Strategic Alliances at Nexxt. "We are very excited to be partnering with Avrio to help create a compelling recipe for talent acquisition leaders, enabling them to increase candidate engagement while also decreasing time to hire. With Nexxt's more than 27 million resumes and 75 million candidates fueling Avrio's conversational AI sourcing and ranking platform, we can't wait to see how this new offering exceeds customer expectations in today's challenging environment."

"Job applicants have long been stymied by the black hole of hiring processes," said Javid Muhammedali, Head of Product at Avrio AI. "A responsive, scalable AI recruiter that can review resumes, ask personalized questions, have a full conversation and answer candidates' questions is a compelling solution given all the uncertainty in the hiring process. HR leaders can have peace of mind in being able to scale up when called upon, and yet have a consistent process."

"We're excited to bring to market a revolutionary solution that accelerates hiring," said Nachi Junankar, CEO. "Recruiters and managers face a massive increase in applicants with a smaller team. At the same time, applicants are looking for employers to be more responsive. Avrio ensures that the right candidate gets to the front of the line and that both recruiters and applicants get the speed and effectiveness they deserve."

About Avrio AI Inc.

Avrio, a leader in AI for recruiting, helps employers and staffing firms match, engage and hire top talent. To see how AI is making breakthrough changes in recruiting, visit https://www.goavrio.com or book a demo at https://www.goavrio.com/chatbot

See the article here:

Workers Can Be Hired Back by Employers Using a Fully Automated AI Recruiter From Avrio - Business Wire

Reality check: The state of AI, bots, and smart assistants – InfoWorld

Artificial intelligence, in the guises of personal assistants, bots, self-driving cars, and machine learning, is hot again, dominating Silicon Valley conversations, tech media reports, and vendor trade shows.

AI is one of those technologies whose promise is resurrected periodically, but which only slowly advances into the real world. I remember the dog-and-pony AI shows at IBM, MIT, Carnegie-Mellon, Thinking Machines, and the like in the mid-1980s, as well as the technohippie proponents like Jaron Lanier who often graced the covers of the era's gee-whiz magazines like Omni.

AI is an area where much of the science is well established, but the implementation is still quite immature. It's not that the emperor has no clothes; rather, the emperor is only now wearing underwear. There's a lot more dressing to be done.

Thus, take all these intelligent machine/software promises with a big grain of salt. We're decades away from a Star Trek-style conversational computer, much less the artificial intelligence of Steven Spielberg's A.I.

Still, theres a lot happening in general AI. Smart developers and companies will focus on the specific areas that have real current potential and leave the rest to sci-fi writers and the gee-whiz press.

For years, popular fiction has fused robots with artificial intelligence, from Gort of The Day the Earth Stood Still to the Cylons of Battlestar Galactica, from the pseudo-human robots of Isaac Asimov's I, Robot novel to Data of Star Trek: The Next Generation. However, robots are not silicon intelligences but machines that can perform mechanical tasks formerly handled by people, often more reliably, faster, and without demands for a living wage or benefits.

Robots are common in manufacturing and increasingly used in hospitals for delivery and drug fulfillment (since they won't steal drugs for personal use), but not so much in office buildings and homes.

There've been incredible advances lately in the field of bionics, largely driven by war veterans who've lost limbs in the several wars of the last two decades. We now see limbs that can respond to neural impulses and brain waves as if they were natural appendages, and it's clear they soon won't need all those wires and external computers to work.

Maybe one day we'll fuse AI with robots and end up slaves to the Cylons, or worse. But not for a very long while. In the meantime, some advances in AI will help robots work better, because their software can become more sophisticated.

Most of what is now positioned as the base of AI (product recommendations at Amazon, content recommendations at Facebook, voice recognition by Apple's Siri, driving suggestions from Google Maps, and so on) is simply pattern matching.

Thanks to the ongoing advances in data storage and computational capacity, boosted by cloud computing, more patterns can be stored, identified, and acted on than ever before. Much of what people do is based on pattern matching: to solve an issue, you first try to figure out how it resembles something you already know, then try the solutions you already know. The faster the pattern matching to the likeliest actions or outcomes, the more intelligent the system seems.
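The pattern-matching style of "AI" the author describes can be sketched in a few lines: count which items co-occur, then recommend whatever co-occurred most with what the user just looked at. The product names are made up:

```python
from collections import Counter
from itertools import combinations

# Toy browsing sessions: each set holds items viewed together.
sessions = [
    {"camera", "tripod"},
    {"camera", "memory card"},
    {"camera", "tripod", "bag"},
    {"laptop", "mouse"},
]

# Count how often each ordered pair of items co-occurs across sessions.
co_occurrence = Counter()
for session in sessions:
    for a, b in combinations(sorted(session), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(item, k=2):
    """Items most often seen alongside `item`: pure pattern matching."""
    paired = {b: n for (a, b), n in co_occurrence.items() if a == item}
    return sorted(paired, key=paired.get, reverse=True)[:k]
```

This is the whole trick: no understanding, just stored patterns and a sort. More data and faster lookups make it seem smarter, which is exactly the article's point.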

But we're still in early days. There are some cases, such as navigation, where systems have become very good, to the point where (some) people will now drive onto an airport tarmac, into a lake, or onto a snowed-in country road because their GPS told them to, despite all the contrary signals the drivers themselves can see.

But mostly, these systems are dumb. That's why when you go to Amazon and look at products, many websites you visit feature those products in their ads. That's especially silly if you already bought the product or decided not to, but all these systems know is that you looked at product X, so they'll keep showing you more of the same. That's anything but intelligent. And it's not only Amazon product ads; Apple's Genius music-matching feature and Google's Now recommendations are similarly clueless about the context, so they lead you into a sea of sameness very quickly.

They can actually work against you, as Apple's autocorrection now does. It epitomizes a failure of crowdsourcing, where people's bad grammar, lack of clarity on how to form plurals or use apostrophes, inconsistent capitalization, and typos are imposed on everyone else. (I've found that turning it off can result in fewer errors, even for horrible typists like myself.)

Missing is the nuance of more context, such as knowing what you bought or rejected, so you don't get advertisements for more of the same but instead for another item you may be more interested in. Ditto with music: if your playlist is varied, so should be the recommendations. And ditto with, say, the restaurant recommendations that Google Now makes: I like Indian food, but I don't want it every time I go out. What else do I like and have not had lately? And what about the patterns and preferences of the people I'm dining with?
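The missing context the author asks for (what you bought, rejected, or had recently) is, at its simplest, just a filter applied before the pattern-matched suggestions go out. A minimal sketch, with a hypothetical dining example:

```python
def contextual_recommend(candidates, bought, rejected, recent):
    """Drop options the user already bought, explicitly rejected,
    or has had recently: the contextual nuance described above."""
    seen = set(bought) | set(rejected) | set(recent)
    return [c for c in candidates if c not in seen]

# Hypothetical dining example: Indian was last night's choice, so vary it.
options = ["indian", "thai", "ethiopian", "mexican"]
suggestions = contextual_recommend(
    options, bought=[], rejected=["mexican"], recent=["indian"])
```

Even this trivial filter would fix the "sea of sameness" complaint for already-purchased products; the harder part is gathering and maintaining those context signals in the first place.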

Autocorrect is another example of where context is needed. First, someone should tell Apple the difference between its and it's, as well as explain that there are legitimate, correct variations in English that people should be allowed to specify. For example, prefixes can be made part of a word (like preconfigured) or hyphenated (like pre-configured), and users should be allowed to specify that preference. (Putting a space after them is always wrong, such as pre configured, yet that's what Apple autocorrect imposes unless you hyphenate.)

Don't expect bots (automated software assistants that do stuff for you based on all the data they've monitored) to be useful for anything but the simplest tasks until problem domains like autocorrection work. They are, in fact, the same kinds of problems.

Pattern matching, even with rich context, is not enough, because it must be predefined. That's where pattern identification comes in, meaning that the software detects new patterns or changed patterns by monitoring your activities.

That's not easy, because something has to define the parameters for the rules that undergird such systems. It's easy either to try to boil the ocean and end up with an undifferentiated mess, or to be too narrow and end up not being useful in the real world.
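One minimal form of pattern identification (detecting that a monitored behavior has changed, rather than matching a predefined rule) is a running-statistics check. The threshold here is an arbitrary illustrative choice, not any vendor's method:

```python
import statistics

def detect_shift(history, new_value, threshold=3.0):
    """Flag a new observation that deviates from the established pattern
    by more than `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > threshold

# A user's typical daily activity count, then a sudden change in behavior.
history = [10, 11, 9, 10, 12, 10]
```

The ocean-boiling versus too-narrow tension shows up directly in `threshold`: set it too low and everything looks like a new pattern, too high and real changes are missed.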

This identification effort is a big part of what machine learning is today, whether it's to get you to click more ads or buy more products, better diagnose failures in photocopiers and aircraft engines, reroute delivery trucks based on weather and traffic, or respond to dangers while driving (the collision-avoidance technology soon to be standard in U.S. cars).

Because machine learning is so hard, especially outside highly defined, engineered domains, you should expect slow progress, where systems get better but you don't notice it for a while.

Voice recognition is a great example: the first systems (for phone-based help lines) were horrible, but now we have Siri, Google Now, Alexa, and Cortana, which are pretty good for many people for many phrases. They're still error-prone (bad at complex phrasing and niche domains, and at many accents and pronunciation patterns) but usable in enough contexts where they can be helpful. Some people actually can use them as if they were a human transcriber.

But the messier the context, the harder it is for machines to learn, because their models are incomplete or are too warped by the world in which they function. Self-driving cars are a good example: a car may learn to drive based on patterns and signals from the road and other cars, but outside forces like weather, pedestrian and cyclist behaviors, double-parked cars, construction adjustments, and so on will confound much of that learning, and will be hard to pick up, given their idiosyncrasies and variability. Is it possible to overcome all that? Yes: the crash-avoidance technology coming into wider use is clearly a step toward the self-driving future, but not at the pace the blogosphere seems to think.

For many years, IT has been sold the concept of predictive analytics, which has had other guises such as operational business intelligence. It's a great concept, but it requires pattern matching, machine learning, and insight. Insight is what lets people take the mental leap into a new area.

For predictive analytics, that doesn't go so far as out-of-the-box thinking, but it does mean identifying and accepting unusual patterns and outcomes. That's hard, because pattern-based intelligence (from what search result to display, to what route to take, to what moves to make in chess) is based on the assumption that the majority patterns and paths are the best ones. Otherwise, people wouldn't use them so much.

Most assistive systems use current conditions to steer you to a proven path. Predictive systems combine current and derivable future conditions using all sorts of probabilistic mathematics. But those are the easy predictions. The ones that really matter are the ones that are hard to see, usually for a handful of reasons: the context is too complex for most people to get their heads around, or the calculated path is an outlier and thus rejected as such, by the algorithm or the user.
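The "combine current and derivable future conditions" step is, at its core, a probability-weighted expectation. A toy sketch with a hypothetical commute forecast:

```python
def expected_delay(scenarios):
    """Probability-weighted delay over forecast future conditions."""
    total_p = sum(p for p, _ in scenarios)
    assert abs(total_p - 1.0) < 1e-9, "scenario probabilities must sum to 1"
    return sum(p * delay for p, delay in scenarios)

# Hypothetical commute forecast as (probability, minutes of delay):
# clear roads, moderate traffic, and a rare but costly incident.
scenarios = [(0.6, 0), (0.3, 10), (0.1, 45)]
forecast = expected_delay(scenarios)
```

Note that the rare incident contributes 4.5 of the 7.5 expected minutes: the low-probability outlier dominates the forecast, which is exactly the kind of result an algorithm or user is tempted to reject as noise.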

As you can see, there's a lot to be done, so take the gee-whiz future we see in the popular press and at technology conferences with a big grain of salt. The future will come, but slowly and unevenly.

Read more here:

Reality check: The state of AI, bots, and smart assistants - InfoWorld

The Army AI task force takes on two ‘key’ projects – C4ISRNet

The Army's artificial intelligence task force is working on two key projects, including one that would allow unmanned vehicles in the air to communicate with autonomous vehicles on the ground, after securing new funding, a service official said June 10.

Gen. Mike Murray, commander of Army Futures Command, said during a June 10 webinar hosted by the Association of the United States Army that the task force has moved forward on the projects through its partnership with Carnegie Mellon University, launched in late 2018.

First, the team is working on programs dedicated to unmanned-unmanned teaming, or developing the ability of air and ground unmanned vehicles to talk to one another.

The other effort underway is on a DevSecOps environment to develop future algorithms to work with other Army systems, Murray said. He did not offer further detail.

The task force has fewer than 15 people, Murray said, and fiscal 2021 will be the first year that it receives appropriated funds from Congress. Much of the work the task force has done so far has been building the team.

In response to an audience question, Murray said that the task force is not yet working on defending against adversarial machine learning, but added that leaders recognize that's an area the team will need to focus on.

"We're going to have to work on how do we defend our algorithms and really, how do we defend our training data that we're using for our algorithms," Murray said.

In order to train effective artificial intelligence, the team needs significant amounts of data. One of the first projects for the task force was collecting data to develop advanced target recognition capabilities: for example, Murray said, being able to identify different types of combat vehicles. When the work started, the training data for target recognition didn't exist.

"If you're training an algorithm to recognize cats, you can get on the internet and pull up hundreds of thousands of pictures of cats," Murray said. "You can't do that for a T-72 [a Russian tank]. You can get a bunch of pictures, but are they at the right angles, lighting conditions, vehicle sitting camouflaged to vehicle sitting in open desert?"

Murray also said he recognizes the Army needs to train more soldiers in data science and artificial intelligence. He told reporters in late May that the Army and CMU have created a master's program in data science that will begin in the fall. He also described a software factory, a six- to 12-week course to teach soldiers basic software skills. The factory will be based in Austin, where Futures Command is located, and will work with the city's local tech industry.

"We have got to get this talent identified. I'm convinced we have it in our formations," Murray said.

Go here to see the original:

The Army AI task force takes on two 'key' projects - C4ISRNet

AMP Robotics Named to Forbes AI 50 – Business Wire

DENVER--(BUSINESS WIRE)--Forbes has named AMP Robotics Corp. (AMP), a pioneer and leader in artificial intelligence (AI) and robotics for the recycling industry, one of America's most promising AI companies. The publication's annual AI 50 list distinguishes private, U.S.-based companies that are wielding some subset of artificial intelligence in a meaningful way and demonstrating real business potential from doing so. To be included on the list, companies needed to show that techniques like machine learning, natural language processing, or computer vision are a core part of their business model and future success.

"Earlier this year, we notched a milestone of one billion picks over 12 months that demonstrates the productivity, precision, and reliability of our AI application for the recycling industry. It's an honor to be deemed one of the country's most promising AI companies, and we're just getting started," said Matanya Horowitz, AMP founder and chief executive officer. "There's growing appreciation for the role of recycling in the domestic supply chain, in terms of keeping resources flowing and products on shelves, and resultant momentum around supportive policy initiatives that are putting some real wind in the sail for the industry. We're pleased to play a role in enabling better efficiency, safety, and transparency to help transform recycling."

AMP's technology recovers plastics, cardboard, paper, metals, cartons, cups, and many other recyclables that are reclaimed for raw material processing. AMP's AI platform uses computer vision to visually identify different types of materials with high accuracy, then guides high-speed robots to pick out and recover recyclables at superhuman speeds for extended periods of time. The AI platform transforms images into data to recognize patterns, using machine learning to train itself by processing millions of material images within an ever-expanding neural network of robotic installations.

"We consider AMP a category-defining business and believe its artificial intelligence and robotics technology are poised to solve many of the central challenges of recycling," said Shaun Maguire, partner at Sequoia Capital and AMP board member. "The opportunity for modernization in the industry is robust as the demand for recycled materials continues to swell, from consumers and the growing circular economy."

AMP's AI 50 recognition comes on the heels of receiving a 2020 RBR50 Innovation Award from Robotics Business Review for the company's Cortex Dual-Robot System. Earlier this year, Fast Company named AMP to its World's Most Innovative Companies list for 2020, and the company captured a Rising Star Company of the Year Award in the 2020 Global Cleantech 100.

Since its Series A fundraising in November, AMP has been on a major growth trajectory as it scales its business to meet demand. The company announced a 50% increase in revenue in the first quarter of 2020, a rapidly growing project pipeline, a facility expansion in its Colorado headquarters, and a new lease program that makes its AI and robotics technology even more attainable for recycling businesses.

About AMP Robotics Corp.

AMP Robotics is applying AI and robotics to help modernize recycling, enabling a world without waste. The AMP Cortex high-speed robotics system automates the identification and sorting of recyclables from mixed material streams. The AMP Neuron AI platform continuously trains itself by recognizing different colors, textures, shapes, sizes, patterns, and even brand labels to identify materials and their recyclability. Neuron then guides robots to pick and place the material to be recycled. Designed to run 24/7, all of this happens at superhuman speed with extremely high accuracy. With deployments across the United States, Canada, Japan, and now expanding into Europe, AMP's technology recycles municipal waste, e-waste, and construction and demolition debris. Headquartered and with manufacturing operations in Colorado, AMP is backed by Sequoia Capital, Closed Loop Partners, Congruent Ventures, and Sidewalk Infrastructure Partners (SIP), an Alphabet Inc. (NASDAQ: GOOGL) company.

View original post here:

AMP Robotics Named to Forbes AI 50 - Business Wire

AI spending and adoption on the rise, says IDC survey – Technology Decisions

Artificial intelligence (AI) spending and adoption within business is on the rise, according to an IDC survey of more than 2000 IT and line-of-business (LoB) decision-makers.

Delivering a better customer experience was the leading driver of AI implementation with threat analysis, process automation, supply and logistics, and human resources among the most popular use-case categories.

Ritu Jyoti, IDC Program Vice President, Artificial Intelligence Strategies, said the technology was supporting executives in their quest to achieve a host of business objectives.

"Organisations worldwide are adopting AI in their business transformation journey, not just because they can but because they must, to be agile, resilient, innovative and able to scale," she said.

Early adopters report an improvement of almost 25% in customer experience, accelerated rates of innovation, higher competitiveness, higher margins and better employee experience with the rollout of AI solutions.

Despite the overall optimism, a lack of quality training data and high costs when procuring from certain vendors remained a challenge in terms of further scaling the technology.

Image credit: stock.adobe.com/au/Tierney

Go here to read the rest:

AI spending and adoption on the rise, says IDC survey - Technology Decisions

Salesforce’s Einstein: One smart way to upsell AI – ZDNet

Salesforce has built its Einstein artificial intelligence platform into its products, and the upshot is that the move appears to be a smart way to grow organically and hit its revenue growth targets.

The company hosted a strategy powwow for the fiscal year ahead and rolled out its Spring '17 release, which integrates Einstein throughout its platform. Sales Cloud, Service Cloud, Marketing Cloud, Commerce Cloud, Analytics Cloud and Community Cloud will all get Einstein integration and add-on features.

Meanwhile, Salesforce and IBM announced a partnership that will integrate Einstein with Watson. In other words, the two most marketing heavy artificial intelligence brands have teamed up. Watson has become a marketing juggernaut for IBM's future. It's safe to say that Salesforce will push Einstein on name recognition alone. Who doesn't want the Einstein name on its cloud?

Salesforce and IBM will integrate APIs and use IBM's BlueWolf consulting unit to combine Watson and Einstein. IBM will use Salesforce's Service Cloud. See: IBM, Salesforce announce AI partnership | Salesforce Einstein: Here's what's real and what's coming.

CEO Marc Benioff trotted out Amazon Web Services, a key customer for Einstein lead scoring and Salesforce. The two companies are partners. AWS marketing chief Ariel Kelman said it's early days, but the company plans to roll out Einstein lead scoring and other tools throughout its sales processes.

Benioff, who called Einstein "a new member on the management team," noted Einstein will evolve through multiple clouds. Salesforce also teased out the Summer release of its platform too. In a slide it's clear that there will be a lot more of Einstein.

Now it's clear that artificial intelligence will be critical, but it didn't take long for analysts to start pondering the financial ramifications. Macquarie Capital analyst Sarah Hindlian said in a research note:

Salesforce outlined Einstein customers such as Coca-Cola, AWS, Seagate, U.S. Bank and Air France-KLM.

Lead scoring and processing may be strong enough to get customers to add Einstein to the mix. In the Salesforce pricing list, Einstein is typically denoted with a "$" to indicate an additional charge.

Here are a few screens that will tell the Einstein upsell tale.

Now what remains to be seen is how quickly customers take up the Einstein add-ons, but it's likely more than a few will, because enterprises aren't going to have the AI know-how or talent base on hand. Wall Street expects Salesforce to deliver $10.18 billion in revenue in fiscal 2018, up from $8.39 billion for the just-closed year. By fiscal 2021, Salesforce is expected to have revenue of $16.68 billion.

Excerpt from:

Salesforce's Einstein: One smart way to upsell AI - ZDNet

AI can speed up the search for new treatments here’s how – World Economic Forum

The sudden appearance and rapid spread of COVID-19 took governments and society by surprise. As they dusted off pandemic response plans and geared up to fight the virus, it became clear that we needed to turbo-charge R&D efforts and find better ways to hunt down promising treatments for emerging diseases.

Artificial intelligence (AI) has proven a powerful tool in this fight.

In a pandemic, speed is of the essence. Although scientists managed to sequence the genetic code of the new coronavirus and produce diagnostic tests in record time, developing drugs and vaccines against the virus remains a long haul.

AI has the power to accelerate the process by reasoning across all available biomedical data and information in a systematic search for existing approved medicines: a vital step in helping patients while the world waits for a vaccine.

Machines excel in handling data in fast-changing circumstances, which means machine learning systems can be harnessed to work as tireless and unbiased super-researchers.

This is not just theory. In late January, using its proprietary platform of AI models and algorithms to search through the scientific literature, researchers at BenevolentAI in London identified an established, once-daily arthritis pill as a potential treatment for COVID-19. The findings were published in two papers in The Lancet and The Lancet Infectious Diseases, in line with our commitment under the Wellcome Trust pledge to share our coronavirus-related research rapidly and openly.

BenevolentAI's COVID-19 timeline

Image: BenevolentAI

The discovery followed a computer-driven hunt for drug candidates with both antiviral and anti-inflammatory properties, since in severe cases of COVID-19 it is the body's overactive immune response that can cause significant and sometimes fatal damage.

The drug, baricitinib, is currently marketed by Eli Lilly to treat rheumatoid arthritis. Now, thanks to AI, it is being tested against COVID-19 in a major randomised-controlled trial in collaboration with the U.S. National Institute for Allergies and Infectious Diseases (NIAID) in combination with remdesivir, an antiviral drug from Gilead Sciences that recently won emergency-use approval for COVID-19. Eli Lilly has now commenced its own independent trial of baricitinib as a therapy for COVID-19 in South America, Europe and Asia.

The BenevolentAI knowledge graph found that baricitinib might help treat COVID-19.

Image: BenevolentAI

The system used to identify baricitinib was not actually set up to find new uses of existing medicines, but rather to discover and develop new drugs: a sign of the potential for AI to uncover novel insights and relationships across an unlimited number of biological entities. In a crisis like COVID-19, it clearly makes sense to hunt through already-approved drugs that can be ready for large-scale clinical trials until vaccines are approved and readily available in the global supply chain.

BenevolentAI's vision is to dramatically improve pharmaceutical R&D productivity across the board and to expand the drug discovery universe by making predictions in novel areas of biology. Currently, around half of late-stage clinical trials fail due to ineffective drug targets, resulting in only 15% of drugs advancing from mid-stage Phase 2 testing to approval.

Using a knowledge graph composed of chemical, biological and medical research and information, the company's AI machine learning models and algorithms can identify potential drug leads currently unknown to medical science, and far faster than humans can. While such systems will never replace scientists and clinicians, they can save both time and money. And the agnostic approach adopted by machine learning means such platforms can generate leads that may have been overlooked by traditional research.
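BenevolentAI's platform is proprietary, but the knowledge-graph idea can be illustrated with a toy graph and a path search. The edges below are a drastically simplified sketch of the published baricitinib reasoning (the drug inhibits AAK1, a regulator of the endocytosis many viruses use to enter cells), not the company's actual data:

```python
# Toy knowledge graph: edges link a drug, protein targets, and disease processes.
edges = {
    "baricitinib": ["JAK1", "AAK1"],
    "JAK1": ["inflammation"],
    "AAK1": ["viral endocytosis"],
    "viral endocytosis": ["COVID-19 infection"],
    "inflammation": ["COVID-19 severity"],
}

def find_paths(graph, start, goal, path=None):
    """Depth-first search for every acyclic path from start to goal."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    found = []
    for nxt in graph.get(start, []):
        if nxt not in path:
            found += find_paths(graph, nxt, goal, path)
    return found

routes = find_paths(edges, "baricitinib", "COVID-19 infection")
```

A real system scores millions of such paths using learned models rather than enumerating them exhaustively, but the underlying question is the same: which chains of biological relationships connect an existing drug to the disease?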

The endeavour has already led to in-house projects on amyotrophic lateral sclerosis (ALS), ulcerative colitis and atopic dermatitis, and programmes with partners on progressive kidney and lung diseases, as well as hard-to-treat cancers like glioblastoma.

The ability of machines to solve complex biological puzzles more rapidly than human experts has prompted increased investment in AI drug discovery by a growing number of large pharmaceutical companies.

And AI is also being harnessed in other areas of medicine, such as the analysis of medical images. This encompasses long-standing work on cancer scans and much more recent efforts to use computer power to identify COVID-19 from chest X-rays, including the open-access COVID-Net neural network.

The application of precision medicine to save and improve lives relies on good-quality, easily accessible data on everything from our DNA to lifestyle and environmental factors. The opposite of a one-size-fits-all healthcare system, it has vast, untapped potential to transform the treatment and prediction of rare diseases, and of disease in general.

But there is no global governance framework for such data and no common data portal. This is a problem that contributes to the premature deaths of hundreds of millions of rare-disease patients worldwide.

The World Economic Forum's Breaking Barriers to Health Data Governance initiative is focused on creating, testing and growing a framework to support effective and responsible access across borders to sensitive health data for the treatment and diagnosis of rare diseases.

The data will be shared via a federated data system: a decentralized approach that allows different institutions to access each others data without that data ever leaving the organization it originated from. This is done via an application programming interface and strikes a balance between simply pooling data (posing security concerns) and limiting access completely.
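
The federated pattern described above can be sketched in a few lines. This is a toy illustration, not the initiative's actual API: the institution names are real participants, but the record format, the `count_matching` query and the coordinator function are all hypothetical.

```python
# Minimal sketch of federated data access: each site answers aggregate
# queries locally, so raw patient records never leave the organization
# they originated from. Only aggregates cross the API boundary.

class InstitutionNode:
    """One participating site holding local records."""
    def __init__(self, name, records):
        self.name = name
        self._records = records  # raw data stays private to this node

    def count_matching(self, gene_variant):
        # Only the aggregate count is returned, never the records.
        return sum(1 for r in self._records if gene_variant in r["variants"])

def federated_variant_count(nodes, gene_variant):
    """Coordinator sums per-site counts; it never sees individual records."""
    return {node.name: node.count_matching(gene_variant) for node in nodes}

nodes = [
    InstitutionNode("Genomics England", [{"variants": {"BRCA1", "TP53"}},
                                         {"variants": {"CFTR"}}]),
    InstitutionNode("Genomics4RD", [{"variants": {"BRCA1"}}]),
]
print(federated_variant_count(nodes, "BRCA1"))
# {'Genomics England': 1, 'Genomics4RD': 1}
```

The design choice mirrors the trade-off the article describes: the coordinator gets useful answers across borders without the security exposure of pooling raw data in one place.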

The project is a collaboration between entities in the UK (Genomics England), Australia (Australian Genomics Health Alliance), Canada (Genomics4RD), and the US (Intermountain Healthcare).

Clearly, COVID-19 has been a wake-up call for the world. It seems this outbreak may be part of an increasingly frequent pattern of epidemics, fuelled by our hyper-connected modern world. As a result, medical experts are braced for more previously unknown "Disease X" threats in the years ahead as viruses jump from animals to humans and jet around the world.

Technology has helped create a world in which pathogens like COVID-19, SARS and Zika can spread. But technology, in the form of AI, can also provide us with the weapons to fight back.

Read this article:

AI can speed up the search for new treatments here's how - World Economic Forum

The lethal combination of AI and IoT – Fortune India

The Internet of Things has become a ubiquitous buzzword today. Backed by the added horsepower of artificial intelligence, we are living in an age of intelligent and interconnected systems with enormous predictive capabilities. From the smart devices that surround us at home to the fitness wearables we wear on our daily runs, the convergence of AI and IoT is gaining velocity. Especially for consumer brands, AIoT is the key to understanding the evolving aspirations of customers, pushing the boundaries of innovation, and unlocking a world of opportunities and possibilities.

Some emerging trends that we see taking shape are:

Healthy and safe living, at your fingertips

It is safe to predict that, even when the pandemic subsides, social distancing will remain deeply embedded in our consciousness owing to its drastic impact. As a result, brands will increasingly introduce AIoT solutions that enable consumers to maintain social hygiene and well-being. We are looking at a world of products that will guide us toward a holistic lifestyle: from smart sensors in homes that monitor air pollutants and signal an alarming rise to occupants, to fitness trackers that alert you to maintain a safe distance from your fellow joggers at all times.

A new definition of connected living

There is always a risk of exposure to the virus while operating physical objects. With contactless becoming a way of life, electronics brands will have to shift gears to meet emerging needs. Imagine that as you step into your home, smart lights and smart plugs switch on immediately, thanks to the voice command you send through Alexa. Automated heating sensors fitted in your car or home can adjust the temperature to the surrounding environment, without you having to regulate it manually.

Television dons the role of a smart hub

Smart TVs, since their introduction in the early 2000s, have lent a new patina of ease and convenience to everyday life. Not only has the genre found popularity with users browsing social media platforms, streaming music, listening to podcasts and video calling en masse, but advanced tech features such as 4K UHD streaming have also found favour. Soon, we will see smart TVs becoming new smart hubs, through which consumers can operate and control multiple smart AI and IoT devices.

Mapping the digitally savvy shopper

The rules of engagement in retail are changing. As we see a surge of omnichannel shopping in a post-pandemic future, brands will find innovative ways to attract consumers and retain lasting loyalty. With intelligent solutions and AI-driven data analysis, brands will be able to understand consumption patterns, lifestyle goals and immediate requirements, and cater to them at a more personalised level, bringing products and services to the consumer's doorstep with great agility. Retailers are already using RFID and beacon technologies to offer highly curated, personalised experiences for shoppers in-store.

With the hitherto accepted definition of the new normal still changing, brands need to make targeted innovation a linchpin of their consumer outreach strategy. The lethal combination of artificial intelligence and IoT will help garner vast pools of data at a granular level, with which brands can get a 360-degree, deep-dive view of the consumers of the future.

Views are personal. The author is CEO, realme India, and vice president, realme.

Read more here:

The lethal combination of AI and IoT - Fortune India

Search Earth with AI eyes via a powerful new satellite image tool … – CNET

A GeoVisual search for baseball stadiums in the lower 48.

Want to know where all the wind and solar power supplies in the US are for some brilliant renewable-energy project? Or plot a round-the-world trip hitting every major soccer stadium along the way? It should be possible with a new tool that lets anyone scan the globe through AI "eyes" to instantly find satellite images of matching objects.

Descartes Labs, a New Mexico startup that provides AI-driven analysis of satellite images to governments, academics and industry, on Tuesday released a public demo of its GeoVisual Search, a new type of search engine that combines satellite images of Earth with machine learning on a massive scale.

The idea behind GeoVisual is pretty simple. Pick an object anywhere on Earth that can be seen from space, and the system returns a list of similar-looking objects and their locations on the planet. It's cool to play with, which you can do at the Descartes site here. A short search for wind turbines had me dreaming of a family road trip where every pit stop was sure to include kite-flying for the kids.

Perhaps this sounds just like Google Earth to you, but keep in mind that that tool just lets you look up countless geotagged locations around the world. GeoVisual Search actually compares all the pixels making up huge photos of the world to find matching objects as best it can, an ability that hasn't been available to the public before on a global scale.
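
The core idea, reducing every map tile to a feature vector and ranking tiles by vector similarity, can be sketched as follows. This is a rough illustration of visual similarity search in general, not Descartes Labs' actual pipeline; the random vectors below stand in for embeddings that a real system would compute with a neural network over satellite imagery.

```python
# Sketch of pixel-content search: tiles become embedding vectors, and a
# query returns the tiles whose vectors are closest by cosine similarity.
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar_tiles(query_vec, tile_vecs, top_k=3):
    """Rank tiles by cosine similarity to the query tile's embedding."""
    scores = [(tid, cosine_similarity(query_vec, v))
              for tid, v in tile_vecs.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

rng = np.random.default_rng(0)
# Hypothetical 64-dimensional embeddings for 100 tiles.
tiles = {f"tile_{i}": rng.normal(size=64) for i in range(100)}
# A query scene that is a near-duplicate of tile_7 (tiny noise added).
query = tiles["tile_7"] + rng.normal(scale=0.01, size=64)
top = most_similar_tiles(query, tiles)
print(top[0][0])  # tile_7 ranks first
```

Because the search runs over vectors rather than raw pixels, the same index can answer "find things that look like this" for wind turbines, stadiums or orchards without ever being told what those objects are, which matches Johnson's description below.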

Mark Johnson, Descartes Labs CEO

Fun as it is, the tool also gives the public a taste of Descartes' broader work, which so far has focused largely on agricultural datasets that can do things like analyze crop yields.

"The goal of this launch is to show people what's possible with machine learning. Our aim is to use this data to model complex planetary systems, and this is just the first step," CEO and co-founder Mark Johnson said via email. "We want businesses to think about how new kinds of data will help to improve their work. And I'd like everyone to think about how we can improve our life on this planet if we better understood it."

The tool's not perfect. I tried searching for objects that look similar to a large coal mine and power plant here in northern New Mexico and ended up with a list of mostly similar-looking lakes and bridges. Searching for locations similar to the launch pads at Cape Canaveral returned an odd assortment of landscapes that seemed to have nothing in common besides a passing resemblance to concrete surfaces.

The algorithm can easily mistake a whole lot of coal for a whole lot of water.

"Though this is a demo, GeoVisual Search operates on top of an intelligent machine-learning platform that can be trained and will improve over time," Johnson said. "We've never taught the computer what a wind turbine is, it just determines what's unique about that image (i.e., the fact there is a wind turbine there) and automatically recognizes visually similar scenes."

Right now the demo relies on three different imagery sources that include more than 4 petabytes of data altogether. You can search in the most detail using the National Agriculture Imagery Program (NAIP) data for the lower 48 United States because it has the highest resolution of one meter per pixel, making it possible to spot orchards, solar farms and turbines, among other objects.

Four-meter imagery is available for China, making it possible to recognize slightly larger things like stadiums. For the rest of the world, Descartes uses 15-meter-resolution images from Landsat 8 that are coarser but still allow for identification of larger-scale objects like pivot irrigation and suburbs.

"As a next step, we certainly want to start to understand specific objects and count them accurately through time," Johnson said. "At that point, we'll have turned satellite imagery into a searchable database, which opens up a whole new interface for dealing with planetary data."

Descartes was spun out of Los Alamos National Lab (LANL) and co-founded by Steven Brumby, who spent over a decade working in information sciences for the lab. Near the start of his time at LANL, a massive wildfire nearly destroyed the lab and Brumby's home. More importantly, it sparked Brumby's interest in developing machine-learning tools to map the world's fires.

"At that time when we did the analysis (of satellite images of the fire's aftermath) it was pretty clear the fire had been catastrophic, but there was a lot of fuel left," Brumby told me when I visited Descartes' offices in Los Alamos last year.

When some of that remaining fuel burned in another big Los Alamos wildfire in 2011, Brumby says he was able to help out. During his time at LANL he was often called on for imagery analysis when disaster struck, from 9/11 to Hurricane Katrina and the breakup of the Space Shuttle Columbia. All those years of insight led to another Descartes project to analyze satellite imagery to better understand and perhaps even predict wildfires around the globe.

"You can use satellite imagery to warn you of stuff that's coming down the road and if you listen to it, you can be prepared for it," Brumby said.

Before and after the 2000 Cerro Grande fire, with the burn scar shown in bright red.

Brumby and Johnson spent the better part of an afternoon laying out the short- and long-term vision for Descartes Labs when I visited. In the short term, the company has been working in agriculture to better monitor crops, feed lots and other data sources.

"One of the things we're building with our current system is a continuously updating living map of the world, which is the platform I wish we had when we had to deal with some of these disasters back in the day," Brumby said.

Being able to check in on any part of the world in real time is one thing, but Descartes hopes to go further by applying artificial intelligence to see things in all those images that might not be immediately obvious to our eyes: the patterns that tie together all the activities captured in those countless pixels.

If a picture really is worth a thousand words, tools like the ones Descartes is developing could help write volumes about what our satellites are really seeing.

Go here to see the original:

Search Earth with AI eyes via a powerful new satellite image tool ... - CNET

MIT researchers use AI to find drugs that could be repurposed for COVID-19 – Healthcare IT News

The Massachusetts Institute of Technology announced this week that researchers had used machine learning to identify medications that may be repurposed to fight COVID-19.

"Making new drugs takes forever," Caroline Uhler, a computational biologist in MIT's Department of Electrical Engineering and Computer Science and the Institute for Data, Systems and Society, said in a press statement. "Really, the only expedient option is to repurpose existing drugs."

The research from Uhler's team, which appears in the journal Nature Communications, notes that the novel coronavirus tends to have much more severe effects in older patients.

"Since the mechanical properties of the lung tissue change with aging, this led us to hypothesize an interplay between viral infection/replication and tissue aging," wrote the researchers.

WHY IT MATTERS

The researchers pointed out that lung tissue becomes stiffer as a person gets older, and it shows different patterns of gene expression than in younger people in response to the same signal.

"We need to look at aging together with SARS-CoV-2: what are the genes at the intersection of these two pathways?" said Uhler.

As the study explains, the team generated a list of possible drugs using an autoencoder before mapping the network of genes and proteins involved in aging and novel coronavirus infection. They then pinpointed genes causing cascading effects throughout the network using statistical algorithms.
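
As a rough illustration of the autoencoder step, here is a tiny tied-weight linear autoencoder trained on synthetic data. The real study used a deep autoencoder over drug-induced gene-expression signatures; the dimensions, data and training loop below are invented for the sketch.

```python
# Toy tied-weight linear autoencoder: compress 50-dimensional "expression"
# vectors to 8 dimensions and reconstruct them, minimizing squared error.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))           # 200 samples, 50 "genes"
W = rng.normal(scale=0.1, size=(50, 8))  # encoder weights; decoder is W.T

def loss(X, W):
    recon = X @ W @ W.T                  # encode then decode
    return float(np.mean((X - recon) ** 2))

initial = loss(X, W)
lr = 0.01
for _ in range(300):
    recon = X @ W @ W.T
    err = recon - X                       # reconstruction error
    grad = X.T @ err @ W + err.T @ X @ W  # gradient w.r.t. tied weights W
    W -= lr * grad / len(X)
print(loss(X, W) < initial)  # True: reconstruction error decreases
```

The learned low-dimensional code is what makes drugs comparable: compounds whose expression profiles land near each other in the code space become candidate leads, which is the role the autoencoder plays in the pipeline described above.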

"Among the various protein kinases ... identified by our drug repurposing pipeline, RIPK1 was singled out by our causal analysis as being upstream of the largest number of genes that were differentially expressed by SARS-CoV-2 infection and aging," wrote the researchers in the study. In other words, drugs that act on RIPK1 may have the potential to treat COVID-19.

"Given the distinct pathways elicited by RIPK1, there is a need to develop appropriate cell culture models that can differentiate between young and aging tissues to validate our findings experimentally and allow for highly specific and targeted drug discovery programs," read the study.

THE LARGER TREND

Machine learning and artificial intelligence have been instrumental for many facets of COVID-19 research, with scientists using them to predict the length of hospitalization and probable outcomes among patients, as well as to detect the disease in lung scans and improve treatment options.

Cris Ross, CIO at the Mayo Clinic, said in December that AI has been key to understanding COVID-19.

Around the world, Ross said, algorithms are being used to "find powerful things that help us diagnose, manage and treat this disease, to watch its spread, to understand where it's coming next, to understand the characteristics around the disease and to develop new therapies."

ON THE RECORD

"While our work identified particular drugs and drug targets in the context of COVID-19, our computational platform is applicable well beyond SARS-CoV-2, and we believe that the integration of transcriptional, proteomic, and structural data with network models into a causal framework is an important addition to current drug discovery pipelines," wrote the MIT research team.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.

Read more:

MIT researchers use AI to find drugs that could be repurposed for COVID-19 - Healthcare IT News

How AI is Transforming the Contact Center – Customer Think

Today's contact center agents must be able to communicate with customers not only on the phone but also via social media, instant messaging, video conferencing and web chat. How can humans do it all? Increasingly, they can't.

That's why many companies are implementing bots powered by artificial intelligence to work in their contact centers and communicate with their customers. Gartner predicts that, by 2020, 85% of all customer interactions will no longer be managed by humans.

Facebook, Apple, Microsoft and Google are all building virtual assistants and chatbots that can respond to voice queries and engage in a fairly natural dialog with users. Even Taco Bell has a TacoBot that helps customers place pickup orders for select menu items.

In theory, virtual assistants will greatly improve the customer experience, because AI bots can store endless amounts of data and access relevant information at the right time to give customers exactly what they want. Would you like Sprite with your Crunchwrap Supreme? AI bots can also help organizations boost efficiencies and reduce costs, because organizations will no longer have to operate contact centers staffed 24/7 by employees around the globe.

In fact, AI is already dramatically changing contact centers. It's making contact centers more efficient with bots that can quickly answer the questions most commonly asked by customers. AI is even helping to predict customer behavior, providing advice to customer service reps on how best to solve a particular issue.

If AI can improve the operation of contact centers, that's a win for customers and companies. In many ways, contact centers are the heart and soul of the enterprise. They're often the most intimate point of contact between a business and its customers, and what happens, or doesn't happen, in the contact center can make or break the customer experience.

Conversely, the better AI gets, the greater the potential for error. As AI handles more data, and more types of data, the complexity of its data interactions grows, and so does the possibility for mistakes.

Thus, as contact centers move toward automation, it's crucial that companies be able to observe the effectiveness of their customer interactions and their AI solutions. They must be aware when interactions wander off track, because their customers certainly will.

If a company deploys a chatbot and the bot misbehaves, customers will notice it right away. And if the misbehavior is egregious enough, it can damage the company brand. Perhaps you remember Microsoft's Tay bot disaster of 2016, when Twitter users took less than 24 hours to turn Tay from an innocent chatbot modeled to speak like a teenage girl into a misogynistic, racist monster.

So how can your business take advantage of AI while guaranteeing a consistently excellent experience for your customers? You need to constantly monitor the experiences you deliver. Monitoring enables you to identify issues fast so you can take rapid action to protect the customer experience. You can keep systems humming and nip issues in the bud, in real time.

For instance, with AI you still need to monitor the time it takes to complete a particular interaction and know if the customer was satisfied with the experience. AI promises to alleviate many of the burdens associated with the contact center, but you still need a complete view of the customer experience. You still need to avoid issues and delays that frustrate your customers. And, best case, you will need to identify those issues before your customers do.

You will also still be required to record customer interactions if you are in a regulated industry. Aside from asking a human to check every recorded voice call, which defeats the purpose of using bots, you will need to invest in call recording assurance technology. This helps validate the physical presence of the recording files on the servers and then validates their audio quality using complex algorithms.
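
The presence-validation half of that idea can be sketched simply, assuming a hypothetical storage layout of one WAV file per interaction ID; real assurance products go further and score the audio quality itself with signal-processing algorithms.

```python
# Minimal sketch of recording-assurance presence checking: flag any
# interaction whose recording file is missing, unreadable or empty.
import os
import tempfile
import wave

def audit_recordings(interaction_ids, recordings_dir):
    """Return the interaction IDs with a missing or unusable recording."""
    flagged = []
    for iid in interaction_ids:
        path = os.path.join(recordings_dir, f"{iid}.wav")
        try:
            with wave.open(path, "rb") as wav:
                if wav.getnframes() == 0:   # file exists but holds no audio
                    flagged.append(iid)
        except (FileNotFoundError, wave.Error):
            flagged.append(iid)
    return flagged

# Demo: one valid recording, one missing.
tmp = tempfile.mkdtemp()
with wave.open(os.path.join(tmp, "call-001.wav"), "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(b"\x00\x00" * 8000)       # one second of silence
print(audit_recordings(["call-001", "call-002"], tmp))  # ['call-002']
```

Running a sweep like this on a schedule catches compliance gaps automatically, without asking a human to listen to every call, which is exactly the trap the paragraph above warns against.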

Regardless of humans or bots interacting with customers, the quality of the communication remains very important, including voice quality. The best responses are worthless if they cannot be understood. Organizations considering AI and bots for their contact center first need to ensure that they are proactively managing voice quality.

AI is poised to radically transform the customer experience. And we're only at the beginning. In a 2016 worldwide survey by Xerox, 42% of respondents predicted that the contact center as we know it now will cease to exist by 2025. Indeed, the steady progress toward deeper implementation of AI in the contact center is inevitable, because it will allow organizations to improve service levels and reduce costs.

However, those responsible for implementing AI and bots in their contact center would be wise to remember: when the only human on the line is the customer, there is nobody to hear them scream.

Skip Chilcott

IR

Skip leads worldwide Product Marketing at IR. He's a 20+ year veteran of the unified communications and collaboration industry, starting in the late '90s with Placeware Web Conferencing. After the Microsoft acquisition of Placeware in 2003, he spent 13 years at Microsoft, involved early with Real-time Collaboration, Unified Communications, and Cloud Productivity products and services. Most recently he led go-to-market strategy and execution for Skype for Business in the US. He has a passion for helping organizations large and small to maximize success, effectiveness, and growth through the ado

The rest is here:

How AI is Transforming the Contact Center - Customer Think