Why impartiality has to be a key part of your AI game plan – E&T Magazine

A few simple principles can help businesses avoid the potentially disastrous implications of unleashing artificial-intelligence-based agents that develop their own unconscious bias.

With 60 per cent of UK companies already using or planning to implement artificial intelligence, the debate around its ethical challenges is gaining momentum. Earlier this year, for example, the European Commission drafted a white paper on AI outlining a new approach to excellence and trust, a clear indication that Europe is seriously considering stricter measures governing its use.

According to Gartner, a quarter of employees who already use technology in their work will have an AI-powered digital colleague that they interact with on a daily basis by 2021. While the opportunity to create new efficiencies and offer an elevated level of service to consumers is immense, so are the risks.

For organisations just getting started, there are some key considerations to apply in the process of defining a responsible AI game plan.

First, you need to plan for the unexpected. Remember what happened when Microsoft deployed its customer-facing chatbot, Tay, on Twitter? Within 24 hours, Tay had become a racist and sexist spokesperson by re-posting content learned from her online interactions. This illustrates that while you cannot predict every possible interaction, your implementation strategy must be carefully planned. Common roll-out strategy safety checks should include testing with representative users before roll-out, using human oversight as an extra layer of analysis and safety, and designing the way the digital employee responds to unplanned and out-of-scope interactions.

Companies using intelligent digital employees must also think about how their brand fits in: the style and tone of the dialogue, the extent to which the agent can execute tasks, and the audiences it interacts with. Whether an agent is used internally or in customer service, the AI solution inevitably becomes a brand ambassador. This understanding will help define appropriate and consistent guidelines, which will ensure the AI is trained to represent the values, goals, and culture of the business.

The collection and provisioning of adequate training data is another big challenge for AI developers. Amazon, for example, has been scrutinised for the bias of its automated recruitment tool, trained with data from CVs submitted over a 10-year period, which reflected the gender imbalance within the tech industry and therefore unfairly disadvantaged female applicants. Training data and modelling are only as impartial as the team that designs them. It is critically important for companies to reflect on the potential gaps in their data objectivity, use test methods that explore the potential impacts, and operate with transparency on the progress of these issues.

To build a diverse team from a list of qualified applicants, consider using techniques that mitigate unconscious bias. Three tactics that can help are: removing names from the applicant review processes, augmenting candidate evaluation with a skills test to evaluate potential, and introducing structure to the interview process in order to focus on the same set of questions and answers for every possible hire. Lastly, consider setting diversity goals for your AI team that emphasise the importance of representation and balance of decision-making power for successful AI projects.

One way to leverage the potential of machine learning without risking a Tay-like disaster is through managed self-learning. This means that a digital employee is first trained via a defined data set and then learns through user interactions, a process overseen by a supervisory body that checks and approves the newly acquired knowledge. This approach ensures that learned behaviours correspond to the organisation's functional and ethical vision for its AI implementation, and are based on accurate and unbiased data.
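A minimal sketch of what such managed self-learning could look like in code, assuming a simple question-and-answer agent (the class, the policy check, and the fallback message here are all invented for illustration, not a real IPsoft API):

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    """Live answers the digital employee may use, plus a quarantine queue."""
    approved: dict = field(default_factory=dict)
    pending: list = field(default_factory=list)

    def learn(self, question: str, answer: str) -> None:
        # Newly learned behaviour is quarantined, never served directly.
        self.pending.append((question, answer))

    def review(self, approve) -> None:
        # A human supervisor decides what enters the live knowledge base.
        for question, answer in self.pending:
            if approve(question, answer):
                self.approved[question] = answer
        self.pending.clear()

    def respond(self, question: str) -> str:
        # Out-of-scope queries get a safe fallback instead of a guess.
        return self.approved.get(question, "Let me connect you with a colleague.")

kb = KnowledgeBase()
kb.learn("opening hours?", "We're open 9-5, Monday to Friday.")
kb.learn("who should I vote for?", "Party X, obviously.")
# Supervisor approves only answers that pass a policy check (here: no politics).
kb.review(lambda q, a: "vote" not in q)
print(kb.respond("opening hours?"))          # approved answer is served
print(kb.respond("who should I vote for?"))  # unapproved topic gets the fallback
```

The point of the design is that nothing learned from users reaches production until the `review` step signs it off, which is exactly the safeguard Tay lacked.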

Before embarking on any AI implementation, you need to think hard about what your ideal digital employee would look like. Ensuring its ethical development requires comprehensive and thoughtful planning: reviewing the make-up of the team, the representativeness of datasets, and how it will be managed on an ongoing basis.

Considering the potentially disastrous repercussions of unconscious bias in this field, it's clear that organisations need to make an enormous effort to ensure their AI solutions are designed and set up to act impartially. By adhering to the principles described here, companies can not only minimise the risk of reputational damage, but also benefit from better outcomes of their AI investments in the medium and long term.

Esther Mahr is a conversational experience designer with IPsoft. Noelle Langston is the company's director of experience design.



See how old Amazon’s AI thinks you are – The Verge

Amazon's latest artificial intelligence tool is a piece of image recognition software that can learn to guess a human's age. The feature is powered by Amazon's Rekognition platform, a developer toolkit that exists as part of the company's AWS cloud computing service. So long as you're willing to go through the process of signing up for a basic AWS account (that entails putting in credit card info, though Amazon won't charge you), you can try the age-guessing software for yourself.
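For developers, the underlying call is Rekognition's `DetectFaces` operation, which returns an `AgeRange` rather than a single number. The snippet below parses a response in that shape; the numeric values are made up for illustration, and fetching a real result would require an AWS account and a `boto3` client call such as `detect_faces(..., Attributes=["ALL"])`:

```python
# A sample response in the shape Rekognition's DetectFaces API returns
# (all the values here are invented for illustration).
sample_response = {
    "FaceDetails": [{
        "AgeRange": {"Low": 23, "High": 37},
        "Smile": {"Value": True, "Confidence": 99.2},
        "Eyeglasses": {"Value": False, "Confidence": 97.8},
    }]
}

def summarize_face(response: dict) -> str:
    """Turn the first detected face into a one-line description."""
    face = response["FaceDetails"][0]
    low, high = face["AgeRange"]["Low"], face["AgeRange"]["High"]
    smiling = "smiling" if face["Smile"]["Value"] else "not smiling"
    glasses = "wearing glasses" if face["Eyeglasses"]["Value"] else "no glasses"
    return f"Looks {low}-{high} years old, {smiling}, {glasses}."

print(summarize_face(sample_response))
# Looks 23-37 years old, smiling, no glasses.
```

The wide `Low`/`High` range is the "smart move" the article describes: the API never commits to one age.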

In what sounds like a smart move on Amazon's end, the tool gives a wide range instead of trying to pinpoint a specific number, along with the likelihood that the subject of the image is smiling or wearing glasses. Microsoft tried the latter approach back in 2015 with its own AI tool, resulting in some hilariously bad estimates that exposed fundamental weaknesses in how these types of image recognition algorithms function. Still, these experiments are more for fun, and both companies' cracks at age-guessing algorithms are a good way to mess around with AI if you're so inclined.

For instance, here's Amazon's tool trying to digest an old photo of me in my early twenties:

Here's what it had to say about a more recent photo:

And here's what it has to say about a drastically different image of me from nearly ten years ago, sans glasses and short hair:

Needless to say, I am not 30, 47, or any age in between in any of those photos. Microsoft is equally guilty of thinking I am far older than I actually am, perhaps a product of the beard, at least for the first two images. When giving both tools a photo of clean-shaven Microsoft CEO Satya Nadella, we get a slightly more accurate description: Amazon thinks Nadella is between 48 and 68 years old, while Microsoft's tool thinks he's 67. (Nadella is 49 years old.) Trying Bezos yields similar results that are only kinda, sorta on point, yet still within a range of acceptability.

The goal here, of course, is not to try and trick the software. After all, these tools are not supposed to be 100 percent accurate all of the time, and are purely for fun in Microsoft's case. Amazon, on the other hand, offers Rekognition to developers who are interested in implementing general object recognition, labeling, and other likeminded features for their products and services.

In this case, Amazon's Jeff Barr sees the age range feature as a way to "power public safety applications, collect demographics, or to assemble a set of photos that span a desired time frame," he writes in a blog post. For those purposes, Amazon's tool may be good enough. Even when it isn't, we know it will be getting better all the time, thanks to deep learning methods that train it using billions of publicly available images.


MWC: Completely superfluous ‘AI’ added to consumer items – Naked Security

Naked Security is reporting this week from Mobile World Congress in Barcelona.

All of a sudden, Olay, famous for its skincare products and cheering inducements to "Love the skin you're in", has an artificial intelligence (AI) capability. You heard it right: Olay, maker of your mum's face cream, is now in the machine intelligence game.

Naked Security reported on a crop of smart devices that are anything but smart from the CES tech show in Las Vegas in January, but tech companies have moved on for MWC, focusing instead on adding AI and machine learning to products that would do just as well without them.

With the launch of Olay Skin Advisor, an app that claims to reveal women's true skin age based on a selfie, Olay's new skinbot delivers personalised product information based on perceived improvement areas.

So now Olay's answer to HAL 9000 wants you to know that your skin might not be quite so loveable after all. That blotch on your left cheek? It could do with some work. The greasy patch on your nose? There's a potion for that!

Olay claims to be the first company of its kind to use such deep learning technology, and has the entirely modest aim of transforming the way that women shop for beauty products. Indeed, so serious is it about these new opportunities that it is sending a whole team of scientists, researchers and AI experts to Mobile World Congress (MWC) in Barcelona in order to big up its credentials.

And Olay is far from being the only non-tech company at the event pushing a dubious AI message. Somewhat bizarrely, machine learning has also found its way into the dental hygiene segment, including through an AI-enabled toothbrush called Ara.

Designations such as smart toothbrush are clearly now entirely insufficient; this is a dental implement with actual intelligence! You can bet that Isaac Asimov never saw that one coming.

Ara apparently has AI embedded directly in the brush handle, enabling it to capture data about brushing efficiency and thereby remove more bacteria, reduce plaque and prevent gingivitis.

Kolibree, the product's manufacturer, claims that it uses patent-pending M2M technology to provide a personalised, interactive tooth-brushing experience; each time the user brushes, embedded algorithms in the toothbrush learn the user's brushing pattern, meaning it can make personalised recommendations.

Of course, the toothbrush also syncs with the obligatory app, which serves as a personal and highly useful record of your brushing history. The possibilities are therefore endless. In future years, feeling nostalgic, perhaps on a rainy Sunday afternoon sometime in the early 2020s, you might be tempted to access the record of that particularly vigorous session back in March 2017. Such is the transformative power of technology.


How AI Is Transforming Healthcare – Forbes

Artificial intelligence is driving massive improvement and innovation in the healthcare and life sciences sectors. AI is expediting advances in drug research and discovery. It's allowing for better and faster diagnoses. And it's enabling far greater efficiency in business processes.

That's noteworthy considering so many of us stand to benefit from it. Healthcare is truly one of the industries in which AI stands to have the greatest impact.

Practitioners Can Leverage AI For Faster Diagnosis

AI allows for more accurate diagnoses by getting practitioners clean data quickly.

That clearly addresses a significant pain point in the healthcare arena. Misdiagnosis is estimated to cause up to 80,000 hospital deaths each year and results in billions of dollars in wasted medical spending.

Traditionally, a person would have to crunch terabytes of data to diagnose a patient. Now businesses can leverage AI for number crunching, and human beings can validate the results.

Scientists in China and the U.S. are experimenting with AI in medical diagnoses. My company did a test showing how our platform can be used to detect breast cancer. Bayer, Lunit and PathAI are applying AI to medical diagnosis, too.

Health Insurance Companies Can Employ AI To Automate Claims Processing

Insurance businesses can use AI to improve and automate their operations as well.

Let's say an insurance company reads a patient's email and attached claim. Then the company opens its health claim system and inputs the patient's policy number. That way the insurance company can see if the patient is eligible for the claim. The insurer then sends the patient an email saying something like, "Thank you for your claim. Out of $2,000, you're eligible for $1,762.26. Your check will go out in 22 days." This would work in a similar manner in cases in which insurance companies work directly with healthcare providers.

This process can be automated thanks to AI-based integrated automation platforms (IAPs). They automate the claims process end to end. IAPs can ingest, extract and analyze data from claims. They can leverage business rules to understand patient eligibility and coverage. And they can provide that information to the insurer, its partners and the patients.
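The "business rules" step of such a pipeline can be surprisingly simple. Here is a toy sketch of a rule-based eligibility calculation; the policy terms (deductible, coinsurance) and all figures are invented for illustration and are not taken from any real insurer or IAP product:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    deductible: float   # patient pays this much first
    coinsurance: float  # insurer's share of the remainder, e.g. 0.9 = 90%

def eligible_amount(claim_total: float, policy: Policy) -> float:
    """Business rule: insurer covers its coinsurance share above the deductible."""
    covered_base = max(claim_total - policy.deductible, 0.0)
    return round(covered_base * policy.coinsurance, 2)

policy = Policy(deductible=100.0, coinsurance=0.9)
payout = eligible_amount(2000.0, policy)
print(f"Out of $2,000.00, you're eligible for ${payout:,.2f}.")
# Out of $2,000.00, you're eligible for $1,710.00.
```

In a real IAP, rules like this would be fed by the data extracted from the claim document, and the resulting figure dropped into the templated reply to the patient.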

Aetna recently enlisted AI to settle health insurance claims. Cigna and Humana are using bots. And Mercer uses our IAP to optimize and personalize quotes for new policies and renewals.

Pharmaceutical Companies Can Use AI To Accelerate Development

AI is also accelerating the delivery of new findings in drug research. That will enable the pharmaceutical industry to provide doctors and patients with better treatments, faster.

The pharmaceutical industry generated $1.2 trillion in worldwide revenue in 2018, and it's poised to grow by 160% between 2017 and 2030. However, reports indicate that drug discovery and development have declining success rates and a stagnant pipeline.

AI could help drive improvement on these fronts.

For example, in 2007 a robot identified the function of a yeast gene. A more advanced robot discovered that a common ingredient in toothpaste offered the potential to treat drug-resistant malaria parasites. A machine learning algorithm helped identify a new antibiotic compound. And AI created a drug used to treat patients with obsessive-compulsive disorder.

Broad AI Adoption Hinges On Data, Trust, Education and Ethical AI

Having the right platform is just part of the equation to get to widespread AI adoption. AI also needs to have the right data. In addition, AI must gain user trust, which will continue to be built with ethical and responsible use of the technology.

By the right data, I mean enough representative data to get accurate results.

But that doesn't necessarily mean you need giant datasets. Fractal technology uses relatively small data sets to train the AI engine. It is based on a deterministic science and has been proven by such organizations as NASA.

Imagine, however, that your doctor advised you to buy a $149 thermal camera on Amazon. She told you to take a selfie of your breast and run the image through our platform to get results. You'd probably prefer to go through the painful yet more familiar experience of having a mammogram. Choosing the mammogram might not provide a better experience or results, but it's what's known.

That's where the need for education comes in. Those providing and using AI need to educate patients and healthcare providers that they are in safe hands. They can do that by demonstrating this fact.

For example, for the first 250,000 patient cases, you could have the AI engine provide a result, and you could have a doctor do the diagnosis as well.

You could present both to the patient and/or practitioner, and they would then see the AI is just as good at diagnosing illness as humans.

In other words, the right data will yield accurate results. And when people learn about that accuracy, they will trust the technology. Adoption will increase, and more people will benefit.

Everyone also benefits from ethical AI, which allows for greater accountability, traceability and sustainability. Ethical AI can work to define which AI use cases are and are not acceptable. And it can set rules for specific application requirements. That's important in healthcare, which can involve life-or-death decisions. The application requirements for AI in healthcare obviously differ from those in banking, for example.

Everybody Wants Faster, Better Results

Whatever the sector, reducing time and enhancing customer delight and outcomes are the goals of automation. Those goals are now achievable with AI, fractal science and IAPs.

The future for what this technology will bring to patient care and outcomes stands to be truly transformational. We've barely scratched the surface of what's possible.


A New Study Shows Basic Income Doesn’t Deter People From Working

Ya Basic!

If you’ve ever talked about basic income — that’s the idea that the government should give everyone a monthly stipend to cover their everyday needs — odds are you’ve run into That Guy. Maybe it was the chap in your undergrad philosophy class who loved Ayn Rand and playing devil’s advocate. Maybe it was a high-profile investor on cable news.

No matter who it was, they share the same argument — if the government provides for people by giving out a universal basic income, those people will lose the motivation to work and become totally dependent.

Plot twist: there’s now compelling evidence showing that in at least one case, basic income didn’t discourage people from working.

Everybody Wins

The National Bureau of Economic Research (NBER) published research earlier this year showing that Alaska’s ongoing basic income program — started in 1982 — had no direct impact on full-time employment in the state.

The state has the highest unemployment levels in the country, but that’s totally unrelated to its basic income program, and the payments actually helped boost part-time employment within the state by 17 percent. Take that, Kyle from Philosophy 101!

Ya Extra!

Basic income can be a great tool and safety net for those who need it, but some have argued that the government should go even further. Futurist Kai-Fu Lee argued in his new book “AI Superpowers: China, Silicon Valley, and the New World Order” that as more jobs are automated, basic income will only serve as a painkiller.

To truly serve people, the government should kick it up a notch and actively provide careers and salaries for those who serve their community in addition to providing that safety net. And given the NBER’s new findings, he may be on to something.

READ MORE: Critics of universal basic income argue giving people money for nothing discourages working — but a study of Alaska’s 36-year-old program suggests that’s not the case [Business Insider]

More on universal basic income: Finland’s Assessment of Basic Income Trial: Not Impressed


How AI is helping scientists in the fight against COVID-19, from robots to predicting the future – GeekWire

Artificial intelligence is helping researchers through different stages of the COVID-19 pandemic. (NIST Illustration / N. Hanacek)

Artificial intelligence is playing a part in each stage of the COVID-19 pandemic, from predicting the spread of the novel coronavirus to powering robots that can replace humans in hospital wards.

That's according to Oren Etzioni, CEO of Seattle's Allen Institute for Artificial Intelligence (AI2) and a University of Washington computer science professor. Etzioni and AI2 senior assistant Nicole DeCario have boiled down AI's role in the current crisis to three immediate applications: processing large amounts of data to find treatments, reducing spread, and treating ill patients.

"AI is playing numerous roles, all of which are important based on where we are in the pandemic cycle," the two told GeekWire in an email. But what if the virus could have been contained?

Canadian health surveillance startup BlueDot was among the first in the world to accurately identify the spread of COVID-19 and its risk, according to CNBC. In late December, the startups AI software discovered a cluster of unusual pneumonia cases in Wuhan, China, and predicted where the virus might go next.

"Imagine the number of lives that would have been saved if the virus spread was mitigated and the global response was triggered sooner," Etzioni and DeCario said.

Can AI bring researchers closer to a cure?

One of the best things artificial intelligence can do now is help researchers scour through the data to find potential treatments, the two added.

The COVID-19 Open Research Dataset (CORD-19), an initiative building on AI2's Semantic Scholar project, uses natural language processing to analyze tens of thousands of scientific research papers at an unprecedented pace.

Semantic Scholar, the team behind the CORD-19 dataset at AI2, was created on the hypothesis that cures for many ills live buried in scientific literature, Etzioni and DeCario said. "Literature-based discovery has tremendous potential to inform vaccine and treatment development, which is a critical next step in the COVID-19 pandemic."
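As a toy illustration of literature-based retrieval of this kind (this is not the actual Semantic Scholar ranking algorithm, and the paper titles and abstracts below are invented), a bag-of-words search over a corpus of abstracts can be sketched in a few lines:

```python
import math
from collections import Counter

# A toy corpus standing in for CORD-19 abstracts (all entries invented).
papers = {
    "Spike protein structure": "the spike protein binds the ace2 receptor on host cells",
    "Ventilation outcomes": "mechanical ventilation outcomes in severe pneumonia patients",
    "Antiviral screening": "screening antiviral compounds that block spike protein binding",
}

def score(query: str, text: str) -> float:
    """Bag-of-words overlap, normalized so longer abstracts don't get a free boost."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    overlap = sum(min(q[w], t[w]) for w in q)
    return overlap / math.sqrt(len(t))

def search(query: str) -> list:
    """Return paper titles ranked by relevance to the query."""
    return sorted(papers, key=lambda title: score(query, papers[title]), reverse=True)

print(search("spike protein binding"))  # the antiviral-screening abstract ranks first
```

Real systems replace the word-overlap score with TF-IDF weighting or learned embeddings, but the ranking skeleton is the same.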

The White House announced the initiative along with a coalition that includes the Chan Zuckerberg Initiative, Georgetown Universitys Center for Security and Emerging Technology, Microsoft Research, the National Library of Medicine, and Kaggle, the machine learning and data science community owned by Google.

Within four days of the dataset's release on March 16, it received more than 594,000 views and 183 analyses.

Computer models map out infected cells

Coronaviruses invade cells through spike proteins, but these take on different shapes in different coronaviruses. Understanding the shape of the spike protein in SARS-CoV-2, the virus that causes COVID-19, is crucial to figuring out how to target the virus and develop therapies.

Dozens of research papers related to spike proteins are in the CORD-19 Explorer to better help people understand existing research efforts.

The University of Washington's Institute for Protein Design mapped out 3D atomic-scale models of the SARS-CoV-2 spike protein that mirror those first discovered in a University of Texas at Austin lab.

The team is now working to create new proteins to neutralize the coronavirus, according to David Baker, director of the Institute for Protein Design. These proteins would have to bind to the spike protein to prevent healthy cells from being infected.

Baker suggests there's a pretty small chance that artificial intelligence approaches will be used for vaccines.

However, he said, "As far as drugs, I think there's more of a chance there."

It has been a few months since COVID-19 first appeared in a seafood-and-live-animal market in Wuhan, China. Now the virus has crossed borders, infecting more than one million people worldwide, and scientists are scrambling to find a vaccine.

"This is one of those times where I wish I had a crystal ball to see the future," Etzioni said of the likelihood of AI bringing researchers closer to a vaccine. "I imagine the vaccine developers are using all tools available to move as quickly as possible. This is, indeed, a race to save lives."

More than 40 organizations are developing a COVID-19 vaccine, including three that have made it to human testing.

Apart from vaccines, several scientists and pharmaceutical companies are partnering to develop therapies to combat the virus. Some treatments include using antiviral remdesivir, developed by Gilead Sciences, and the anti-malaria drug hydroxychloroquine.

AIs quest to limit human interaction

Limiting human interaction, in tandem with Washington Gov. Jay Inslee's mandatory stay-at-home order, is one way AI can help fight the pandemic, according to Etzioni and DeCario.

People can order groceries through Alexa without stepping foot inside a store. Robots are replacing clinicians in hospitals, helping disinfect rooms, provide telehealth services, and process and analyze COVID-19 test samples.

Doctors even used a robot to treat the first person diagnosed with COVID-19 in Everett, Wash., according to the Guardian. Dr. George Diaz, the section chief of infectious diseases at Providence Regional Medical Center, told the Guardian he operated the robot while sitting outside the patient's room.

The robot was equipped with a stethoscope to take the patients vitals and a camera for doctors to communicate with the patient through a large video screen.

Robots are one of many ways hospitals around the world continue to reduce risk of the virus spreading. AI systems are helping doctors identify COVID-19 cases through CT scans or x-rays at a rapid rate with high accuracy.

Bright.md is one of many startups in the Pacific Northwest using AI-powered virtual healthcare software to help physicians treat patients more quickly and efficiently without having them actually step foot inside an office.

Two Seattle startups, MDmetrix and TransformativeMed, are using their technologies to help hospitals across the nation, including University of Washington Medicine and Harborview Medical Center in Seattle. The companies software helps clinicians better understand how patients ages 20 to 45 respond to certain treatments versus older adults. It also gauges the average time period between person-to-person vs. community spread of the disease.

The Centers for Disease Control and Prevention uses Microsofts HealthCare Bot Service as a self-screening tool for people wondering whether they need treatment for COVID-19.

AI raises privacy and ethics concerns amid pandemic

Despite AI's positive role in fighting the pandemic, the privacy and ethical questions raised by it cannot be overlooked, according to Etzioni and DeCario.

Bellevue, Wash., residents are asked to report those in violation of Inslee's stay-home order to help clear up 911 lines for emergencies, GeekWire reported last month. Bellevue police then track suspected violations on the MyBellevue app, which shows hot spots of activity.

Bellevue is not the first. The U.S. government is using location data from smartphones to help track the spread of COVID-19. However, privacy advocates, like Jennifer Lee of Washington's ACLU, are concerned about the long-term implications of Bellevue's new tool.

Etzioni and DeCario also want people to consider the implications AI has on hospitals. Even though deploying robots to take over hospital wards helps reduce spread, it also displaces staff. Job loss because of automation is already at the forefront of many discussions.

Hear more from Oren Etzioni on this recent episode of the GeekWire Health Tech podcast.


With a $16M Series A, Chorus.ai listens to your sales calls to help your team close deals – TechCrunch

Just about everyone can benefit from an extra ear listening in at the right time. And while an ear dedicated to helping me remember the items my housemate asked me to pick up at the store last week has yet to be commercialized into a startup, Chorus.ai is riffing off the concept to deliver a solution to help sales teams close more deals. The Chorus team is announcing a $16 million Series A today led by Redpoint.

Taking a page from companies like Cogito and Deepgram, Chorus.ai is first and foremost a system for extracting insights from audio. But unlike Cogito, which got its start servicing call centers, Chorus.ai is setting its sights on sales.

In the style of X.ai, Chorus simply joins conference calls, in the same way a human would, to record and transcribe content in real-time. The platform flags important action items and topics that came up over the duration of calls.

"We have invested in algorithms that are tuned to sales, but even some simple keyword matching adds a lot of value," explains Roy Raanani, co-founder and CEO of Chorus.ai.
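A minimal sketch of what such keyword matching over a transcript might look like; the keyword lists, categories, and sample utterances below are all invented for illustration, not Chorus.ai's actual implementation:

```python
# Hypothetical keyword lists; a real deployment would tune these to its domain.
ACTION_CUES = ["follow up", "send over", "next step", "schedule"]
COMPETITORS = ["acme", "globex"]

def flag_utterances(transcript: list) -> dict:
    """Scan each utterance of a call for action items and competitor mentions."""
    flags = {"action_items": [], "competitor_mentions": []}
    for line in transcript:
        lowered = line.lower()
        if any(cue in lowered for cue in ACTION_CUES):
            flags["action_items"].append(line)
        if any(name in lowered for name in COMPETITORS):
            flags["competitor_mentions"].append(line)
    return flags

calls = [
    "Great, I'll send over the pricing sheet tomorrow.",
    "We're also evaluating Acme for this project.",
    "Thanks, talk soon.",
]
print(flag_utterances(calls))
```

Even this crude scan yields the two artifacts the article describes: a follow-up list for the rep and a trigger for surfacing competitive material.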

The platform that the Chorus team built broadly serves two functions. Because it transcribes calls, it serves as a valuable reference for sales reps when completing follow-ups on action items. But Chorus can also add enterprise value by acting as a training ground for reps to share best practices and closing strategies.

Chorus.ai is the latest example of an AI startup finding vitality through verticalization. Though Raanani was careful not to commit to any numbers, he explained that Chorus is likely better than products like IBM's Watson at the specialized task of sales support.

Intuitively, mastery of general speech recognition is a harder task than mastery of language commonly used in the domain of sales. Even today, with speech recognition mostly a solved problem, many systems still struggle to parse the complexities (or lack thereof) in the speech of young children, for example.

In just four months, the company transitioned through the gears of a seed-stage startup. Its first institutional round, led by Emergence Capital, who also participated in today's round, closed in October of last year for $6.3 million. All the while Raanani and his co-founder Micha Breakstone continued polishing off the Chorus platform. The team built out key integrations with a number of meeting and support platforms like Zoom, BlueJeans, WebEx and Salesforce. And they closed customers like Qualtrics and Marketo.

In the future, Raanani and his team want to double down on the real-time advantage of the Chorus platform. The idea is that sales reps pitching to potential clients could leverage the speed of machines to pull up content in real time to help close deals. If a customer on the phone references a competitor, Chorus could flash an informational aid on screen with known differentiators and past successful pitches to give the sales rep a smarter ace card.


Caption Health AI Awarded FDA Clearance for Point-of-Care Ejection Fraction Evaluation – HIT Consultant

What You Should Know:

Caption Health AI is awarded FDA 510(k) clearance for its innovative point-of-care ejection fraction evaluation.

Latest AI ultrasound tool makes it even easier to automatically assess ejection fraction, a key indicator of cardiac function, at the bedside, including on the front lines of the COVID-19 pandemic.

Caption Health, a Brisbane, CA-based leader in medical AI technology, today announced it has received FDA 510(k) clearance for an updated version of Caption Interpretation, which enables clinicians to obtain quick, easy and accurate measurements of cardiac ejection fraction (EF) at the point of care.

Impact of Left Ventricular Ejection Fraction

Left ventricular ejection fraction is one of the most widely used cardiac measurements and is a key measurement in the assessment of cardiac function across a spectrum of cardiovascular conditions. Cardiovascular diseases kill nearly 700,000 Americans annually, according to the Centers for Disease Control and Prevention; furthermore, considering EF as a new vital sign may shed light on determining cardiac involvement in the progression of COVID-19. A recent global survey published in European Heart Journal Cardiovascular Imaging reported that cardiac abnormalities were observed in half of all COVID-19 patients undergoing ultrasound of the heart, and clinical management was changed in one-third of patients based on imaging.

How Caption Interpretation Works

Caption Interpretation applies end-to-end deep learning to automatically select the best clips from ultrasound exams, perform quality assurance and produce an accurate EF measurement. The technology incorporates three ultrasound views into its fully automated ejection fraction calculation: apical 4-chamber (AP4), apical 2-chamber (AP2) and the readily obtained parasternal long-axis (PLAX) view, an industry first. While ejection fraction is commonly measured using the more challenging apical views, the PLAX view is often easier to acquire at the point of care in situations where patients may not be able to turn on their sides, such as intensive care units, anesthesia preoperative settings and emergency rooms. This software provides unprecedented access for healthcare providers to bring specialized ultrasound techniques to the bedside.

"Developing artificial intelligence that mimics an expert physician's eye with comparable accuracy to automatically calculate EF, including from the PLAX view, which has never been done before, is a major breakthrough," said Roberto M. Lang, MD, FASE, FACC, FESC, FAHA, FRCP, Professor of Medicine and Radiology and Director of Noninvasive Cardiac Imaging Laboratories at the University of Chicago Medicine and past president of the American Society of Echocardiography. "Whether you are assessing cardiac function rapidly, or looking to monitor changes in EF in patients with heart failure, Caption Interpretation produces a very reliable assessment."

Caption Interpretation Benefits

At the point of care, a less precise visual assessment of EF is frequently performed in lieu of a quantitative measurement due to resource and time constraints. Using Caption Interpretation in these settings provides the best of both worlds: it is as easy as performing a visual assessment, but with comparable performance to an expert quantitative measurement.

Caption Interpretation was trained on millions of image frames to correctly estimate ejection fraction, emulating the way an expert cardiologist learns by evaluating EF as part of their clinical practice. While virtually all commercially available EF measurement software works by tracing endocardial borders, Caption Interpretation analyzes every pixel and frame in a given clip to produce highly accurate EF measurements.

Caption Health broke new ground in 2018 when it received the first FDA clearance for a fully automated EF assessment software. Two years later, Caption Interpretation remains the only fully automated EF tool available to providers, and, with today's clearance, continues to be the pacesetter in ultrasound interpretation.

"We are pleased to have received FDA clearance for our latest AI imaging advancement, our third so far this year," said Randolph P. Martin, MD, FACC, FASE, FESC, Chief Medical Officer of Caption Health, Emeritus Professor of Cardiology at Emory University School of Medicine, and past president of the American Society of Echocardiography. "An accurate EF measurement is an indispensable tool in a cardiac functional assessment, and this update to Caption Interpretation makes it easier for time-constrained clinicians to incorporate it into their practice."

Recent Traction/Milestones

Caption Interpretation works in tandem with Caption Guidance, cleared by the FDA earlier this year, as part of the Caption AI platform. Caption Guidance emulates the expertise of a sonographer by providing over 90 types of real-time instructions and feedback. These visual prompts direct users to make specific transducer movements to optimize and capture a diagnostic-quality image. In contrast, use of other ultrasound systems requires years of expertise to recognize anatomical structures and make fine movements, limiting access to clinicians with specialized training.

The company recently closed its Series B funding round with $53 million to further develop and commercialize this revolutionary ultrasound technology that expands patient access to high-quality and essential care.

See the article here:

Caption Health AI Awarded FDA Clearance for Point-of-Care Ejection Fraction Evaluation - HIT Consultant

Cities aren’t even close to being ready for the AI revolution – Axios

Globally, no city is even close to being prepared for the challenges brought by AI and automation. Of those ranking highest in terms of readiness, nearly 70% are outside the U.S., according to a report by Oliver Wyman.

Why it matters: Cities are ground zero for the 4th industrial revolution. 68% of the world's population will live in cities by 2050, per UN estimates. During the same period, AI is expected to upend most aspects of how those people live and work.

The big picture: Many cities are focused on leveraging technology to improve their own economies, such as becoming more efficient and sustainable "smart cities" or attracting companies to compete with Silicon Valley.

What they found: No city or continent has a significant advantage when it comes to AI readiness, but some have parts of the recipe.

By the numbers: Here are the survey stats that stood out.

Cities to watch:

Reality check: Cities can't deal with the repercussions of AI on their own. National and regional governments will also have to step in with policy strategies in collaboration with businesses.

Go deeper: See how your city measures up

Read more from the original source:

Cities aren't even close to being ready for the AI revolution - Axios

Admiral Seguros Is The First Spanish Insurer To Use Artificial Intelligence To Assess Vehicle Damage – PRNewswire

To do this, Admiral Seguros is using an AI solution, developed by the technology company Tractable, which accurately evaluates vehicle damage with photos sent through a web application. The app, via the AI, completes the complex manual tasks that an advisor would normally perform and produces a damage assessment in seconds, often without the need for further review.

Upon receiving the assessment, Admiral Seguros will use it to make immediate payment offers to policyholders when appropriate, allowing them to resolve claims in minutes, even on the first call.

Jose Maria Perez de Vargas, Head of Customer Management at Admiral Seguros, said: "Admiral Seguros continues to advance in digitalisation as a means to provide a better service to our policyholders, providing them with an easy, secure and transparent means of evaluating damages without the need for travel, achieving compensation in a few hours. It's a simple, innovative and efficient claims management process that our clients will surely appreciate."

Adrien Cohen, co-founder and president of Tractable, said: "By using our AI to offer immediate payments, Admiral Seguros will resolve many claims almost instantly, to the delight of its customers. This is central to our mission of using Artificial Intelligence to accelerate recovery, converting the process from weeks to minutes."

Tractable's AI uses deep learning for computer vision, in addition to machine learning techniques. The AI is trained with many millions of photographs of vehicle damage, and the algorithms learn from experience by analyzing a wide variety of different examples. Tractable's technology can be applied globally to any vehicle.

The AI enables insurers to assess car damage, shares recommended repair operations, and guides the claims management process to ensure these are processed and settled as quickly as possible.

According to Admiral Seguros, the application of this technology in the insurance sector will be a great step in digitization and will offer a great improvement in the customer experience of Admiral's insurance brands in Spain, Qualitas Auto and Balumba.

About Tractable:

Tractable develops artificial intelligence for accident and disaster recovery. Its AI solutions have been deployed by leading insurers across Europe, North America and Asia to accelerate accident recovery for hundreds of thousands of households. Tractable is backed by $55m in venture capital and has offices in London, New York City and Tokyo.

About Admiral Seguros

In Spain, Admiral Group plc has been based in Seville since 2006 thanks to the creation of Admiral Seguros. More than 700 people work from there and for the entire national territory, cementing and marketing their two commercial brands: Qualitas Auto, and Balumba.

Recognized as the third best company to work for in Spain, the sixth in Europe and the eighteenth in the world by the consultancy Great Place to Work, Admiral Seguros is committed to a corporate culture focused on people.

SOURCE Tractable

https://tractable.ai

Originally posted here:

Admiral Seguros Is The First Spanish Insurer To Use Artificial Intelligence To Assess Vehicle Damage - PRNewswire

Pimloc gets $1.8M for its AI-based visual search and redaction tool – TechCrunch

U.K.-based Pimloc has closed a £1.4 million (~$1.8 million) seed funding round led by Amadeus Capital Partners. Existing investor Speedinvest and other unnamed shareholders also participated in the round.

The 2016-founded computer vision startup launched an AI-powered photo classifier service called Pholio in 2017, pitching the service as a way for smartphone users to reclaim agency over their digital memories without having to hand over their data to cloud giants like Google.

It has since pivoted to position Pholio as a specialist search and discovery platform for large image and video collections and live streams (such as those owned by art galleries or broadcasters) and also launched a second tool powered by its deep learning platform. This product, Secure Redact, offers privacy-focused content moderation tools enabling its users to find and redact personal data in visual content.

An example use case it gives is for law enforcement to anonymize bodycam footage so it can be repurposed for training videos or prepared for submitting as evidence.

"Pimloc has been working with diverse image and video content for several years, supporting businesses with a host of classification, moderation and data protection challenges (image libraries, art galleries, broadcasters and CCTV providers)," CEO Simon Randall tells TechCrunch.

"Through our work on the visual privacy side we identified a critical gap in the market for services that allow businesses and governments to manage visual data protection at scale on security footage. Pimloc has worked in this area for a couple of years building capability and product; as a result, Pimloc has now focused the business solely around this mission."

Secure Redact has two components: a first (automated) step that detects personal data (e.g. faces, heads, bodies) within video content. On top of that is what Randall calls a layer of "intelligent tools" letting users quickly review and edit results.

"All detections and tracks are auditable and editable by users prior to accepting and redacting," he explains, adding: "Personal data extends wider than just faces into other objects and scene content, including ID cards, tattoos, phone screens (body-worn cameras have a habit of picking up messages on the wearer's phone screen as they are typing, or sensitive notes on their laptop or notebook)."
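The two-stage flow described above — automated detection, human review, then redaction of accepted regions — can be sketched in miniature. The box format, the frame-as-2D-list representation and the blackout fill are assumptions for illustration, not Pimloc's actual implementation (a real system would blur or pixelate regions of video frames).

```python
# Minimal sketch of detect-review-redact: boxes that survived human review
# are blacked out in a copy of the frame. Purely illustrative.
def redact(frame, accepted_boxes):
    """frame: 2D list of pixel values.
    accepted_boxes: list of (x0, y0, x1, y1), half-open ranges,
    already reviewed and accepted by a human operator."""
    out = [row[:] for row in frame]  # never mutate the source footage
    for x0, y0, x1, y1 in accepted_boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                out[y][x] = 0  # overwrite the personal data
    return out

frame = [[255] * 4 for _ in range(3)]  # a tiny all-white 4x3 "frame"
print(redact(frame, [(1, 0, 3, 2)]))
# → [[255, 0, 0, 255], [255, 0, 0, 255], [255, 255, 255, 255]]
```

Keeping the original frame untouched mirrors the auditability requirement: the detections remain editable until the operator accepts them.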

One specific user of redaction with the tool he mentions is the University of Bristol. There, a research group, led by Dr Dima Damen, an associate professor in computer vision, is participating in an international consortium of 12 universities which is aiming to amass the largest data set on egocentric vision and needs to be able to anonymise the video data set before making it available for academic/open source use.

On the legal side, Randall says Pimloc offers a range of data processing models, thereby catering to differences in how/where data can be processed. "Some customers are happy for Pimloc to act as data processor and use the Secure Redact SaaS solution; they manage their account, they upload footage and can review/edit/update detections prior to redaction and usage. Some customers run the Secure Redact system on their servers where they are both data controller and processor," he notes.

"We have over 100 users signed up for the SaaS service covering mobility, entertainment, insurance, health and security. We are also in the process of setting up a host of on-premise implementations," he adds.

Asked which sectors Pimloc sees driving the most growth for its platform in the coming years, he lists the following: smart cities/mobility platforms (with safety/analytics demand coming from the likes of councils, retailers, AVs); the insurance industry, which he notes is capturing and using an increasing amount of visual data for claims and risk monitoring and thus looking at responsible systems for data management and processing; video/telehealth, with traditional consultations moving into video and driving demand for visual diagnosis; and law enforcement, where security goals need to be supported by visual privacy designed in by default (at least where forces are subject to European data protection law).

On the competitive front, he notes that startups are increasingly focusing on specialist application areas for AI, arguing they have an opportunity to build compelling end-to-end propositions which are harder for larger tech companies to focus on.

For Pimloc specifically, he argues it has an edge in its particular security-focused niche given deep expertise and specific domain experience.

"There are low barriers to entry to create a low-quality product but very high technical barriers to create a service that is good enough to use at scale with real in-the-wild footage," he argues, adding: "The generalist services of the larger tech players do not match up with domain-specific provisions of Pimloc/Secure Redact. Video security footage is a difficult domain for AI; systems trained on lifestyle/celebrity or other general data sets perform poorly on real security footage."

Commenting on the seed funding in a statement, Alex van Someren, MD of Amadeus Capital Partners, said: "There is a critical need for privacy by design and large-scale solutions, as video grows as a data source for mobility, insurance, commerce and smart cities, while our reliance on video for remote working increases. We are very excited about the potential of Pimloc's products to meet this challenge."

"Consumers around the world are rightfully concerned with how enterprises are handling the growing volume of visual data being captured 24/7. We believe Pimloc has developed an industry-leading approach to visual security and privacy that will allow businesses and governments to manage the usage of visual data whilst protecting consumers. We are excited to support their vision as they expand into the wider enterprise and SaaS markets," added Rick Hao, principal at Speedinvest, in another supporting statement.

Continue reading here:

Pimloc gets $1.8M for its AI-based visual search and redaction tool - TechCrunch

How to Keep an Edge in the Era of Pervasive AI? – Via News Agency

AI adopters should be proactive in implementing certain measures if they want to gain or maintain an advantage over their industry peers as the disruptive technology becomes mainstream, according to global accounting firm Deloitte.

In the 3rd edition of its State of AI in the Enterprise report, Deloitte says there are three things that both current and future adopters can do.

Deloitte's analysis, which is based on a survey of 2,737 IT and line-of-business executives from around the world, found that many AI adopters are focused more on improving what they have than on creating something new.

Making processes more efficient and enhancing existing products and services were the top two benefits that the respondents were seeking from AI technologies.

Read more: COVID-19: A Pivotal Moment for Future of AI

"We found that companies are still using AI technologies mostly in IT- and cybersecurity-related functions. Forty-seven percent of respondents indicated that IT was one of the top two functions for which AI was primarily used," the accounting firm wrote.

As major functions for AI applications, IT was followed by cybersecurity, production and manufacturing, and engineering and product development. This is while marketing, human resources, legal, and procurement ranked at the bottom of the list.


According to the report, businesses will soon need to use AI to differentiate themselves by going beyond using the technology to just improve efficiency and automation.

They will need to use the technology to differentiate themselves and can do that by taking inspiration from inventive use cases to come up with solutions that are both useful and novel.

Push boundaries. Businesses should try to pursue a more diverse set of projects that could help enhance multiple business functions across the enterprise.

Create the new. A great area of opportunity for businesses is developing new AI-powered products and services that can solve problems that humans can't.

Expand the circle. In addition to the IT department, employees in other sections should become involved in AI efforts. New vendors, data sources, tools, techniques, and partnerships can be instrumental in achieving this goal.

The survey showed that AI adopters tend to buy their capabilities rather than build them.

About 50 percent are buying more than they build, and another 30 percent use an even blend of buying and building from scratch. Seasoned (53 percent) and Skilled (51 percent) adopters are more likely than Starters (44 percent) to buy the AI systems they need.

According to Deloitte, this suggests that many organizations prefer to experience a period of internal learning and experimentation before deciding on what is necessary.

AI adopters view being a smart consumer as critical to boosting competitive advantage. When asked to select the top initiative for increasing their competitive advantage from AI, adopters picked "modernizing our data infrastructure for AI" as their top choice, closely followed by "gaining access to the newest and best AI technologies."

The findings of the survey show that less than half of adopters (47 percent) claim to possess a high level of skill for selecting AI technologies and technology suppliers.

As more partners and vendors are offering their services, Deloitte says organizations should become savvier and choose the best-equipped ones to gain access to the latest and greatest technologies.

Leverage a diverse team. Both technical and business experts should be included in selecting AI technologies and suppliers as it is important to have a broad perspective from developers, integrators, end-users, and business owners.

Take a centralized approach. Businesses are advised to coordinate experiments, implementations, selection of AI technologies, and vendors across the enterprise as it helps avoid duplication of effort, competing methods, and multiple vendors.

The use of working groups, dedicated leaders, or communities of practice should be considered.

Focus on integrating and scaling. Businesses should make sure that vendors and partners can help them integrate AI solutions into their broader IT infrastructure, and verify that solutions can grow with the needs of the enterprise.

Read more: AI Replacing Teachers Will Have Dire Consequences

Deloitte says adopters face reservations as well despite strong enthusiasm for their AI efforts.

In fact, they rank managing AI-related risks as the top challenge for their AI initiatives, tied with persistent difficulties of data management and integrating AI into their company's processes.

Additionally, the survey's findings show that a troubling preparedness gap exists for adopters across a wide range of these potential operational, strategic, and ethical risks.

More than half of adopters report major or extreme concerns about these potential risks for their AI initiatives, while only four in 10 adopters rate their organization as fully prepared to address them.

According to the survey, 56 percent agree that their organization is slowing the adoption of AI due to the emerging risks.

"The same proportion believes that negative public perceptions will slow or stop the adoption of some AI technologies," reads the report.

Deloitte says businesses should develop a set of principles and processes to manage potential AI risks if they want to build trust within their business and with customers and partners.

Align risk-related efforts. As many of the risks associated with AI are not unique, it is important to integrate AI-related risk management with broader risk efforts. An AI specialist can help with training and coordination of efforts in this regard.

Challenge your vendors. Organizations should make sure that the AI solutions used are aligned with their ethical principles.

Monitor regulatory efforts. Businesses need to ensure that legal, risk, compliance, and IT leaders are informed of the latest laws and policies regarding AI technologies.

Deloitte states in its report that AI's early adopter phase is apparently ending and the market is now heading into the "early majority" chapter of this maturing set of technologies.

This is reflected in the forecasts of global market intelligence firm IDC, which predicts that spending on AI technologies will grow to $97.9 billion in 2023, more than two and a half times the spending level of 2019.


See the original post here:

How to Keep an Edge in the Era of Pervasive AI? - Via News Agency

AI photo check exposes scale of diversity problem at top firms – New Scientist

Men are on board

H. Armstrong Roberts/ClassicStock/Getty

By Timothy Revell

Bias in boardrooms is tricky to assess. Many companies don't publish diversity reports, making useful information difficult to come by and hampering efforts to tackle institutional biases. Now artificially intelligent algorithms have been used to dig down into the data, confirming that there is a lack of diversity at the top of the world's corporate ladder.

To evaluate the situation, researchers from biotech firm Insilico Medicine compiled pictures of the top executives taken from the websites of nearly 500 of the largest companies in the world. The final dataset comprised over 7200 photographs from companies spanning 38 countries.

They trained image recognition algorithms to automatically detect the age, race and sex of the board members, and compared the results to the age, race and gender profile of each firm's country to see if they reflected the general population. AI is far from perfect at interpreting images and Insilico Medicine doesn't specialise in this particular area, so the results should be taken with a pinch of salt. But, nonetheless, they do give an impression of the current state of play.
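The comparison step the researchers describe — board demographics versus the country's population profile — amounts to a simple gap calculation once the classifier has labelled the photos. The categories and numbers below are illustrative, not figures from the study.

```python
# Toy sketch of the baseline comparison: how far does a board's demographic
# share deviate from the country's population share? Negative values mean
# the group is under-represented on the board. Data here is made up.
def representation_gap(board_counts, population_share):
    """board_counts: {group: number of board members}.
    population_share: {group: fraction of the country's population}."""
    total = sum(board_counts.values())
    return {group: board_counts.get(group, 0) / total - share
            for group, share in population_share.items()}

gaps = representation_gap({"female": 3, "male": 12},
                          {"female": 0.5, "male": 0.5})
print(round(gaps["female"], 2))  # → -0.3 (women under-represented)
```

Aggregating this gap across hundreds of companies is what lets the study report figures like the 21.2 per cent female share quoted below.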

Evidence from other studies suggests that boardroom diversity is increasing year on year, but it is clear there is still a long way to go. Overall, the team found that only 21.2 per cent of the corporate executives in the study were female. And in every single company, the percentage of female board members was lower than the percentage of women capable of work in that country. Twenty-two companies had no women on their boards, with the majority of those firms being in Asia.

Nearly 80 per cent of the corporate executives in the study were white, with 3.6 per cent black and 16.7 per cent Asian. South Africa had the highest proportion of black executives, representative of the fact that 80 per cent of its population is black. However, the two South African companies included in the list still only reached 54 and 35 per cent in terms of the proportion of black board members.

In the US, many companies reflected the 12 per cent of the population that is black in their boardrooms, although there were also 30 companies without any black board members at all. The median age across all corporate executives in the study was 52.

"These huge companies lead industries and influence our everyday lives. Using machine learning makes it possible to examine their diversity in a way that couldn't be done before," says Polina Mamoshina at Insilico Medicine. The data for the study was collected on 20 March.

"This paper confirms that we live in a biased world," says Sandra Wachter at the Oxford Internet Institute, UK. However, acknowledging the problems this causes is only a crucial first step. "Having a public discourse about these issues is vital. It is important to find out where the biases stem from and tackle the roots," she says.

Anti-discrimination laws should be used to achieve parity at the top of companies, and a shift in mentality is required to start viewing diversity as an advantage, says Wachter. The study's methods could be used in any situation where management profiles are available but diversity data isn't, and could help examine the diversity within governments, universities or media outlets.


View post:

AI photo check exposes scale of diversity problem at top firms - New Scientist

AI Plant and Animal Identification Helps Us All Be Citizen Scientists – Smithsonian

Screenshots from the iNaturalist app, which uses "deep learning" to automatically identify what bug (or fish, bird, or mammal) you might be looking at.

On a recent trip to the local botanical gardens, I noticed a tall, striking purple flower I'd never noticed before. I tried to Google it, but I didn't know quite what to ask. "Purple flower" brought me pictures of narcissus and freesia, orchids and primrose, gladiolus and morning glory. None of them were the flower I'd seen.

But thanks to artificial intelligence, curious amateur naturalists like me now have better ways to identify the nature around us. Several new sites and apps use AI technology to put names to photographs.

iNaturalist.org is one of these sites. Founded in 2008, it has until now been solely a crowdsourcing site. Users post a picture of a plant or animal and a community of scientists and naturalists will identify it. Its mission is to connect experts and amateur "citizen scientists," getting people excited about plants and wildlife while using the data gathered to potentially help professional scientists monitor changes in biodiversity or even discover new species.

The crowdsourced model generally works well, says Scott Loarie, iNaturalist's co-director. But there are some limitations. First, it can be much harder to get an identification of your photograph depending on where you live. In California, where Loarie is based, he can get an identification within an hour. That's because a large number of the experts that frequent iNaturalist are based on the West Coast. But someone in, say, rural Thailand may have to wait much longer to receive an ID: the average amount of time it takes to get an identification is 18 days. Another issue: as the site has become more popular, the balance of observers (people posting pictures) to identifiers (people telling you what the pictures are) has become skewed, with far more observers than identifiers. This threatens to overwhelm the volunteer experts.

This month, iNaturalist plans to launch an app that uses AI to identify plants and animals down to the species level. The app takes advantage of so-called deep learning, using artificial neural networks that allow computers to learn as humans do, so their capabilities can advance over time.

"We're hopeful this will engage a whole new group of citizen scientists," Loarie says.

The app is trained by being fed labeled images from iNaturalist's massive database of "research grade" observations, those that have been verified by the site's community of experts. Once the model has been trained on enough labeled images, it begins to be able to identify unlabeled images. Currently iNaturalist is able to add a new species to the model every 1.7 hours. The more images uploaded by users and identified by experts, the better.

"The more stuff we get, the more trained up the model will be," Loarie says.

The iNaturalist team wants the model to always be accurate, even if that means not being as precise as possible. Right now the model tries to give a confident response about the animal's genus, then a more cautious response about the species, offering the top 10 possibilities. It currently is correct about the genus 86 percent of the time, and gives the species in its top 10 results 77 percent of the time. These numbers should improve as the model continues to be trained.
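This "confident about genus, cautious about species" behaviour is a standard hierarchical-thresholding pattern over a classifier's probability outputs, and can be sketched as follows. The threshold value, the data structure and the example probabilities are assumptions for illustration, not iNaturalist's actual model or numbers.

```python
# Illustrative sketch of hierarchical confidence: sum species probabilities
# up to the genus level, commit to the genus only above a threshold, and
# always return the top-10 species as cautious suggestions.
def classify(species_probs, genus_threshold=0.8):
    """species_probs: {(genus, species): probability}, summing to 1."""
    genus_probs = {}
    for (genus, _), p in species_probs.items():
        genus_probs[genus] = genus_probs.get(genus, 0.0) + p
    best_genus, best_p = max(genus_probs.items(), key=lambda kv: kv[1])
    top10 = sorted(species_probs, key=species_probs.get, reverse=True)[:10]
    # Only name the genus when the model is confident enough.
    genus_call = best_genus if best_p >= genus_threshold else None
    return genus_call, top10

probs = {("Fratercula", "arctica"): 0.55,   # Atlantic puffin
         ("Fratercula", "cirrhata"): 0.30,  # tufted puffin
         ("Alca", "torda"): 0.15}           # razorbill
genus, ranking = classify(probs)
print(genus, ranking[0])  # → Fratercula ('Fratercula', 'arctica')
```

Note how no single species clears 80 percent here, yet the genus does (0.55 + 0.30 = 0.85), which is exactly why a model can be confidently right at the genus level while hedging on the species.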

Playing around with a demo version, I entered a picture of a puffin perched on a rock. "We're pretty sure this is in the genus Puffins," it said, giving the correct species, Atlantic puffin, as the top suggested result. Then I entered a picture of an African clawed frog. "We're pretty sure this is in the genus Western spadefoot toads," it told me, offering African clawed frog as among its top 10 results.

The AI was not confident enough to make a recommendation about a picture of my son, but suggested he might be a northern leopard frog, a garden snail or a gopher snake, among other, non-human creatures. As all of these are spotted, I realized the computer vision was seeing the polka-dot background of my son's highchair and misidentifying it as part of the specimen. So I cropped the picture until only his face was visible and pressed classify. "We're pretty sure this is in the suborder Lizards," the AI responded. Either my baby looks like a lizard or, the real answer I presume, this shows that the model only recognizes what it's been fed. And no one is feeding it pictures of humans, for obvious reasons.

iNaturalist hopes the app will take pressure off its community of experts, and allow for a larger community of observers to participate, such as groups of schoolchildren. It could also allow for camera trapping: sending in streams of images from a camera trap, which takes a picture when it's triggered by motion. iNaturalist has discouraged camera trapping, as it floods the site with huge amounts of images that may or may not actually need expert identification (some images will be empty, while others would catch common animals like squirrels that the camera's owner could easily identify himself or herself). But with the AI that wouldn't be a problem. iNaturalist also hopes the new technology will engage a new community of users, including people who might have an interest in nature but wouldn't be willing to wait several days for an identification under the crowdsourced model.

Quick species identification could also be useful in other situations, such as law enforcement.

"Let's say TSA workers open a suitcase and someone's got geckos," says Loarie. "They need to know whether to arrest someone or not."

In this case, the AI could tell the TSA agents what type of gecko they were looking at, which could aid in an investigation.

iNaturalist is not the only site taking advantage of computer vision to engage citizen scientists. Cornell's Merlin Bird ID app uses AI to identify more than 750 North American birds. You just have to answer a few simple questions first, including the size and color of the bird you saw. Pl@ntNet does the same for plants, after you tell it what part of the plant it's looking at (flower, fruit, etc.).
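Merlin's question-first approach amounts to filtering a candidate list by the user's answers before (or alongside) any image analysis. The field names and the tiny bird "database" below are invented for illustration; they are not Merlin's actual data model.

```python
# Toy sketch of question-driven narrowing: the user's answers about size
# and color cut the candidate list down before finer identification.
def narrow_candidates(birds, size, color):
    """birds: list of dicts with 'name', 'size' and 'colors' keys."""
    return [b["name"] for b in birds
            if b["size"] == size and color in b["colors"]]

birds = [
    {"name": "American Robin", "size": "medium", "colors": {"red", "gray"}},
    {"name": "Northern Cardinal", "size": "medium", "colors": {"red"}},
    {"name": "Bald Eagle", "size": "large", "colors": {"brown", "white"}},
]
print(narrow_candidates(birds, "medium", "red"))
# → ['American Robin', 'Northern Cardinal']
```

A couple of cheap questions can shrink 750 species to a handful, which is why these apps ask before they guess.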

This is all part of a larger wave of interest in using AI to identify images. There are AI programs that can identify objects from drawings (even bad ones). AIs can look at paintings and identify artists and genres. Many experts think computer vision will play a huge role in healthcare, making it easier to identify, for example, skin cancers. Car manufacturers use computer vision to teach cars to identify and avoid hitting pedestrians. A plot point of a recent episode of the comedy Silicon Valley dealt with a computer vision app for identifying food. But since its creator only trained it on hot dogs (since training a neural network requires countless hours of human labor), it could only distinguish between "hot dogs" and "not hot dogs."

This question of human labor is important. Massive databases of correctly labeled images are crucial to training AIs, and can be hard to come by. iNaturalist, as a longtime crowdsourced site, already has exactly this kind of database, which is why its model has been advancing so quickly, Loarie says. Other sites and apps have to find their data elsewhere, often from academic images.

"It's still early days, but I guarantee in the next year you're going to see a proliferation of these kinds of apps," Loarie says.

Excerpt from:

AI Plant and Animal Identification Helps Us All Be Citizen Scientists - Smithsonian

Artificial intelligence expert moves to Montreal because it’s an AI hub – Montreal Gazette

Irina Rish, now a renowned expert in the field of artificial intelligence, first became drawn to the topic as a teenager in the former Soviet republic of Uzbekistan. At 14, she was fascinated by the notion that machines might have their own thought processes.

"I was interested in math in school and I was looking at how you improve problem solving and how you come up with algorithms," Rish said in a phone interview Friday afternoon. "I didn't know the word yet (algorithm) but that's essentially what it was. How do you solve tough problems?"

She read a book introducing her to the world of artificial intelligence and that kick-started a lifelong passion.

"First of all, they sounded like just mind-boggling ideas, that you could recreate in computers something as complex as intelligence," said Rish. "It's really exciting to think about creating artificial intelligence in machines. It kind of sounds like sci-fi. But the other interesting part of that is that you hope that by doing so, you can also better understand the human mind and hopefully achieve better human intelligence. So you can say AI is not just about computer intelligence but also about our intelligence. Both goals are equally exciting."

Read the original here:

Artificial intelligence expert moves to Montreal because it's an AI hub - Montreal Gazette

An ever-changing room of Ikea furniture could help AI navigate the world – MIT Technology Review

In a building across from its main office in Seattle, the Allen Institute for Artificial Intelligence (AI2) has enough Ikea furniture to configure 14 different apartments. The lab isn't going into interior design, not exactly. The resources are meant to train smarter algorithms for controlling robots.

Household robots like the Roomba function well only because their tasks are relatively simple. Meandering around, doubling back, and returning to the same spots over and over don't really matter when the objective is to relentlessly clean the same floor.

But anything that requires more efficient or complex navigation still trips up many state-of-the-art robots. The research needed to improve this status quo is also expensive, limiting most cutting-edge progress to well-funded commercial labs.

Now AI2 wants to kill two birds with one stone. On Tuesday, it announced a new challenge called RoboTHOR (THOR for The House Of inteRactions; yes, really). It will double as a way to crowdsource better navigation algorithms and lower the financial barriers for researchers who may not have robotics resources of their own.

The ultimate goal is to more rapidly advance AI by getting more research groups involved. Different communities should bring different perspectives and use cases that will expand the repertoire of robot capabilities, driving the field closer to more generalizable intelligence.

The lab has designed an easily reconfigurable room, the size of a cramped studio, to be the staging ground for all 14 apartment variations. It has also re-created identical virtual replicas in Unity, a popular video-game engine, as well as 75 other configurations, all of which have been open-sourced online. Together, these 89 total configurations will offer realistic simulation environments for teams around the world to train and test their navigation algorithms. The environments also come pre-loaded with models of AI2's robots and mirror real-world physics like gravity and light reflections as closely as possible.

The challenge specifically asks teams to develop algorithms that can get a robot from a random starting location within a room to an object in that room just by telling it the object's name. This will be more difficult than simple navigation because it will require the robot to understand the command and recognize the object in its visual field as well.
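As a rough illustration of this object-goal task, here is a toy navigation loop over a symbolic grid map. In RoboTHOR the map would be unknown and the agent would rely on learned vision and control, so the breadth-first search below is only a stand-in, and all names are invented.

```python
# Toy object-goal navigation: find a path from a start cell to the cell
# holding a named object. A real RoboTHOR agent learns this behavior
# from pixels; here the room is a symbolic map and BFS plays the policy.
from collections import deque

def navigate_to(grid, start, target_name):
    """Return a list of grid cells from start to the cell labeled
    target_name, or None if unreachable.
    grid maps (x, y) -> label ('free', 'wall', or an object name)."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if grid.get((x, y)) == target_name:
            return path  # reached the named object
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nxt in grid and grid[nxt] != "wall" and nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# A tiny invented room: one wall, one mug.
room = {(0, 0): "free", (1, 0): "free", (2, 0): "mug",
        (0, 1): "wall", (1, 1): "free", (2, 1): "free"}
print(navigate_to(room, (0, 0), "mug"))  # [(0, 0), (1, 0), (2, 0)]
```

The hard part the challenge targets is exactly what this sketch assumes away: turning camera input into that map, and recognizing that some pixels are "mug."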

Teams will compete in three phases. In phase one, they will be given the 75 digital-only simulation environments to train and validate their algorithms. In phase two, the highest performers will then be given four new simulation environments with corresponding physical doppelgangers. The teams will be able to remotely refine their algorithms by loading them into AI2s real robots.

In the final phase, the highest performers will need to demonstrate the generalizability of their algorithms in the last 10 digital and corresponding physical apartments. Whichever teams perform the best in this final phase will win bragging rights and an invitation to demo their models at the Conference on Computer Vision and Pattern Recognition, a leading AI research conference for vision-based systems.

After the challenge is over, AI2 plans to keep the setup available, giving anyone access to the environment to continue conducting robotics research. Researchers who clear a certain threshold of accuracy in the simulated environments (proving they won't crash the robots) will be allowed to remotely deploy their algorithms in the physical ones. The room will rotate between the different furniture configurations.

"We are going to maintain this environment, and we are going to maintain these robots," says Ani Kembhavi, a research scientist at AI2 who is leading the project. His team plans to develop a time-sharing system to allow different researchers to take turns remotely testing their algorithms in the real world.
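A time-sharing system of the kind Kembhavi describes can be sketched as a simple reservation queue. Every name below is hypothetical; AI2's actual scheduler has not been published.

```python
# Toy sketch of time-sharing a physical robot lab between research
# teams: first-come, first-served slots of fixed length. Hypothetical
# design -- AI2's real system may work quite differently.
from collections import deque

class RobotTimeShare:
    def __init__(self, slot_minutes=30):
        self.slot_minutes = slot_minutes
        self.queue = deque()

    def reserve(self, team):
        """Add a team to the back of the reservation queue."""
        self.queue.append(team)

    def next_slot(self):
        """Grant the next slot: returns (team, minutes), or None if
        no team is waiting."""
        if not self.queue:
            return None
        return self.queue.popleft(), self.slot_minutes

lab = RobotTimeShare()
lab.reserve("team-alpha")
lab.reserve("team-beta")
print(lab.next_slot())  # ('team-alpha', 30)
```

A production version would add authentication, the simulation-accuracy gate mentioned above, and timeouts, but the core is just a queue over a scarce physical resource.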

AI2 hopes the strategy will make robotics research more accessible by eliminating as much of the associated hardware costs as possible. It also hopes that the scheme will inspire other well-funded organizations to open up their resources in similar ways. Additionally, it purposely designed its reconfigurable room with low materials costs and globally available Ikea furniture; the setup cost roughly $10,000. Should other researchers want to build their own physical training spaces, they can easily replicate it locally and still match the virtual simulation environments.

Kembhavi, whose dad is an astronomer, likens the idea to the global sharing of telescopes. "Communities like astronomy have figured out how to take expensive resources and make them available to researchers all around the world," he says.

"That's our vision for this environment," he adds. "Embodied AI for all."

Link:

An ever-changing room of Ikea furniture could help AI navigate the world - MIT Technology Review

AI Experts Rank Deepfakes and 19 Other AI-Based Crimes By Danger Level – Unite.AI

Sarah Tatsis is the Vice President of Advanced Technology Development Labs at BlackBerry.

BlackBerry already secures more than 500M endpoints including 150M cars on the road. BlackBerry is leading the way with a single platform for securing, managing and optimizing how intelligent endpoints are deployed in the enterprise, enabling customers to stay ahead of the technology curve that will reshape every industry.

BlackBerry launched the Advanced Technology Development Lab (Blackberry Labs) in late 2019. What was the strategic importance of creating an entire new business division for BlackBerry?

As an innovation accelerator, BlackBerry Advanced Technology Development Labs is an intentional investment of 120 team members into the future of the company. The rise of the Internet of Things (IoT) alongside a dynamic threat landscape has fostered a climate where organizations have to guard against new threats and breaches at all times. We've handpicked the team to include experts in the embedded IoT space with diverse capabilities, including strong data science expertise, whose innovation funnel investigates, incubates and develops technologies to keep BlackBerry at the forefront of security innovation. ATD Labs works in strong partnership with the other BlackBerry business units, such as QNX, to further the company's commitment to safety, security and data privacy for its customers. BlackBerry Labs is also partnering with universities on active research and development. We're quite proud of these initiatives and think they will greatly benefit our future roadmap.

Last year, BlackBerry Labs successfully integrated Cylance's machine learning technology into BlackBerry's product pipeline. BlackBerry Labs is currently focused on incubating and developing new concepts to accelerate the innovation roadmaps for our Spark and IoT business units. My role is primarily helping to drive the innovation funnel and partner with our business units to deliver valuable solutions for our customers.

What type of products are being developed at BlackBerry Labs?

BlackBerry Labs is facilitating applied research and using insights gained to innovate in the lines of business where we're already developing market-leading solutions. For instance, we're applying machine learning and data science to our existing areas of application, including automotive, mobile security, etc. This is possible in large part due to the influx of BlackBerry Cylance technology and expertise, which allows us to combine our ML pipeline and market knowledge to create solutions that are securing information and devices in a really comprehensive way. As new technologies and threats emerge, BlackBerry Labs will allow us to take a proactive approach to cybersecurity, not only updating our existing solutions, but evaluating how we can branch out and provide a more comprehensive, data-based, and diverse portfolio to secure the Internet of Things.

At CES, for instance, we unveiled an AI-based transportation solution geared towards OEMs and commercial fleets. This solution provides a holistic view of the security and health of a vehicle and provides control over that security for a manufacturer or fleet manager. It also uses machine-learning-based continuous authentication to identify the driver of a vehicle based on past driving behavior. Born in BlackBerry Labs, this concept marked the first time BlackBerry Cylance's AI and ML technologies have been integrated with BlackBerry QNX solutions, which are currently powering upwards of 150 million vehicles on the road today.

For additional insights into how we envision AI and ML shaping the world of mobility in the years to come, I would encourage you to read Security Confidence Through Artificial Intelligence and Machine Learning for Smart Mobility from our recently released Road to Mobility guide. Also released at this year's CES, The Road to Mobility: The 2020 Guide to Trends and Technology for Smart Cities and Transportation is a comprehensive resource that government regulators, automotive executives and technology innovators can turn to for forward-thinking considerations for making safe and secure autonomous and connected vehicles a reality, delivering a transportation future that drivers, passengers and pedestrians alike can trust.

Featuring a mix of insights from both our own internal experts and recognized voices from across the transportation industry, the guide provides practical strategies for anyone who's interested in playing a vital role in shaping what the vehicles and infrastructure of our shared autonomous future will look like.

How important is artificial intelligence to the future of BlackBerry?

As both IoT and cybersecurity risk explode, traditional methods of keeping organizations, things, and people safe and secure are becoming unscalable and ineffective. Preventing, detecting, and responding to potential threats needs to account for large amounts of data and intelligent automation of appropriate responses. AI and data science include tools that address these challenges and are therefore critical to BlackBerry's roadmap. These tools allow BlackBerry to provide even greater value to our customers by reducing risk in efficient ways. BlackBerry leverages AI to deliver innovative solutions in the areas of cybersecurity, safety and data privacy as part of our strategy to connect, secure, and manage every endpoint in the Internet of Things.

For instance, BlackBerry trains our endpoint protection AI model against billions of files, good and bad, so that it learns to autonomously convict, or not convict, files pre-execution. The result of this massive, ongoing training effort is a proven track record of blocking payloads attempting to exploit zero-days for up to two years into the future.
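As one hedged illustration of what pre-execution scoring can look like (the actual Cylance model and its features are proprietary and far more sophisticated), a single static feature such as byte entropy can flag packed or encrypted payloads before they ever run:

```python
# Illustrative pre-execution file scoring from one static feature:
# Shannon entropy of the raw bytes. Packed or encrypted malware tends
# to look statistically random. The threshold is invented for this toy.
import math

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def convict(data: bytes, entropy_threshold: float = 7.5) -> bool:
    """Convict (block) a file whose bytes look near-uniformly random."""
    return byte_entropy(data) > entropy_threshold

plain = b"hello world " * 100            # ordinary low-entropy text
packed = bytes(range(256)) * 10          # uniformly distributed bytes
print(convict(plain), convict(packed))   # False True
```

A real model combines thousands of such static features (imports, section layout, strings, and more) and learns the decision boundary from labeled samples rather than using a hand-set threshold, which is also why legitimate compressed files don't get convicted on entropy alone.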

The ability to protect organizations from zero-day payloads, well before they are developed and deployed, means that when other IT teams are scrambling to recover from the next major outbreak, it will be business as usual for BlackBerry customers. For example, WannaCry, which rendered millions of computers across the globe useless, was prevented by a BlackBerry (Cylance) machine learning model developed, trained, and deployed 24 months before the malware was first reported.

BlackBerry's QNX software is embedded in more than 150 million cars. Can you discuss what this software does?

Our software provides the safe and secure software foundation for many of the systems within the vehicle. We have a broad portfolio of functional safety-certified software including our QNX operating system, development tools and middleware for autonomous and connected vehicles. In the automotive segment, the company's software is deployed across the vehicle in systems such as ADAS and Safety Systems, Digital Cockpits, Digital Instrument Clusters, Infotainment, Telematics, Gateways, V2X, and increasingly is being selected for chassis control and battery management systems that are advancing in complexity.

QNX software includes cybersecurity which protects autonomous vehicles from various cyber-attacks. Can you discuss some of the potential vulnerabilities that autonomous vehicles have to cyberattacks?

I think there is still a misconception out there that when you get into your car to drive home from work later today you might fall prey to a massive and coordinated vehicle cyberattack in which a rogue state threatens to hold you and your vehicle ransom unless you meet their demands. Hollywood movies are good at exaggerating what is possible, for example, the instant and entire compromise of fleets that undermines all safety systems in cars. While there are and always will be vulnerabilities within any system, exploiting a vulnerability at scale and with unprecedented reliability presents all kinds of hurdles that must be overcome, and would also require a significant investment of time, energy and resources. I think the general public needs to be reminded of this, and of the fact that hacks, if and when they do occur, are undesirable but not as catastrophic as movies would have you believe.

With a modern connected vehicle now containing well over 100 million lines of code and some of the most complex software ever deployed by automakers, the need for robust security has never been more important. As the software in a car grows so does the attack surface, which makes it more vulnerable to cyberattacks. Each poorly constructed piece of software represents a potential vulnerability that can be exploited by attackers.

BlackBerry is perfectly positioned to address these challenges as we have the solutions, the expertise and pedigree to be the safety certified and secure foundational software for autonomous and connected vehicles.

How does QNX software protect vehicles from these potential cyberattacks?

BlackBerry has a broad portfolio of products and services to protect vehicles against cybersecurity attacks. Our software has been deployed in critical embedded systems for over three decades and, it's worth pointing out, has also been certified to the highest level of automotive certification for functional safety with ISO 26262 ASIL D. As a company, we are investing significantly to broaden our safety and security product and services portfolio. Simply put, this is what our customers demand and rely on from us: a safe, secure and reliable software platform.

As it pertains to security, we firmly believe that security cannot be an afterthought. For automakers and the entire automotive supply chain, security should be inherent in the entire product lifecycle. As part of our ongoing commitment to security, we published a 7-Pillar Cybersecurity Recommendation to share our insight and expertise on this topic. In addition to our safety-certified and secure operating system and hypervisor, BlackBerry provides a host of security products such as managed PKI, FIPS 140-2 certified toolkits, key inject tools, binary code static analysis tools, security credential management systems (SCMS), and secure Over-The-Air (OTA) software update technology. The world's leading automakers, tier ones, and chip manufacturers continue to seek out BlackBerry's safety-certified and highly secure software for their next-generation vehicles. Together with our customers we will help to ensure that the future of mobility is safe, secure and built on trust.

Can you elaborate on what is the QNX Hypervisor?

The QNX Hypervisor enables developers to partition, separate, and isolate safety-critical environments from non-safety-critical environments reliably and securely, and to do so with the precision needed in an embedded production system. The QNX Hypervisor is also the world's first ASIL D safety-certified commercial hypervisor.

What are some of the auto manufacturers using QNX software?

BlackBerrys pedigree in safety, security, and continued innovation has led to its QNX technology being embedded in more than 150 million vehicles on the road today. It is used by the top seven automotive Tier 1s, and by 45+ OEMs including Audi, BMW, Ford, GM, Honda, Hyundai, Jaguar Land Rover, KIA, Maserati, Mercedes-Benz, Porsche, Toyota, and Volkswagen.

Is there anything else that you would like to share about Blackberry Labs?

BlackBerry is committed to constant and consistent innovation; it's at the forefront of everything we do. But we also have a unique legacy as one of the pioneers of mobile-based security and, beyond that, of the idea of truly secure devices, endpoints, and communications. The lessons we learned over the past decades, as well as the technology we developed, will be instrumental in helping us to create a new standard for privacy and security as the tsunami of connected devices enters the IoT. Much of what BlackBerry has done in the past is re-emerging in front of us, and we're one of the only companies prioritizing a fundamental belief that all users deserve solutions that allow them to own their data and secure their communications; it's baked into our entire development pipeline and is one of our key differentiators. BlackBerry Labs is combining this history with new technology innovations to address the rapidly expanding landscape of mobile and connected endpoints, including vehicles, and increased security threats. Through our strong partnerships with BlackBerry business units we are creating new features, products, and services to deliver value to both new and existing customers.

Thank you for the wonderful interview and for your extensive responses. It's clear to me that BlackBerry is at the forefront of technology and its best days are still ahead. Readers who wish to learn more should visit the BlackBerry website.

Excerpt from:

AI Experts Rank Deepfakes and 19 Other AI-Based Crimes By Danger Level - Unite.AI

AI Tool Created to Study the Universe, Unlock the Mysteries of Dark Energy – Newsweek

An artificial intelligence tool has been developed to help predict the structure of the universe and aid research into the mysteries of dark energy and dark matter.

Researchers in Japan used two of the world's fastest astrophysical simulation supercomputers, known as ATERUI and ATERUI II, to create an aptly named "Dark Emulator" tool, which is able to ingest vast quantities of data and produce analysis of the universe in seconds.

The AI could play a role in studying the nature of dark energy, which seems to make up a large amount of the universe but remains an enigma.

The team noted how, when observed from a distance, the universe appears to consist of clusters of galaxies and massive voids that seem to be empty.

But as noted by NASA, leading models of the universe indicate it is made of entities that cannot be seen. Dark matter is suspected of helping to hold galaxy clusters in place gravitationally, while dark energy is believed to play a role in how the universe is expanding.

According to the researchers responsible for Dark Emulator, the AI tool is able to study possibilities about the "origin of cosmic structures" and how dark matter distribution may have changed over time, using data from some of the top observational surveys conducted about space.

"We built an extraordinarily large database using a supercomputer, which took us three years to finish, but now we can recreate it on a laptop in a matter of seconds," said Associate Prof. Takahiro Nishimichi, of the Yukawa Institute for Theoretical Physics.

"Using this result, I hope we can work our way towards uncovering the greatest mystery of modern physics, which is to uncover what dark energy is. I also think this method we've developed will be useful in other fields such as natural sciences or social sciences."

Nishimichi added: "I feel like there is great potential in data science."

The teams, which included experts from the Kavli Institute for the Physics and Mathematics of the Universe and the National Astronomical Observatory of Japan, said in a media release this week that Dark Emulator had already shown promising results during extensive tests.

In seconds, the tool predicted some of the effects and patterns found in previous research projects, including the Hyper Suprime-Cam Survey and Sloan Digital Sky Survey. The emulator "learns" from huge quantities of data and "guesses outcomes for new sets of characteristics."

As with all AI tools, data is key. The scientists said the supercomputers have essentially created "hundreds of virtual universes" to play with, and Dark Emulator predicts the outcome of new characteristics based on data, without having to start new simulations every time.
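The emulator pattern described above is simple to sketch: run the expensive simulation once over a grid of parameters, store the results, and answer new queries by interpolation rather than re-simulation. The toy below uses inverse-distance weighting and an invented stand-in for the simulation; Dark Emulator's actual regression machinery is far more sophisticated.

```python
# Toy emulator: precompute "simulations" over a parameter grid, then
# predict outputs for new parameters instantly by inverse-distance
# weighting. The analytic stand-in function and parameter names
# (omega_m, sigma_8) are for illustration only.

def expensive_simulation(params):
    # Stand-in for a supercomputer run (here: a cheap analytic function).
    omega_m, sigma_8 = params
    return omega_m * sigma_8 ** 2

# The "three years on a supercomputer" step: a grid of stored runs.
training = [((om, s8), expensive_simulation((om, s8)))
            for om in (0.2, 0.3, 0.4) for s8 in (0.7, 0.8, 0.9)]

def emulate(params, table=training, power=2.0):
    """Predict the simulation output for new params without simulating."""
    num = den = 0.0
    for p, value in table:
        d2 = sum((a - b) ** 2 for a, b in zip(params, p))
        if d2 == 0:
            return value  # exact match with a stored run
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

print(round(emulate((0.3, 0.8)), 4))    # 0.192, an exact stored run
print(round(emulate((0.25, 0.75)), 4))  # an interpolated estimate
```

The payoff is exactly the one Nishimichi describes: the heavy computation is paid once when the table is built, and every later query (here, any new parameter pair) costs only a cheap lookup-and-average on a laptop.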

Running simulations through a supercomputer without the AI would take days, researchers noted. Details of the initial study were published in The Astrophysical Journal last October. The team said they hope to input data from upcoming space surveys throughout the next decade.

While work on this one study remains ongoing, there is little argument within the scientific community that understanding dark energy remains a key objective.

"Determining the nature of dark energy [and] its possible history over cosmic time is perhaps the most important quest of astronomy for the next decade and lies at the intersection of cosmology, astrophysics, and fundamental physics," NASA says in a fact-sheet on its website.

See the original post here:

AI Tool Created to Study the Universe, Unlock the Mysteries of Dark Energy - Newsweek

How Google And Amazon Are Torpedoing The Retail Industry With Data, AI And Advertising – Forbes


(Note: After an award-winning career in the media business covering the tech industry, Bob Evans was VP of Strategic Communications at SAP in 2011, and Chief Communications Officer at Oracle from 2012 to 2016. He now runs his own firm, Evans Strategic ...

Continue reading here:

How Google And Amazon Are Torpedoing The Retail Industry With Data, AI And Advertising - Forbes

Dynatrace Drives Digital Innovation With AI Virtual Assistant – Forbes


Innovation in the white-hot digital performance management (DPM) market continues to accelerate, and it was clear from this week's Perform conference in Las Vegas that Dynatrace is setting the pace. In fact, Dynatrace's innovations are so cutting-edge ...

Here is the original post:

Dynatrace Drives Digital Innovation With AI Virtual Assistant - Forbes