Air Force issues strategy for artificial intelligence – FedScoop

Written by Billy Mitchell Sep 19, 2019 | FEDSCOOP

The Air Force has publicly released its strategy for artificial intelligence, building on the ongoing work at the Pentagon level.

The strategy is meant to be an annex to the Department of Defense's AI strategy in support of its Joint AI Center. It will serve as a mechanism to align the Air Force with the larger AI efforts across the department and leverage the JAIC's progress as an AI center of excellence.

The technology is "crucial to fielding tomorrow's Air Force faster and smarter, executing multi-domain operations in the high-end fight, confronting threats below the level of open conflict and partnering with our allies around the globe," write acting Secretary Matthew Donovan and Chief of Staff Gen. David Goldfein in the introduction to the dual-signed strategy. AI, they say, will "underpin our ability to compete, deter and win" across all five of the Air Force's missions: air and space superiority; global strike capability; rapid global mobility; intelligence, surveillance and reconnaissance; and command and control.

The goal is to provide "fundamental principles, enabling functions, and objectives necessary to effectively manage, maneuver, and lead in the digital age," the document says. The strategy is framed around five driving focus areas.

Each of those areas also has corresponding lines of effort and objectives that go into more detail on what the service hopes to accomplish to support the adoption of AI in the near term.

"Every Airman in the field, staff member at higher headquarters, or unit working with an industry partner has a responsibility in transforming our five core missions through the use of AI," the strategy says. "The Air Force cannot effectively execute missions in today's complex security environment without embracing these technologies and delivering capabilities to our Airmen."


An AI startup tries to take better pictures of the heart – STAT

Let's assume you are not an expert highly trained in medical imaging. And let's assume you were invited one day to try out a new technology for heart ultrasounds, diagnostic tools that are notoriously difficult to use because of the chest wall and because some shots must be made while the heart is in motion.

Could you do it?

Maybe. When I was given the shot on a recent day, I was able to take the ultrasound in a matter of minutes with the help of software developed by a San Francisco-based startup called Caption Health. The software told me how to hold the ultrasound probe against the ribs of a model who had been hired for the purpose of my visit, and it knew on its own when to snap the image. It was a little like having Richard Avedon's knowledge of photography uploaded into the guts of my iPhone camera.


You can see the image I took of the parasternal long axis view of the heart pumping at the top of this page.

If the technology holds up, Caption, until recently called Bay Labs, could succeed in making heart sonograms easier to obtain. It's already impressed some in the life sciences. Among them is health care executive Andy Page, who spent four years as Anne Wojcicki's right-hand man at 23andMe and a year as the president and chief financial officer of the digital health startup Livongo. He was introduced to Caption Health last fall by one of its investors, the billionaire Vinod Khosla, and has since chosen to become its chief executive.

"I was interested in how AI could impact health care," Page told STAT. "Knowing it was a trend that was coming, my thought was that to really impact health care, the AI implementation would have to be straightforward, understandable, practical, trusted. And that's exactly what the company was doing."

The use of AI in ultrasound is becoming a hot area. Butterfly Network, which early this year launched a handheld ultrasound device that is much cheaper than competitors', is also working on AI. So is Ultromics, based in London.

I had used Butterfly's technology two years ago to take images of my carotid artery. The experience of using Caption's was similar in many ways, but it was obvious that the images I captured with the latter technology were harder-to-get shots.

"The word revolutionary is probably overused a lot these days with a lot of the tech things we have coming out, but this has the potential to really change how we're treating our patients in the not-distant future," said Dr. Patrick McCarthy, the executive director of the Northwestern Bluhm Cardiovascular Institute, who was the principal investigator of a study of Caption's AI but said he has no financial relationship with the company. McCarthy said he thinks the AI could democratize heart ultrasound by increasing the number of health care professionals who can give the test, meaning that more patients who should have it will.

Caption was founded in 2013 by Charles Cadieu, its president, and Kilian Koepsell, its chief technology officer. Cadieu spent his early adult life moving between the Massachusetts Institute of Technology, from which he has a master's degree in engineering, and the University of California, Berkeley, where he received a Ph.D. in neuroscience. "I'm kind of the planning/thinker/architect and Kilian is the tuned-in laser beam to get things done," said Cadieu.

Cadieu and Koepsell both wound up on the founding team at IQ Engines, a company that was involved in using deep learning to identify images. After it was sold to Yahoo in 2013, the pair started working on the idea that deep learning was ready to be applied to medicine. "I was always inspired by applying science to medicine," said Koepsell, who grew up in a family of doctors. According to family legend, he said, his great-grandfather was present at the lecture where the use of X-rays was demonstrated for the first time.

Ultrasound is particularly suited to AI, not just for interpreting the resulting images, but also for tackling a more immediate challenge: getting the images in the first place.

"If you don't do these every day, you get hesitant about, 'What am I looking at?'" said Dr. Mark Schneider, chair of the department of anesthesiology and director of applied innovation at Christiana Care in Wilmington, Del. "And then you get hesitant to use it."

"Right now, the image quality that gets taken of patients is all over the place," said Dr. Arun Nagdev, director of point-of-care ultrasound at Highland General Hospital in Oakland, Calif. The ability to obtain that image is crucial, Nagdev said. Once novice users can use the technology, he foresees "hockey stick" growth in ultrasound.

Page said he thinks of the technology under development as a co-pilot that can assist doctors who have trouble getting particular scans, as well as those who have not used ultrasound much before, a use that could expand to hospitalists, who focus on hospitalized patients, as well as anesthesiologists and nurses.

Caption Health provided me with unpublished data from a study in which eight nurses with no previous experience in cardiac ultrasound performed four different types of scans on 240 patients.

The goal of the study was to show the test was 80% accurate. For assessing patients' left ventricular size and function, as well as pericardial effusion, or fluid around the heart, 240 scans were performed for each measure, and 237 of them, or 98.8%, were of sufficient quality, according to a panel of five cardiologists. For images of the right ventricle, which is harder to see, the results were a bit worse: 222 images, or 92.5%, were of adequate quality. Eric Topol, the director and founder of the Scripps Research Translational Institute, commented that this was still a small number of samples for AI work; Caption Health said it respectfully disagrees because the study was prospective.

Caption Health will need to partner with device makers to bring its technology to market; it does not make ultrasound equipment. It is currently partnered with one ultrasound manufacturer, Terason. Caption has received breakthrough device status from the Food and Drug Administration, which could expedite its regulatory review. Caption said it has raised $18 million to date; a recent valuation is not available.

Page is nothing if not confident. "Nothing else exists in the market of this nature," he said.


Intel Edged Aside on Rise of AI, Cloud Computing – Investopedia


Take AI: While the technology is just taking off, it's expected to be a huge market, with computers doing everyday tasks quicker and better than humans. Incorporating AI into existing systems and programs, as well as developing new ones, requires complex ...


Is AI making credit scores better, or more confusing? – American Banker

A consumer's credit score used to be a commonly understood number: the time-honored FICO score that banks all used in their underwriting. But banks increasingly are relying on dozens of scores that reflect a variety of data sources, analytics and use of artificial intelligence technology.

The use of AI offers lenders the ability to get a precise look into someone's creditworthiness and score those previously deemed "unscorable."

But such scoring techniques also bring uncertainty: What will it take to convince regulators that AI-based credit scores are not a black box? How do you get a system trained to look at the interactions of many variables to produce one clear reason for declining credit? Data scientists at credit bureaus and banks are working to find answers to questions like these.

The benefits of AI-powered credit scores

There are two main reasons to use artificial intelligence to derive a credit score. One is to assess creditworthiness more precisely. The other is to be able to consider people who might not have been able to get a credit score in the past, or who may have been too hastily rejected by a traditional logistic regression-based score, that is, a method that looks at certain data points from consumers' credit histories to calculate the odds that they will repay.
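For illustration, the traditional approach can be sketched in a few lines: a logistic regression score is just a sigmoid applied to a weighted sum of credit-history features. The feature names and weights below are invented for the example and bear no relation to any real scorecard.

```python
import math

# Hypothetical weights for three credit-history features
# (made-up values for illustration only, not a real scorecard).
WEIGHTS = {
    "on_time_payment_rate": 4.0,   # share of payments made on time (0-1)
    "utilization": -2.5,           # share of available credit in use (0-1)
    "chargeoffs": -1.2,            # count of charged-off accounts
}
INTERCEPT = -1.0

def repayment_probability(features):
    """Logistic regression: sigmoid of a weighted sum of the features."""
    z = INTERCEPT + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

good = repayment_probability({"on_time_payment_rate": 0.98, "utilization": 0.2, "chargeoffs": 0})
risky = repayment_probability({"on_time_payment_rate": 0.60, "utilization": 0.9, "chargeoffs": 3})
print(round(good, 3), round(risky, 3))
```

A borrower with a clean history scores far higher than one with chargeoffs, but every feature contributes independently, which is exactly the limitation the AI-based approaches described below try to get past.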


Machine learning can take a more nuanced look at consumer behavior.

"A neural network more closely mimics the way humans think and reason, whereas linear models are more dogmatic; you're imposing structure on data as opposed to letting the data talk to you," said Eric VonDohlen, chief analytics officer at the online lender Elevate. The more complex reasoning of artificial intelligence can find things in the data that wouldn't be apparent otherwise.

And instead of considering one variable at a time, an artificial intelligence engine can look at interactions between multiple variables.

"It's harder for the workhorse, logistic regression, to do that," said Dr. Stephen Coggeshall, chief analytics and science officer at ID Analytics. "You have to do a lot of data preprocessing using expert knowledge to even attempt to find those nonlinear interactions."
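Coggeshall's point about preprocessing can be made concrete. For a linear model to capture a nonlinear interaction, an analyst has to hand-craft a cross term; the feature names below are hypothetical and only illustrate that manual step.

```python
def add_interaction_features(record):
    """Hand-engineer a cross term so a linear model can use an interaction.

    Hypothetical example: high utilization may be far riskier when it
    coincides with recent missed payments than either signal alone suggests.
    """
    enriched = dict(record)
    enriched["utilization_x_missed"] = (
        record["utilization"] * record["recent_missed_payments"]
    )
    return enriched

row = {"utilization": 0.9, "recent_missed_payments": 2}
print(add_interaction_features(row))
```

A tree ensemble or neural network could pick up this kind of pattern from the raw inputs on its own; with logistic regression, someone has to know in advance to add the term.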

Consumers with several chargeoffs in their histories would most likely be considered high-risk borrowers by most traditional models. But an AI engine might perceive mitigating variables: though the consumers might have skipped payments on three debts in the past 24 months, they have paid on time consistently for the past year and have successfully obtained new lines of credit.

"It looks like that bad performance or bad history is in your past," VonDohlen said. "That would be a simple example of how an AI world might help cast data in a more positive and more accurate light."

AI-based credit scoring models let Elevate make sharper predictions of credit risk, approve the right people and offer better pricing to people who deserve it, VonDohlen said.

Elevate is deploying its new, AI-based models gradually, starting with 1% of potential borrowers, testing the results, and gradually applying them to more people.

Credit bureaus are starting to adopt AI in their credit scores, too.

Equifax calls the machine-learning software that it uses in credit scores NeuroDecision Technology.

Technologies like Hadoop, which allow massive amounts of data to be stored and analyzed quickly, are making AI-based credit scores possible, said Peter Maynard, senior vice president of global analytics at Equifax.

"Before, if you gave me a million observations, it would take a week to sort through it," Maynard said.

ID Analytics uses what it calls convolutional neural nets, a flavor of deep learning, in its fraud and credit scores, Coggeshall said. For its Credit Optics Full Spectrum credit score, AI engines look at consumer payment data from wireless, utility and marketplace loan providers, to score consumers who have thin or no credit bureau files, including young people and new credit seekers.

FICO also offers a score called XD that's based on telephone and utility bill payments and property records. It gives high marks to people who have faithfully paid their phone, oil and gas bills and who have not moved around too much.

Experian is taking a more cautious approach. It uses traditional logistic regression methods for its credit scores, but in its labs it experiments with machine learning.

When the technology seems to make a significant difference in performance, the company will provide credit scores based on machine learning, said Eric Haller, executive vice president of Experian's Global DataLabs. For now, he sees machine learning giving only a nominal lift in results.

"The opportunity is not building the next VantageScore, because believe it or not, those scores work really well," he said.

To let clients experiment with machine learning, Experian offers an analytical sandbox with its credit data loaded into it.

"They can load their own data in and we'll sync it up with historical credit archive data, and we've overlaid it with a set of machine-learning tools," Haller said. Most large financial institutions are using the sandbox today, he said.

TransUnion, the other major credit rating agency, did not respond to requests for an interview.

Now for the confusing part

The cons of AI-enhanced credit scores include the risk that the full underwriting process will be hidden from consumers and that the practice could raise transparency questions among regulators.

Last month, the Consumer Financial Protection Bureau imposed $23 million in fines on TransUnion and Equifax, noting they claimed banks use their scores to determine creditworthiness when that isn't always the case.

"In their advertising, TransUnion and Equifax falsely represented that the credit scores they marketed and provided to consumers were the same scores lenders typically use to make credit decisions," the CFPB said in a press release announcing the fines. "In fact, the scores sold by TransUnion and Equifax were not typically used by lenders to make those decisions."

Some say the regulator was misguided.

"Both scores being sold by TransUnion and Equifax, VantageScore and the Equifax RiskScore, are real credit scores that are Equal Credit Opportunity Act-compliant, are commercially available to lenders and are, in fact, used by lenders," said credit expert John Ulzheimer.

Equifax says it ran all of its AI-based scoring technology past the OCC, Fed and CFPB and got a positive response. ID Analytics said it worked closely with lawyers, compliance officers and regulators to assure the technology complied with various lending rules.

Another challenge to using artificial intelligence, specifically neural networks, in credit scores and models is that it's harder to provide the needed "reason code" to borrowers, the explanation of why they were denied credit.

Concerns about the reason code are the main reason many businesses dont use nonlinear machine-learning models for credit scores yet.

"A lot of the confusion and heartburn is around, 'How do you boil an extremely data-rich learning process into a marginal rationale for declining a loan?'" VonDohlen said.

However, neural networks, which are essentially designed to think like a brain, can also be used to help find the one variable that represents the greatest risk.

"It's almost never the case that you would decline someone for a rat's nest of variable relationships," VonDohlen said. The reasons for credit denial, he said, "are almost always very clear."

Equifax has developed a proprietary algorithm that can generate reason codes for consumers, Maynard said.
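The mechanics of reason codes are easiest to see with a simple points-based scorecard. The sketch below is a generic illustration with invented point values, not Equifax's proprietary algorithm: it reports the features that cost the applicant the most points relative to the best attainable value.

```python
# Maximum points attainable per feature (invented values for illustration).
MAX_POINTS = {"payment_history": 40, "utilization": 30, "credit_age": 20, "inquiries": 10}

def reason_codes(earned_points, top_n=2):
    """Rank features by points lost versus the best attainable value."""
    lost = {name: MAX_POINTS[name] - earned_points[name] for name in MAX_POINTS}
    return sorted(lost, key=lost.get, reverse=True)[:top_n]

applicant = {"payment_history": 18, "utilization": 25, "credit_age": 19, "inquiries": 4}
print(reason_codes(applicant))
```

A linear scorecard decomposes this way directly; a neural network has no such built-in decomposition, which is why the reason code is the sticking point for nonlinear models.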

Experian is also working on techniques that would make AI credit-based score decisions more explainable and auditor friendly.

"We're not operating under any assumption that a black box credit scoring model would even work or be accepted in the market," Haller said. "We are 100% focused on how do we bridge the gap such that we can bring better performance to models, but still maintain the same integrity, where they can be explained to the OCC and our clients are comfortable with understanding how the models are working and the results they're getting."

But in the end, consumers wont be confused, according to Ulzheimer.

"Regardless of how many scoring systems are being used, they are all based on three credit reports," he said. "If you've got three great credit reports, then every single scoring system being used is going to yield a high score."

Penny Crosman is Editor at Large at American Banker.


Jewelers Mutual Teams with H2O.ai to Drive AI Innovation in the Jewelry Insurance Business – PRNewswire

MOUNTAIN VIEW, Calif., Dec. 17, 2019 /PRNewswire/ -- H2O.ai, the open source leader in artificial intelligence (AI) and machine learning (ML), today announced that Jewelers Mutual, one of the United States' and Canada's most established and trusted providers of affordable and comprehensive insurance for jewelers and consumers, has chosen its award-winning AI platforms to provide AI and machine learning capabilities to better serve its customers. As a leader in driving customer-focused innovation and providing the latest technology to a long-standing industry, Jewelers Mutual is using H2O-3 open source and H2O Driverless AI to deliver exceptional customer experiences, prevent losses, and provide better protection and policies for both jewelers and customers.

"We have been in the jewelry insurance business for over 100 years, and our leadership team has been looking to raise the bar for technology-driven innovation in the industry. After two years of experimentation with AI and machine learning, we came to place a high value on model transparency and explainability. Our business end-users demanded it. The initial AI platform we used was lacking in this area so we began searching for a new platform," said Andrew Langsner, Senior Manager, Embedded Analytics at Jewelers Mutual. "After evaluating several solutions in our search for the ideal AI platform, we chose H2O.ai because it provides us with the transparency we needed into our machine learning processes, much more flexibility than the other tools we evaluated, and the strongest machine learning explainability capabilities on the market. H2O.ai provides new avenues of innovation and allows us to build quality and insightful ML tools for our business stakeholders."

Since working with the H2O.ai platforms, the company has been able to build unique models and recalibrate its rating systems based on the additional customer data generated, making its insurance rates more competitive. In addition to its underwriting, customer experience and claims use cases, the company is also applying analytics to identify ways to improve security, particularly for commercial customers. For example, during recent California wildfires and power outages, the team was able to identify customers that would need additional physical security personnel to protect their inventory. These critical use cases have streamlined the data science process and helped the team save valuable time and resources.

"Jewelers Mutual is using H2O Driverless AI to automatically build machine learning models for predicting customer lifetime value and prevent high severity losses and suspicious claims, amongst other high value use cases," said Sri Ambati, Founder and CEO of H2O.ai. "Jewelers Mutual is the leader in jewelry and insurance, has been using data and predictive analytics to improve its customer experiences and is not new to AutoML. H2O Driverless AI provides an easy on-ramp to operationalize and explain its machine learning models, increasing the productivity of its data science teams. We are thrilled to be its team's chosen trusted partner in the AI transformation journey and hope to make them an AI company, further establishing their technological leadership in the space."

H2O-3, A Leading Open Source AI Platform

H2O-3 is the leading open source, scalable and distributed in-memory AI and machine learning platform. H2O-3 supports the most widely used statistical and machine learning algorithms including gradient boosted machines, generalized linear models, deep learning and more. H2O-3 also has an industry-leading AutoML functionality that automatically runs through all the algorithms and their hyper-parameters to produce a leaderboard of the best models.
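The AutoML leaderboard idea, training many candidate models and ranking them by a validation metric, can be sketched generically. The toy below illustrates the concept only; it is not H2O-3's actual API, and the candidate "models" are stand-ins for the real algorithms an AutoML run would train and tune.

```python
# Toy validation data with a roughly linear relationship: y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

# Candidate "models": simple prediction rules standing in for real
# algorithms (GBMs, GLMs, deep learning) that AutoML would fit.
candidates = {
    "mean_baseline": lambda x: 10.0,
    "linear_fit": lambda x: 2 * x + 1,
    "biased_linear": lambda x: 2 * x,
}

def mean_squared_error(model):
    """Score a candidate on the held-out data."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# The "leaderboard": every candidate ranked by validation error, best first.
leaderboard = sorted(candidates, key=lambda name: mean_squared_error(candidates[name]))
print(leaderboard)
```

The real value of the approach is that the search over algorithms and hyper-parameters is automated, so the ranking reflects many more candidates than anyone would try by hand.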

H2O Driverless AI: AI to do AI

H2O Driverless AI empowers data scientists to work on projects faster and more efficiently by using automation and state-of-the-art machine learning to accomplish tasks in hours instead of weeks and months. By delivering automatic feature engineering, model validation, model tuning, model selection and deployment, machine learning interpretability, time-series and automatic pipeline generation for model scoring, H2O Driverless AI provides companies with a data science platform that addresses the needs of a variety of use cases for every enterprise in every industry.

About H2O.ai

H2O.ai is the open source leader in AI and automatic machine learning with a mission to democratize AI for everyone. H2O.ai is transforming the use of AI to empower every company to be an AI company in financial services, insurance, healthcare, telco, retail, pharmaceuticals and marketing. H2O.ai is driving an open AI movement with H2O, which is used by more than 18,000 companies and hundreds of thousands of data scientists. H2O Driverless AI, an award-winning and industry-leading automatic machine learning platform for the enterprise, is helping data scientists across the world in every industry be more productive and deploy models in a faster, easier and cheaper way. H2O.ai partners with leading technology companies such as NVIDIA, IBM, AWS, Intel, Microsoft Azure and Google Cloud Platform and is proud of its growing customer base which includes Capital One, Nationwide Insurance, Walgreens and MarketAxess. H2O.ai believes in AI4Good with support for wildlife conservation and AI for academics. Learn more at http://www.H2O.ai.

About Jewelers Mutual

Jewelers Mutual Group was founded in 1913 by a group of Wisconsin jewelers to meet their unique insurance needs. Today, Jewelers Mutual offers products and services nationwide and throughout Canada that enable jewelry businesses to run safe, secure, and successful operations. Consumers also put their trust in Jewelers Mutual to protect their personal jewelry and the special moments it represents. The group's strong financial position is reflected in its 32 consecutive ratings of "A+ Superior" from A.M. Best Company. To learn more, visit JewelersMutual.com.

Media Contact: Russ Castronovo, press@H2O.ai, 408-391-9632

SOURCE H2O.ai



Cerner AI expert discusses important ‘misconceptions’ about the technology – Healthcare IT News

Dr. Tanuj Gupta, vice president at Cerner Intelligence, is an expert in healthcare artificial intelligence and machine learning. Part of his job is explaining, from his expert point of view, what he considers misconceptions about AI, especially misconceptions in healthcare.

In this interview with Healthcare IT News, Gupta discusses what he says are popular misconceptions about gender and racial bias in algorithms, AI replacing clinicians, and the regulation of AI in healthcare.

Q. In general terms, why do you think there are misconceptions about AI in healthcare, and why do they persist?

A. I've given more than 100 presentations on AI and ML in the past year. There's no doubt these technologies are hot topics in healthcare that usher in great hope for the advancement of our industry.

While they have the potential to transform patient care, quality and outcomes, there also are concerns about the negative impact this technology could have on human interaction, as well as the burden they could place on clinicians and health systems.

Q. Should we be concerned about gender and racial bias in ML algorithms?

A. Traditionally, healthcare providers consider a patient's unique situation when making decisions, along with information sources, such as their clinical training and experiences, as well as published medical research.

Now, with ML, we can be more efficient and improve our ability to examine large amounts of data, flag potential problems and suggest next steps for treatment. While this technology is promising, there are some risks. Although AI and ML are just tools, they have many points of entry that are vulnerable to bias, from inception to end use.

As ML learns and adapts, it's vulnerable to potentially biased input and patterns. Existing prejudices especially if they're unknown and data that reflects societal or historical inequities can result in bias being baked into the data that's used to train an algorithm or ML model to predict outcomes. If not identified and mitigated, clinical decision-making based on bias could negatively impact patient care and outcomes. When bias is introduced into an algorithm, certain groups can be targeted unintentionally.

Gender and racial biases have been identified in commercial facial-recognition systems, which are known to falsely identify Black and Asian faces 10 to 100 times more than Caucasian faces, and have more difficulty identifying women than men. Bias is also seen in natural language processing that identifies topic, opinion and emotion.

If the systems in which our AI and ML tools are developed or implemented are biased, then their resulting health outcomes can be biased, which can perpetuate health disparities. While breaking down systemic bias can be challenging, it's important that we do all we can to identify and correct it in all its manifestations. This is the only way we can optimize AI and ML in healthcare and ensure the highest quality of patient experience.

Q. Could AI replace clinicians?

A. The short answer is no. AI and ML will not replace clinician judgment. Providers will always have to be involved in the decision-making process, because we hold them accountable for patient care and outcomes.

We already have some successful guardrails in other areas of healthcare that we'll likely evolve to for AI and ML. For example, one parallel is verbal orders. If a doctor gives a nurse a verbal order for a medication, the nurse repeats it back to them before entering it in the chart, and the doctor must sign off on it. If that medication ends up causing harm to the patient, the doctor can't say the nurse is at fault.

Additionally, any standing protocol orders that a hospital wants to institute must be approved by a committee of physicians who then have a regular review period to ensure the protocols are still safe and effective. That way, if the nurse executes a protocol order and there's a patient-safety issue, that medical committee is responsible and accountable not the nurse.

The same thing is going to be there with AI and ML algorithms. There won't be an algorithm that arbitrarily runs on a tool or machine, treating a patient without doctor oversight.

If we throw a bunch of algorithms into the electronic health record that say, "treat the patient this way" or "diagnose him with this," we'll have to hold the clinician and possibly the algorithm maker if it becomes regulated by the U.S. Food and Drug Administration accountable for the outcomes. I can't imagine a situation where that would change.

Clinicians can use,and are using, AI and ML to improve care and maybe make healthcare even more human than it is today. AI and ML could also allow physicians to enhance the quality of time spent with patients.

Bottom line, I think we as the healthcare industry should embrace AI and ML technology. It won't replace us; it will just become a new and effective toolset to use with our patients. And using this technology responsibly means always staying on top of any potential patient safety risks.

Q. What should we know about the regulation of AI in healthcare?

A. AI introduces some important concerns around data ownership, safety and security. Without a standard for how to handle these issues, there's the potential to cause harm, either to the healthcare system or to the individual patient.

For these reasons, important regulations should be expected. The pharmaceutical, clinical treatment and medical device industries provide a precedent for how to protect data rights, privacy, and security, and drive innovation in an AI-empowered healthcare system.

Let's start with data rights. When people use an at-home DNA testing kit, they likely gave broad consent for their data to be used for research purposes, as defined by the U.S. Department of Health and Human Services in a 2017 guidance document.

While that guidance establishes rules for giving consent, it also creates the process for withdrawing consent. Handling consent in an AI-empowered healthcare system may be a challenge, but there's precedent for thinking through this issue to both protect rights and drive innovation.

With regard to patient safety concerns, the Food and Drug Administration has published two documents to address the issue: Draft Guidance on Clinical Decision Support Software and Draft Guidance on Software as a Medical Device. The first guidance sets a framework for determining if an ML algorithm is a medical device.

Once you've determined your ML algorithm is in fact a device, the second guidance provides "good machine learning practices." Similar FDA regulations on diagnostics and therapeutics have kept us safe from harm without getting in the way of innovation. We should expect the same outcome for AI and ML in healthcare.

Finally, let's look at data security and privacy. The industry wants to protect data privacy while unlocking more value in healthcare. For example, HHS has long relied on the Health Insurance Portability and Accountability Act, which was signed into law in 1996.

While HIPAA is designed to safeguard protected health information, growing innovation in healthcare, particularly regarding privacy, led to HHS' recently issued proposed rule to prevent information blocking and encourage healthcare innovation.

It's safe to conclude that AI and ML in healthcare will be regulated. But that doesn't mean these tools won't be useful. In fact, we should expect the continued growth of AI applications for healthcare as more uses and benefits of the technology surface.


Source: Cerner AI expert discusses important 'misconceptions' about the technology - Healthcare IT News

Washington Must Bet Big on AI or Lose Its Global Clout – WIRED

The US government must spend $25 billion on artificial intelligence research by 2025, stem the loss of foreign AI talent, and find new ways to prevent critical AI technology from being stolen and exported, according to a policy report issued Tuesday. Otherwise it risks falling behind China and losing its standing on the world stage.

The report, from the Center for New American Security (CNAS), is the latest to highlight the importance of AI to the future of the US. It argues that the technology will define economic, military, and geopolitical power in coming decades.

Advanced technologies, including AI, 5G wireless services, and quantum computing, are already at the center of an emerging technological cold war between the US and China. The Trump administration has declared AI a national priority, and it has enacted policies, such as technology export controls, designed to limit China's progress in AI and related areas.

The CNAS report calls for a broader national AI strategy and a level of commitment reminiscent of the Apollo program. "If the United States wants to continue to be the world leader, not just in technology but in political power and being able to promote democracy and human rights, that calls for this type of effort," says Martijn Rasser, a senior fellow at CNAS and the lead author of the report.

Rasser and his coauthors believe AI will be as pervasive and transformative as software itself has been. This means it will be of critical importance to economic success as well as military might and global influence. Rasser argues that $25 billion over five years is achievable, and notes that it would constitute less than 19 percent of total federal R&D in the 2020 budget.

"We're back in an era of great power competition, and technology is at the center," Rasser says. "And the nation that leads, not just in artificial intelligence but in technology across the board, will truly dominate the 21st century."

Over the past three years, the White House's Office of Science and Technology Policy, which shapes the administration's technology strategy, has highlighted the importance of AI and called for federal funds to be redirected towards its development. In its 2020 budget plan, the administration has proposed $5 billion in funding for AI research and development. But officials have generally maintained that the private sector should also play a primary role in investing in and developing AI.

Rasser says this is a mistake, because private companies do not invest in the kind of fundamental research that serves as a foundation for big technological advances. "Investments that the US government made in the '50s, '60s, and '70s propagated the technologies that the American economy, and the global economy, are built on," he says.

Artificial intelligence could prove particularly important to America's military standing. Last month, a report to Congress by the National Security Commission on Artificial Intelligence declared that AI will be critical to US national security. Collaboration between the government and the US tech sector would be vital, it advised.

"Both the Russians and the Chinese have concluded that the way to leapfrog the US is with AI," says Bob Work, a distinguished senior fellow at CNAS who served as deputy secretary of defense under Presidents Obama and Trump. Work says the US needs to convince the public that it doesn't intend to develop lethal autonomous weapons, only technology that would counter the work Russia and China are doing.

In addition to calling for new funding, the CNAS report argues that a different attitude towards international talent is needed. It recommends that the US attract and retain more foreign scientists by raising the number of H1-B visas and removing the cap for people with advanced degrees. "You want these people to live, work, and stay in the United States," Rasser says. The report suggests early vetting of applications at foreign embassies to identify potential security risks.

The administration's current immigration strategy arguably undermines US competitiveness. Trump's travel ban, which prevents anyone from Iran, Iraq, Somalia, Libya, Syria, Yemen, Chad, North Korea, and Venezuela from coming to the United States, has contributed to a culture unwelcoming to foreign scientists, prompting some to study and work elsewhere. A more rigorous visa vetting process has also reduced the number of Chinese scientists able to enter the country.


AI News | VentureBeat


REVEALED: AI is turning RACIST as it learns from humans – Express.co.uk


In parts of the US, when a suspect is taken in for questioning, they are given a computerised risk assessment which works out the likelihood of the person reoffending.

A judge can then use this data when delivering a verdict.

However, an investigation has revealed that the artificial intelligence behind the software exhibits racist tendencies.

Reporters from ProPublica obtained more than 7,000 test results from Florida in 2013 and 2014 and analysed the reoffending rate among the individuals.


The suspects are asked a total of 137 questions by the AI system Correctional Offender Management Profiling for Alternative Sanctions (Compas), including questions such as "Was one of your parents ever sent to jail or prison?" and "How many of your friends/acquaintances are taking drugs illegally?", with the computer generating its results at the end.

Overall, the AI system claimed black people (45 per cent) were almost twice as likely as white people (24 per cent) to reoffend.
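The kind of disparity ProPublica measured can be illustrated with a tiny sketch: compare the false positive rate (non-reoffenders flagged as high risk) across groups. The records below are made up for illustration, not ProPublica's data, and the groups are labeled generically:

```python
from collections import defaultdict

# Hypothetical records: (group, predicted_high_risk, actually_reoffended).
# Illustrative only; not drawn from the Compas dataset.
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, True),
]

def false_positive_rate(rows):
    """Share of non-reoffenders the tool nonetheless flagged as high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

for group, rows in sorted(by_group.items()):
    print(group, round(false_positive_rate(rows), 2))  # prints: A 0.67 / B 0.0
```

Even when overall accuracy looks similar, a gap like this means one group's non-reoffenders are far more likely to be wrongly labeled high risk, which is the core of the bias finding.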

In one example outlined by ProPublica, risk scores were provided for a black suspect and a white suspect, both of whom faced drug possession charges.


The white suspect had prior offences of attempted burglary and the black suspect had resisting arrest.

Seemingly giving no indication as to why, the black suspect was given a higher chance of reoffending and the white suspect was considered low risk.

But, over the next two years, the black suspect stayed clear of illegal activity and the white suspect was arrested three more times for drug possession.

However, researchers warn the problem does not lie with robots, but with the human race as AI uses machine learning algorithms to pick up on human traits.

Joanna Bryson, a researcher at the University of Bath, told the Guardian: "People expected AI to be unbiased; that's just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things."


This is not an isolated incident either.

Microsoft's TayTweets AI chatbot, designed to learn from users, was unleashed on Twitter last year.

However, it almost instantly turned to anti-semitism and racism, tweeting: "Hitler did nothing wrong" and "Hitler was right I hate the Jews."


Observe.AI and Telarus Partner to Bring AI-Driven Speech Analytics and Quality Management Platform to Market – AiThority

Observe.AI, the leader in AI-powered speech analytics and quality management for voice customer service, announced that it has partnered with Telarus, the largest U.S.-based technology services distributor (master agent). Observe.AI's Voice AI platform will be the first of its kind in Telarus' portfolio of suppliers. With Observe.AI, top brands can unlock conversational insights across 100% of their calls and coach agents to improve the customer experience.


"Legacy speech analytics systems fail to meet today's demands because they're disconnected from agent coaching and training, take months to implement, and offer historically low transcription accuracies," said Swapnil Jain, CEO and co-founder of Observe.AI. "Our Telarus partnership allows top brands to immediately change agent behavior by analyzing 100% of calls with the best possible accuracy in contact center environments and completely automate some parts of the evaluation process."


Thousands of global contact center agents are coached with Observe.AI, which provides a detailed look at how top agents structure calls so those tactics can be replicated.

"With transcription accuracy of over 80% and an average onboarding time of three to four weeks, Observe.AI is paving the way for a faster, more accurate solution that will revolutionize the way contact centers do business," said Brandon Knight, VP of Business Development CCaaS at Telarus.



Alyce PX Platform Aims To Personalize Gifting With AI – Demand Gen Report

The Personal Experience (PX) Platform from Alyce, an AI-powered gifting service, aims to improve the sales and marketing outreach process by helping enterprise teams create personalized, one-to-one gifts that drive loyalty with buyers, prospects and partners at scale.

The PX platform is designed to leverage AI to automate the personal research process on B2B prospects and buyers to recommend gifts tailored to their interests. An invitation is sent to the recipient with a personalized note, inviting them to a branded landing page where they can accept, exchange or donate the gift.

Each gift comes with a trackable code that allows the sender to observe the recipients behavior throughout the process, tracking delivery, viewing and acceptance rates for conversion and pipeline impact measurement.
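The funnel metrics described above (delivery, viewing, acceptance) can be sketched as simple counts over an event log. The event names and structure here are hypothetical, not Alyce's actual data model:

```python
from collections import Counter

# Hypothetical event log keyed by a gift's trackable code.
events = [
    {"code": "G1", "stage": "delivered"}, {"code": "G1", "stage": "viewed"},
    {"code": "G1", "stage": "accepted"},
    {"code": "G2", "stage": "delivered"}, {"code": "G2", "stage": "viewed"},
    {"code": "G3", "stage": "delivered"},
]

def funnel_rates(events):
    """Per-stage conversion: share of delivered gifts reaching each stage."""
    stages = Counter(e["stage"] for e in events)
    delivered = stages["delivered"]
    return {s: stages[s] / delivered for s in ("delivered", "viewed", "accepted")}

rates = funnel_rates(events)
# With 3 delivered, 2 viewed, 1 accepted: viewed rate 2/3, accepted rate 1/3.
```

Rolling these rates up per campaign is what lets a sender tie gifting activity back to conversion and pipeline impact.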

The PX platform is designed for B2B marketing, sales and CS leaders at growth-oriented enterprise tech companies pursuing an account-based strategy.

The PX Platform aims to help marketers improve their sales and marketing outreach through personalized gifts tailored to a contact's personal interests, making the process hyper-personal for greater impact on the buyer.



Google launches its AI-powered jobs search engine | TechCrunch – TechCrunch

Looking for a new job is getting easier. Google today launched a new jobs search feature right on its search result pages that lets you search for jobs across virtually all of the major online job boards, including LinkedIn, Monster, WayUp, DirectEmployers, CareerBuilder and Facebook. Google will also include job listings it finds on a company's homepage.

The idea here is to give job seekers an easy way to see which jobs are available without having to go to multiple sites only to find duplicate postings and lots of irrelevant jobs.

The new feature is now available in English on desktop and mobile. All you have to type in is a query like "jobs near me" or "writing jobs" and the search results page will show you the new job search widget that lets you see a broad range of jobs. From there, you can further refine your query to only include full-time positions, for example. When you click through to get more information about a specific job, you also get to see Glassdoor and Indeed ratings for a company.

You can also filter jobs by industry, location, when they were posted, and employer. Once you find a query that works, you can also turn on notifications so you get an immediate alert when a new job is posted that matches your personalized query.

"Finding a job is like dating," Nick Zakrasek, Google's product manager for this project, told me. "Each person has a unique set of preferences and it only takes one person to fill this job."

To create this comprehensive list, Google first has to remove all of the duplicate listings that employers post to all of these job sites. Then, its machine learning-trained algorithms sift through and categorize them. These job sites often already use at least some job-specific markup to help search engines understand that something is a job posting (though often, the kind of search engine optimization that worked when Google would only show 10 blue links for this type of query now clutters up the new interface with long, highly detailed job titles, for example).
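The deduplication step described above can be sketched by keying each listing on a normalized (title, company, location) tuple and keeping the most complete copy. The field names and tie-breaking rule are our illustration, not Google's actual pipeline:

```python
def dedupe(postings):
    """Keep one posting per normalized (title, company, location) key,
    preferring the copy with the longest description."""
    best = {}
    for p in postings:
        key = tuple(p[f].strip().lower() for f in ("title", "company", "location"))
        if key not in best or len(p["description"]) > len(best[key]["description"]):
            best[key] = p
    return list(best.values())

postings = [
    {"title": "Data Analyst", "company": "Acme", "location": "NYC",
     "description": "Short."},
    {"title": "data analyst ", "company": "ACME", "location": "nyc",
     "description": "Much longer description."},
    {"title": "Chef", "company": "Bistro", "location": "NYC",
     "description": "Cook."},
]
print(len(dedupe(postings)))  # 2 unique postings
```

Preferring the longest description mirrors the article's point that Google links to the most complete posting, giving sites an incentive to publish full details.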

Once you find a job, Google will direct you to the job site to start the actual application process. For jobs that appeared on multiple sites, Google will link you to the one with the most complete job posting. "We hope this will act as an incentive for sites to share all the pertinent details in their listings for job seekers," a Google spokesperson told me.

As for the actual application process itself, Google doesn't want to get in the way here, and it's not handling any of the process after you have found a job on its service.

It's worth noting that Google doesn't try to filter jobs based on what it already knows. As Zakrasek quipped, the fact that you like to go fishing doesn't mean you are looking for a job on a fishing boat, after all.

Google is very clear about the fact that it doesn't want to directly compete with Monster, CareerBuilder and similar sites. It currently has no plans to let employers post jobs directly to its jobs search engine, for example (though that would surely be lucrative). "We want to do what we do best: search," Zakrasek said. "We want the players in the ecosystem to be more successful." Anything beyond that is not in Google's wheelhouse, he added.

Monster.com's CTO Conal Thompson echoed this in a written statement when I asked him how this cooperation with Google will change the competitive landscape for job sites. "Google's new job search product aligns with our core strategy and will allow candidates to explore jobs from across the web and refine search criteria to meet their unique needs," he wrote. "Yes, as with anything, there will be some challenges and adjustments to existing job posting sites; the biggest perhaps being for those that are currently driven by SEO."


[PULSE] Why we need diversity in AI development – HousingWire

Trust is a universal concept and, like love, can be fickle, fleeting and fuzzy. Some trust is implicit: when I board a bus, I trust the driver to drive safely. I know nothing about the driver or the bus, and I could be in a foreign country, but I trust. I am shocked if they prove me wrong.

Some trust takes work. I have learned to trust a certain hairdresser, a particular sandwich maker, my husband. This took time and effort from both parties: repeated execution toward set goals and consistently meeting them.

And of course, there are situations that require contextual trust. I trust my sandwich maker to give me a great sandwich, but can I trust him to do my eyebrows? I trust my husband, but not necessarily with cheesecake.

It is these fuzzy situations that tend to trip up even the smartest artificial intelligence (AI) workforce. We have established tried-and-true AI workflows: back-office operations, analytics and advanced computing. We have successfully tackled using AI in newer areas, such as tiered contextual responses, voice recognition, biometrics and natural language processing.

The fuzziness increases in emerging areas of AI use, including one that's especially common in mortgage banking: customer engagement using sentiment analysis and advanced contextual cues.

It's been repeatedly proven over the last few decades that if you focus an AI on a definitive task (chess, calculus, soil management) it excels. But contextual knowledge remains an enigma, mainly because of our limitations. The code we write and the logic we create are influenced by the bubbles we live in.

If you have a Google news feed or a social media account, you have experienced AI working on a definitive task: a news and media feed that provides content with similar attributes keeps you engaged.

As New York Times tech columnist Kevin Roose uncovers in his chilling "Rabbit Hole" podcast, what it does not do is provide you with diversity of thought.

The AI driving those feeds does not show you news and media that might expand your view. Nothing you see is going to challenge your beliefs. That is a critical point, because our experiences and beliefs, both firsthand and secondhand (through books, friends, movies, and feeds), all play a significant role in the knowledge we hold and impart.

The idea that diversity is needed in any AI development to ensure a roundness, a level of acceptable morality and humanity, may not be new, but it continues to be an issue.

Every iteration, from Nikon's failure to recognize Asian faces to Microsoft's disastrous troll-taught bot Tay, has widened our perspective. AI needs interaction with people, but individuals bring positive and negative experiences and beliefs to those interactions.

How do we capture the many cultural social norms of the American melting pot? We know to use larger datasets, look for fringe cases, diversify our focus groups and refine through larger and larger testing.

In mortgage banking, AI does its best work answering questions with exact answers, such as, "What's the most I can borrow if I use Fannie Mae's HomeReady?"

Where AI still falls short is in the areas where our mortgage loan originators excel: the fuzzy areas where beliefs, experience and social norms influence our financial decisions. AI can offer pros and cons to answer the question, "Is it cheaper to use Fannie Mae's HomeReady or my state bond program, or both?"

What it cannot advise on is how your relationship may change after using your father-in-law's income to qualify for HomeReady.

Ray Kurzweil waxes eloquent about human history's law of accelerating returns: advanced societies have the ability to progress faster because they are more advanced. Marty McFly was shocked when he went back 30 years, from 1985 to 1955, in the movie Back to the Future. But if he were to go back 30 years today, from 2020 to 1990, his mind would be blown. Smartphones, the internet, social media, mumble rap, K-pop: the pace of change would be even more evident.

History dictates that we are fast on our way to a truly intelligent bot. There are many schools of thought on how we will reach this goal: evolutionary algorithms, where we mimic natural selection by combining logic that we deem accurate, or perhaps self-learning algorithms that enable the AI to code and improve its own architecture. Whatever the path, the goal is in sight.

While we await the intelligent bot, perhaps the easiest integration for AI in mortgage banking is into our workforce. The value proposition of using AI to increase efficiency and optimize the current workforce has been established.

Many lenders who tout accelerated close times leverage varying degrees of AI to supplement their teams. Any activity that can be defined as business rules (no matter how nested) can be seamlessly executed. The most common tried-and-tested AI-assisted workflows now include ordering disclosures, integrations with third-party services, AUS/DU, and QA/QC to verify values across systems and documents.
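A workflow built from business rules, however nested, amounts to predicates evaluated against a loan record. The field names and thresholds below are hypothetical, not any lender's actual rules:

```python
# Minimal sketch of a business-rules QA check; every field name and
# threshold here is an illustrative assumption.
loan = {"doc_income": 85_000, "stated_income": 85_000,
        "ltv": 0.82, "has_appraisal": True}

rules = [
    ("income matches docs", lambda l: l["doc_income"] == l["stated_income"]),
    ("LTV within limit",    lambda l: l["ltv"] <= 0.95),
    ("appraisal on file",   lambda l: l["has_appraisal"]),
]

def qa_check(loan, rules):
    """Return the names of any failed rules; an empty list means pass."""
    return [name for name, rule in rules if not rule(loan)]

print(qa_check(loan, rules))  # [] -> loan passes all checks
```

Because each rule is just a named predicate, adding or nesting checks is mechanical, which is why this class of workflow automates so cleanly.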

Newer workflows, like intelligent NLP chatbots, are getting smarter and better able to provide contextual advice. The scenarios they respond to still need to be defined by a human, but the bot is able to retain context through multiple levels and understand slang, emojis, and some levels of content.

The difference between providing an animated version of an FAQ and an actual advisor who can walk a customer through multiple scenarios is incredible. The customer experiences hyper-personalized, focused attention delivered on their schedule. The lender gains a huge lift in productivity from having a trained team member who works 24/7 for a fraction of the cost.

Careful consideration needs to be given to how the bot is integrated. Focused tasks with clear business rules are easy solutions, and the industry is filled with potential providers.

Contextual situations require more thought and grooming: how does the bot fit your representation of your company? It is also important that the bot align with your corporate values and mission. You should pay attention to how it reacts with not only your core demographic but every customer and potential prospect. Companies also need to determine how to manage interactions and how bots will update as customer beliefs, views and social norms change.

With where bots are right now, we are really looking for trust in our teams to design, nurture, and tailor a bot to have the best interests of our companies, our customers, and our business partners in mind. By focusing on the fuzzy, we can create a contextual experience that builds the trust we want our end-users to have in us.

The views and opinions expressed in this article are those of the author and do not necessarily reflect or represent the views, policy, or position of Planet Home Lending, LLC.


Testing Microsoft’s new AI for iPhone: Can this app really detect our ages and sense our moods? – GeekWire

Microsoft released a new app called Seeing AI for the iPhone this week. It's billed as a talking camera for the blind, but it's also a showcase for the company's artificial intelligence technologies, bringing together several interesting features in one app.

Seeing AI can read short signs, scan barcodes, describe what's in a room, identify people, estimate their ages and genders, and guess their moods.

Pretty cool stuff, at least in theory. So how well does it work? We ran Seeing AI through its paces this week on our Geared Up podcast, and came away impressed with its capabilities.

Watch our hands-on preview of the Seeing AI app above, and listen to this week's full episode of Geared Up below, starting with a recap and review of Amazon Prime Day. Download the MP3 here, and follow Geared Up via Apple, RSS, YouTube, Facebook or Google Play Music.


Hypotenuse AI wants to take the strain out of copywriting for e-commerce – TechCrunch

Imagine buying a dress online because a piece of code sold you on its "flattering, feminine flair" or convinced you "romantic floral details" would "outline your figure with timeless style." The very same day, your friend buys the same dress from the same website, but she's sold on a description of "vibrant tones, fresh cotton feel and statement sleeves."

This is not a detail from a sci-fi short story but the reality and big-picture vision of Hypotenuse AI, a YC-backed startup that's using computer vision and machine learning to automate product descriptions for e-commerce.

One of the two product descriptions shown below is written by a human copywriter. The other flowed from the virtual pen of the startup's AI, per an example on its website.

Can you guess which is which?* And if you think you can, well, does it matter?

Screengrab: Hypotenuse AI's website

Discussing his startup on the phone from Singapore, Hypotenuse AI's founder Joshua Wong tells us he came up with the idea to use AI to automate copywriting after helping a friend set up a website selling vegan soap.

"It took forever to write effective copy. We were extremely frustrated with the process when all we wanted to do was to sell products," he explains. "But we knew how much description and copy affect conversions and SEO, so we couldn't abandon it."

Wong had been working for Amazon as an applied machine learning scientist for its Alexa AI assistant, so he had the technical smarts to tackle the problem himself. "I decided to use my background in machine learning to kind of automate this process. And I wanted to make sure I could help other e-commerce stores do the same as well," he says, going on to leave his job at Amazon in June to go full time on Hypotenuse.

The core tech here, computer vision and natural language generation, is extremely cutting edge, per Wong.

"What the technology looks like in the back end is that a lot of it is proprietary," he says. "We use computer vision to understand product images really well. And we use this together with any metadata that the product already has to generate a very human, fluent type of description. We can do this really quickly; we can generate thousands of them within seconds."

"A lot of the work went into making sure we had machine learning models or neural network models that could speak very fluently in a very human-like manner. For that, we have models that have kind of learnt how to understand and to write English really, really well. They've been trained on the Internet and all over the web, so they understand language very well. Then we combine that together with our vision models so that we can generate very fluent descriptions," he adds.
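The shape of the pipeline Wong describes (image in, attributes extracted, fluent text out) can be sketched with stubs. To be clear, Hypotenuse's real system is a neural model, not rules; every function, attribute and brand name below is a hypothetical stand-in for the proprietary models:

```python
# Toy sketch of the pipeline shape only: image -> attributes -> description.

def vision_attributes(image_path):
    """Stand-in for a computer-vision model that extracts product attributes."""
    return {"color": "sage green", "material": "cotton", "category": "dress"}

def generate_description(attributes, metadata):
    """Stand-in for a language model conditioned on attributes, metadata
    and brand style."""
    attrs = {**attributes, **metadata}
    return (f"A {attrs['color']} {attrs['category']} in soft {attrs['material']}, "
            f"from {attrs['brand']}.")

desc = generate_description(vision_attributes("dress.jpg"), {"brand": "Acme Apparel"})
print(desc)  # A sage green dress in soft cotton, from Acme Apparel.
```

The point of the sketch is the interface, not the wording: the vision stage grounds the text in the image, while metadata and brand style condition how it is phrased.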

Image credit: Hypotenuse

Wong says the startup is building its own proprietary data-set to further help with training language models, with the aim of being able to generate something that's very specific to the image but also specific to the company's brand and writing style, so the output can be hyper-tailored to the customer's needs.

"We also have defaults of style, if they want text to be more narrative, or poetic, or luxurious, but the more interesting one is when companies want it to be tailored to their own type of branding of writing and style," he adds. "They usually provide us with some examples of descriptions that they already have, and we use that to get our models to learn that type of language so it can write in that manner."

What Hypotenuse's AI is able to do (generate thousands of specifically detailed, appropriately styled product descriptions within seconds) has only been possible in very recent years, per Wong. Though he won't be drawn into laying out more architectural details, beyond saying the tech is a completely neural-network-based natural language generation model.

"The product descriptions that we are doing now, the techniques, the data and the way that we're doing it, these techniques were not around just like over a year ago," he claims. "A lot of the companies that tried to do this over a year ago always used pre-written templates. Because, back then, when we tried to use neural network models or purely machine learning models, they can go off course very quickly, or they're not very good at producing language which is almost indistinguishable from human."

"Whereas now we see that people cannot even tell which was written by AI and which by human. And that wouldn't have been the case a year ago."

(See the above example again. Is A or B the robotic pen? The answer is at the foot of this post.)

Asked about competitors, Wong again draws a distinction between Hypotenuse's pure machine learning approach and others who relied on using templates to tackle the problem of copywriting or product descriptions.

"They've always used some form of templates or just joining together synonyms. And the problem is, it's still very tedious to write templates. It makes the descriptions sound very unnatural or repetitive. And instead of helping conversions, that actually hurts conversions and SEO," he argues. "Whereas for us, we use a completely machine learning based model which has learnt how to understand language and produce text very fluently, to a human level."

There are now some pretty high-profile applications of AI that enable you to generate similar text to your input data, but Wong contends they're just not specific enough for a copywriting business purpose to represent a competitive threat to what he's building with Hypotenuse.

"A lot of these are still very generalized," he argues. "They're really great at doing a lot of things okay, but for copywriting it's actually quite a nuanced space, in that people want very specific things: it has to be specific to the brand, it has to be specific to the style of writing. Otherwise it doesn't make sense. It hurts conversions. It hurts SEO. So we don't worry much about competitors. We spent a lot of time and research into getting these nuances and details right, so we're able to produce things that are exactly what customers want."

So what types of products doesn't Hypotenuse's AI work well for? Wong says it's a bit less relevant for certain product categories, such as electronics. This is because the marketing focus there is on specs, rather than trying to evoke a mood or feeling to seal a sale. Beyond that, he argues the tool has broad relevance for e-commerce. "What we're targeting it more at is things like furniture, things like fashion, apparel, things where you want to create a feeling in a user so they are convinced of why this product can help them," he adds.

The startup's SaaS offering as it is now, targeted at automating product descriptions for e-commerce sites and for copywriting shops, is actually a reconfiguration itself.

The initial idea was to build a digital personal shopper to personalize the e-commerce experience. But the team realized they were getting ahead of themselves. "We only started focusing on this two weeks ago, but we've already started working with a number of e-commerce companies as well as piloting with a few copywriting companies," says Wong, discussing this initial pivot.

Building a digital personal shopper is still on the roadmap, but he says they realized that a subset of creating all the necessary AI/CV components for the more complex digital shopper proposition was solving the copywriting issue. Hence dialing back to focus in on that.

"We realized that this alone was really such a huge pain-point that we really just wanted to focus on it and make sure we solve it really well for our customers," he adds.

For early-adopter customers the process right now involves a little light onboarding, typically a call to chat through what their workflow and writing style are like so Hypotenuse can prep its models. Wong says the training process then takes a few days, after which customers plug into it as software as a service.

Customers upload product images to Hypotenuse's platform, or send metadata for existing products, and get corresponding descriptions back for download. The plan is to offer a more polished pipeline for this in the future, such as by integrating with e-commerce platforms like Shopify.
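The shape of that metadata-in, descriptions-out flow is easy to sketch. Below is a minimal, hypothetical Python illustration; the `generate_description` stub stands in for Hypotenuse's actual model, whose internals are not public, and the field names and products are invented:

```python
import csv
import io

def generate_description(product: dict) -> str:
    """Placeholder for the generative model: turns product metadata into
    marketing copy. Hypotenuse's real system is a learned model, not a template."""
    return (f"Meet the {product['name']}: {product['material']} "
            f"{product['category']} in {product['color']}, made to last.")

def describe_catalog(metadata_csv: str) -> list[dict]:
    """Read product metadata rows and attach a generated description to each."""
    rows = list(csv.DictReader(io.StringIO(metadata_csv)))
    for row in rows:
        row["description"] = generate_description(row)
    return rows

catalog = """name,category,material,color
Aria Lounge Chair,armchair,walnut,forest green
Summit Parka,jacket,recycled nylon,slate blue
"""
for item in describe_catalog(catalog):
    print(item["description"])
```

In a production integration the CSV upload/download would presumably be replaced by platform hooks (for example, a Shopify app writing descriptions back to the product records), but the batch structure would be similar.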

Given the chaotic sprawl of Amazon's marketplace, where product descriptions can vary wildly from extensively detailed screeds to the hyper-sparse and/or cryptic, there could be a sizeable opportunity to sell automated product descriptions back to Wong's former employer. And maybe even to bag some strategic investment before then. However, Wong won't be drawn on whether or not Hypotenuse is fundraising right now.

On the possibility of bagging Amazon as a future customer, he'll only say "potentially in the long run that's possible."

Joshua Wong (Photo credit: Hypotenuse AI)

The more immediate priorities for the startup are expanding the range of copywriting its AI can offer to include additional formats, such as advertising copy and even some listicle-style blog posts which can stand in as content marketing (unsophisticated stuff along the lines of "10 things you can do at the beach" or "10 great dresses for summer," per Wong).

"Even as we want to go into blog posts we're still completely focused on the e-commerce space," he adds. "We won't go out to news articles or anything like that. We think that that is still something that cannot be fully automated yet."

Looking further ahead, he dangles the possibility of the AI enabling infinitely customizable marketing copy, meaning a website could parse a visitor's data footprint and generate dynamic product descriptions intended to appeal to that particular individual.

Crunch enough user data and maybe it could spot that a site visitor has a preference for vivid colors and likes to wear large hats; ergo, it could dial up relevant elements in product descriptions to better mesh with that person's tastes.

"We want to make the whole process of starting an e-commerce website super simple. So it's not just copywriting but all the different aspects of it," Wong goes on. "The key thing is we want to go towards personalization. Right now e-commerce customers are all seeing the same standard written content. One of the challenges there is it's hard, because humans are writing it right now and you can only produce one type of copy; if you want to test it for other kinds of users you need to write another one."

"Whereas for us, if we can do this process really well, and we are automating it, we can produce thousands of different kinds of descriptions and copy for a website and every customer could see something different."
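One way such per-visitor copy could be served, sketched as an illustration of the idea rather than Hypotenuse's implementation: pre-generate several variants per product, then map each visitor deterministically to one variant based on profile traits, so every segment consistently sees its own copy (the trait names and variants here are invented):

```python
import hashlib

# Hypothetical pre-generated variants for one product, keyed by the
# visitor trait each is meant to appeal to.
VARIANTS = {
    "color-lover": "A pop of saturated coral that turns every head on the boardwalk.",
    "minimalist": "Clean lines, one quiet shade, nothing you don't need.",
    "bargain-hunter": "Season-best price on a dress you'll wear all summer.",
}

def pick_variant(visitor_traits: list[str]) -> str:
    """Deterministically map a visitor's traits to one copy variant, so the
    same visitor always sees the same description (stable A/B test cells)."""
    matching = [t for t in visitor_traits if t in VARIANTS]
    if matching:
        return VARIANTS[matching[0]]
    # No known trait: fall back to a stable hash over the whole profile.
    digest = hashlib.sha256(",".join(sorted(visitor_traits)).encode()).digest()
    key = sorted(VARIANTS)[digest[0] % len(VARIANTS)]
    return VARIANTS[key]

print(pick_variant(["color-lover", "wears-hats"]))
```

The determinism matters: if variants were assigned randomly per page view, conversion metrics could not be attributed to any one piece of copy.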

It's a disruptive vision for e-commerce (call it A/B testing on steroids) that is likely to either delight or terrify, depending on your view of current levels of platform personalization around content. That process can wrap users in particular bubbles of perspective, and some argue such filtering has impacted culture and politics by corroding the communal experiences and consensus which underpin the social contract. But the stakes with e-commerce copy aren't likely to be so high.

Still, once marketing text no longer has a unit-specific production cost attached to it, and assuming e-commerce sites have access to enough user data to program tailored product descriptions, there's no real limit to the ways in which robotically generated words could be reconfigured in pursuit of a quick sale.

"Even within a brand there is actually a factor we can tweak, which is how creative our model is," says Wong, when asked if there's any risk of the robot's copy ending up feeling formulaic. "Some of our brands have like 50 polo shirts and all of them are almost exactly the same, other than maybe slight differences in the color. We are able to produce very unique and very different types of descriptions for each of them when we cue up the creativity of our model."

"In a way it's sometimes even better than a human, because humans tend to fall into very, very similar ways of writing. Whereas this, because it's learnt so much language over the web, has a much wider range of tones and types of language that it can run through," he adds.
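The "creativity" factor Wong describes is commonly implemented in generative language models as a sampling temperature: the model's raw scores are divided by a temperature before the softmax, so higher values flatten the distribution and make rarer word choices more likely. A minimal illustration (the candidate words and scores are invented; whether Hypotenuse uses exactly this mechanism is an assumption):

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Convert raw model scores into sampling probabilities.
    temperature < 1 sharpens the distribution; temperature > 1 flattens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented next-word scores for "...a classic polo shirt in"
words = ["blue", "navy", "cerulean", "oxblood"]
logits = [3.0, 2.5, 0.5, 0.1]

tame = softmax_with_temperature(logits, 0.5)  # conservative: "blue" dominates
wild = softmax_with_temperature(logits, 2.0)  # creative: rarer words gain mass
print(dict(zip(words, [round(p, 3) for p in tame])))
print(dict(zip(words, [round(p, 3) for p in wild])))
```

Turning the temperature up is what lets 50 near-identical polo shirts come out with 50 distinct descriptions, at the cost of occasionally stranger word choices.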

What about copywriting and ad-creative jobs? Isn't Hypotenuse taking an axe to the very copywriting agencies his startup is hoping to woo as customers? Not so, argues Wong. "At the end of the day there are still editors. The AI helps them get 95% of the way there. It helps them spark creativity when you produce the description, but that last step of making sure it is something that exactly the customer wants, that's usually still a final editor check," he says, advocating for the human in the AI loop. "It only helps to make things much faster for them. But we still make sure there's that last step of a human checking before they send it off."

"Seeing the way NLP [natural language processing] research has changed over the past few years, it feels like we're really at an inception point," Wong adds. "One year ago a lot of the things that we are doing now were not even possible. And some of the things that we see becoming possible today we didn't expect for another one or two years. So I think it could be, within the next few years, that we have models that are not just able to write language very well, but you can almost speak to them and give them some information and they can generate these things on the go."

*Per Wong, Hypotenuse's robot is responsible for generating description A. Full marks if you spotted the AI's tonal pitfalls.

View post:

Hypotenuse AI wants to take the strain out of copywriting for e-commerce - TechCrunch

Flippy the robot uses AI to cook burgers – ZDNet

Flippy the robot is starting its culinary career with one simple task, but just like any rookie, it is learning on the job. With some practice and training, Flippy will be able to do everything from chopping vegetables to plating meals like a pro. Miso Robotics created the robot, which debuted in a kitchen at the restaurant chain CaliBurger in Pasadena, Calif., this week.

"Flippy will initially only focus on flipping burgers and placing them on buns," David Zito, CEO of Miso Robotics tells ZDNet. He adds, "But since Flippy is powered by our own cooking AI software, it will continuously learn from its experiences to improve and adapt over time. This means Flippy will learn to take on additional tasks including grilling chicken, bacon, onions, and buns in addition to frying, prepping, and finishing plates. Eventually, Flippy will support CaliBurger's entire menu."

The robot can be installed in kitchens in less than five minutes, and it's designed to work alongside restaurant staff. Flippy will even politely move aside if it gets in someone's way. Computer vision and deep learning software make it much smarter than your average kitchen appliance.

"Flippy features a Sensor Bar allowing it to see in 3D, thermal, and regular vision for detecting the exact temperatures of the grill as well as readiness of each burger, which will expand to other menu items as Flippy continues to learn and adapt," says Zito.

Flippy uses computer vision and AI to cook burgers. (Image: Miso Robotics)

Flippy will be installed in more than 50 CaliBurger restaurants worldwide by the end of 2019.

"The application of artificial intelligence to robotic systems that work next to our employees in CaliBurger restaurants will allow us to make food faster, safer and with fewer errors," said John Miller, chairman of Cali Group, in a statement. "Our investment in Miso Robotics is part of our broader vision for creating a unified operating system that will control all aspects of a restaurant from in-store interactive gaming entertainment to automated ordering and cooking processes, 'intelligent' food delivery and real-time detection of operating errors and pathogens."

Automation is creeping into kitchens in many forms. This week, Chowbotics (formerly Casabots) announced that it raised $5 million of Series A funding for food service robots. Then there's also Grillbot Pro, which is like a Roomba for your grill. Moley Robotics is developing a fully automated and intelligent robotic chef. Various robots sell pizza, cook it, deliver it, and can even print it in outer space.


See original here:

Flippy the robot uses AI to cook burgers - ZDNet

Microsoft AI Lab Aims to Give Machines Common Sense – SDxCentral

Microsoft is building an artificial intelligence (AI) research hub to speed up the integration of AI into products and services.

Harry Shum, executive vice president for Microsoft's AI and research group, spoke about the new lab, called Microsoft Research AI, for the first time this week at an event in the U.K.

In a blog post, he wrote that the lab will combine various disciplines such as machine learning, perception, and natural language processing to develop more sophisticated AI. This integrated approach aims to develop systems that can understand language and take action based on that understanding.

Machine reading, which combines AI disciplines such as natural language processing and deep learning, is an example.

"We believe AI will be even more helpful when we can create tools that combine those functions and add some of the abilities that come naturally to people," Shum wrote. "That includes things like applying our knowledge of one task to another task, or having a commonsense understanding of the world around us."

The company first launched Microsoft AI and Research in September 2016. The group brings together about 7,500 computer scientists, researchers, and engineers from the company's research labs and product groups such as digital assistant Cortana and Azure Machine Learning.

The new Microsoft AI lab will be based at the company's headquarters in Redmond, Washington, Bloomberg reports.

The announcement comes as competitors including Google are beefing up their AI efforts.

In May, Google announced its next-generation Tensor Processing Units (TPUs). The Cloud TPUs aim to make Google Cloud Platform "the best cloud for machine learning," said Google CEO Sundar Pichai, adding that the investment reflects an overall shift in computing.

"This is a shift from a mobile-first to an AI-first world, and we are driving it forward across all of our products and platforms," Pichai said.

Both companies are founding members of the Partnership on AI to Benefit People and Society. Microsoft and Google, along with IBM, Apple, Facebook, Amazon, and DeepMind, formed the group in September 2016 to advance AI technologies including machine perception, learning, and automated reasoning.

Jessica is a Senior Editor, covering next-generation data centers and security, at SDxCentral. She has worked as an editor and reporter for more than 15 years at a number of B2B publications including Environmental Leader, Energy Manager Today, Solar Novus Today and Silicon Valley Business Journal. Jessica is based in Silicon Valley.

Read the rest here:

Microsoft AI Lab Aims to Give Machines Common Sense - SDxCentral

AI is changing the way medical technicians work – TNW

When MIT successfully created an AI that can diagnose skin cancer, it was a massive step in the right direction for medical science. A neural network can process huge amounts of data, and more data means better research, more accurate diagnoses, and the potential to save lives by the thousands or millions.

In the future, medical technicians will become data scientists supporting the AI-powered diagnostics departments that every hospital will need. Radiologists are going to need a different education than the one they have now; they're going to need help from Silicon Valley.

This isn't a knock against radiologists or other medical technicians. For ages now, they've worked hand-in-hand with doctors and been crucial to the diagnostic process. It's just that machines can process more data, with greater efficiency, than any human could. For what it's worth, we've predicted that doctors are on their way out too, but this is different.

Geoffrey Hinton, a computer scientist at the University of Toronto, told The New Yorker:

"I think that if you work as a radiologist you are like Wile E. Coyote in the cartoon. You're already over the edge of the cliff, but you haven't yet looked down. There's no ground underneath. It's just completely obvious that in five years deep learning is going to do better than radiologists. It might be ten years."

It's not about replacing, but upgrading and augmenting. Hinton might be a little dramatic, but not for nothing: he's the great-great-grandson of famed mathematician George Boole, the person responsible for Boolean algebra. Obviously, he understands what AI means for research. He's not suggesting, however, that radiologists don't do anything beyond pointing out anomalies in pictures.

Instead, he's intimating that traditional radiology is going to change, and the way we train people now is going to be irrelevant. Which is, again, harsh.

Nobody is saying that medical trainers and educational facilities are doing a bad job. It's just that they need to be replaced with something better. Like machines.

We don't have to give neural networks the keys to the shop; we're not creating autonomous doctor-bots that'll decide to perform surgery on their own without the need for nurses, technicians, or other staff. Instead we're streamlining things that humans simply can't do, like processing millions of pieces of data at a time.

Tomorrow's radiologists aren't people who interpret the shadows on an X-ray; they are data scientists. Medical technicians are going to be at the cutting edge of AI technology in the near future. Technology and medicine are necessary companions. If we're going to continue progress in medicine, we need a forward-thinking scientific attitude that isn't afraid of implementing AI.

Nowhere else is the potential to save lives greater than in medical research and diagnostics. What AI brings to the table is worth revolutionizing the industry and shaking it up for good. Some might say it's long overdue.

A.I. VERSUS M.D. on The New Yorker

Read next: Snap Inc. is rumored to be buying a Chinese drone manufacturer

Read the original:

AI is changing the way medical technicians work - TNW

Human Compatible by Stuart Russell review AI and our future – The Guardian

Here's a question scientists might ask more often: what if we succeed? That is, how will the world change if we achieve what we're striving for? Tucked away in offices and labs, researchers can develop tunnel vision, the rosiest of outlooks for their creations. The unintended consequences and shoddy misuses become afterthoughts: messes for society to clean up later.

Today those messes spread far and wide: global heating, air pollution, plastics in the oceans, nuclear waste and babies with badly rewritten DNA. All are products of neat technologies that solve old problems by creating new ones. In the inevitable race to be first to invent, the downsides are dismissed, unexplored or glossed over.

In 1995, Stuart Russell wrote the book on AI. Co-authored with Peter Norvig, Artificial Intelligence: A Modern Approach became one of the most popular course texts in the world (Norvig worked for NASA; in 2001, he joined Google). In the final pages of the last chapter, the authors posed the question themselves: what if we succeed? Their answer was hardly a ringing endorsement. "The trends seem not to be too terribly negative," they offered. A lot has happened since: Google and Facebook, for starters.

In Human Compatible, Russell returns to the question and this time does not hold back. The result is surely the most important book on AI this year. Perhaps, as Richard Brautigan's poem has it, life is good when we are all "watched over by machines of loving grace." But Russell, a professor at the University of California, Berkeley, sees darker eventualities. Creating machines that surpass our intelligence would be the biggest event in human history. "It may also be the last," he warns. Here he makes the convincing case that how we choose to control AI is possibly the most important question facing humanity.

Russell has picked his moment well. Tens of thousands of the world's brightest minds are now building AIs. Most work on one-trick ponies: the narrow AIs that process speech, translate languages, spot people in crowds, diagnose diseases, or whip people at games from Go to StarCraft II. But these are a far cry from the field's ultimate goal: general-purpose AIs that match, or surpass, the broad-based brainpower of humans.

It is not a ludicrous ambition. From the start, DeepMind, the AI group owned by Alphabet, Google's parent company, set out to "solve intelligence" and then use that to solve everything else. In July, Microsoft signed a $1bn contract with OpenAI, a US outfit, to build an AI that mimics the human brain. It is a high-stakes race. As Vladimir Putin said: "whoever becomes the leader in AI will become the ruler of the world."

Russell doesn't claim we are nearly there. In one section he sets out the formidable problems computer engineers face in creating human-level AI. Machines must know how to turn words into coherent, reliable knowledge; they must learn how to discover new actions and order them appropriately (boil the kettle, grab a mug, toss in a teabag). And like us, they must manage their cognitive resources so they can reach good decisions fast. These are not the only hurdles, but they give a flavour of the task ahead. Russell suspects it will keep researchers busy for another 80 years, but stresses the timing is impossible to predict.

Even with apocalypse camped on the horizon, this is a wry and witty tour of intelligence and where it may take us. And where exactly is that? A machine that masters all the above would be a formidable decision maker in the real world, Russell says. It would absorb vast amounts of information from the internet, TV, radio, satellites and CCTV and with it gain a more sophisticated understanding of the world and its inhabitants than any human could ever hope for.

What could possibly go right? In education, AI tutors would maximise the potential of every child. They would master the vast complexity of the human body, letting us banish disease. As digital personal assistants they would put Siri and Alexa to shame: "You would, in effect, have a high-powered lawyer, accountant, and political advisor on call at any time."

And what of the downsides? Without serious progress on AI safety and regulation, Russell foresees messes aplenty, and his chapter on misuses of AI is grim reading. Advanced AI would hand governments such extraordinary powers of surveillance, persuasion and control that "the Stasi will look like amateurs." And while Terminator-style killer robots are not about to eradicate humanity, drones that select and kill individuals based on their faceprints, skin colour or uniforms are entirely feasible. As for jobs, we may no longer make a living by providing physical or mental labour, but we can still supply our humanity. Russell notes: "We will need to become good at being human."

What's worse than a society-destroying AI? A society-destroying AI that won't switch off. It's a terrifying, seemingly absurd prospect that Russell devotes much time to. The idea is that smart machines will suss out, as per HAL in 2001: A Space Odyssey, that goals are hard to achieve if someone pulls the plug. Give a superintelligent AI a clear task (to make the coffee, say) and its first move will be to disable its off switch. The answer, Russell argues, lies in a radical new approach where AIs have some doubt about their goals, and so will never object to being shut down. He moves on to advocate "provably beneficial" AI, whose algorithms are mathematically proven to benefit their human users. Suffice to say this is a work in progress: how will my AI deal with yours?

Let's be clear: there are plenty of AI researchers who ridicule such fears. After the philosopher Nick Bostrom highlighted potential dangers of general-purpose AI in Superintelligence (2014), a US thinktank, the Information Technology and Innovation Foundation, gave its Luddism award to "alarmists touting an artificial intelligence apocalypse." This was indicative of the dismal debate around AI safety, which is on the brink of descending into tribalism. The danger that comes across here is less an abrupt destruction of the species, more an inexorable enfeeblement: a loss of striving and understanding, which erodes the foundations of civilisation and leaves us passengers in a cruise ship run by machines, on a cruise that goes on forever.

Human Compatible is published by Allen Lane (£25). To order a copy go to guardianbookshop.com or call 020-3176 3837. Free UK p&p over £15, online orders only. Phone orders min p&p of £1.99.

More:

Human Compatible by Stuart Russell review AI and our future - The Guardian

5 Reasons Artificial Intelligence Will Improve Greenhouse Production – Greenhouse Grower

Artificial intelligence (AI) involves using computers to do things that traditionally require human intelligence. This means creating algorithms to classify, analyze, and draw predictions from data. It also involves acting on data, learning from new data, and improving over time.

That's the definition of AI, at least. But what does it actually mean for greenhouse growers?

According to Gursel Karacor, Senior Data Scientist at Grodan, a supplier of sustainable stone wool growing media solutions for the horticulture market, greenhouses will, to a large extent, be autonomous in the near future.

"My mission is the realization of autonomous greenhouses through the use of all this data with state-of-the-art machine learning methodologies," Karacor says. "I want to realize this goal step by step in five years."


Gursel Karacor is a Senior Data Scientist with Grodan.

View original post here:

5 Reasons Artificial Intelligence Will Improve Greenhouse Production - Greenhouse Grower