Allen-backed AI2 incubator aims to connect AI startups with world-class talent – TechCrunch

You can't swing a cat these days without hitting some incubator or accelerator, or a startup touting its artificial intelligence chops, but for some reason there are few if any incubators focused just on the AI sector. Seattle's Allen Institute for AI is filling that gap, with the promise of connecting small classes of startups with the organization's formidable brains (and 250 grand).

AI2, as the Paul Allen-backed nonprofit is more commonly called, has already spun off two companies. XNOR.ai, which has made major advances in enabling AI tasks to run on edge devices, is operating independently and licensing its tech to eager customers. And Kitt.ai, a (profitable!) natural language processing platform, was bought by Baidu just last month.

"We're two for two, and not in a small way," said Jacob Colker, who has led several Seattle and Bay Area startups and incubators, and is currently the Entrepreneur-in-Residence charged with putting AI2's program on the map. Until now, the incubation program has kept a low profile.

Startups will get the expected mentorship and guidance on how to, you know, actually run a company, but the draw, Colker emphasized, is the people. A good AI-based startup might get good advice and fancy office space from just about anyone, but only AI2, he pointed out, offers a major concentration of three core competencies: machine learning, natural language processing, and computer vision.

YOLO in action, from the paper presented at CVPR.

XNOR.ai, still partly run out of the AI2 office, is evidence of that. The company's latest computer vision system, YOLO, performs the rather incredible feat of both detecting and classifying hundreds of object types on the same network, locally and in real time. YOLO scored runner-up for Best Paper at this year's CVPR, and that's not the first time its authors have been honored. I'd spend more time on the system, but it's not what this article is about.

There are dozens more PhDs and published researchers; AI2 has plucked (or politely borrowed) high-profile academics from all over, but especially the University of Washington, a longstanding presence at the frontiers of tech. AI2 CEO Oren Etzioni is himself a veteran researcher and is clearly proud of the team he's built.

"Obviously AI is hot right now," he told me, "but we're not jumping on the bandwagon here."

The incubator will have just a handful of companies at a time, he and Colker explained, and the potential investment of up to $250K is more than most such organizations are willing to part with. And as a nonprofit, AI2 has fewer worries about equity terms and ROI.

But the applications of supervised learning are innumerable, and machine learning has become a standard developer tool, so ambitious and unique applications of AI are encouraged.

"We're not looking for a doohickey," Etzioni said. "We want to make big bets and big companies."

AI2 is hoping to get just 2-5 companies for its first batch. Makes it a lot easier for me to keep eyes on them, that's for sure. Interested startups can apply at the AI2 site.

Task Force on Artificial Intelligence – hearing to discuss use of AI in contact tracing – Lexology

On July 8, 2020, the House Financial Services Committee's Task Force on Artificial Intelligence held a hearing entitled "Exposure Notification and Contact Tracing: How AI Helps Localities Reopen Safely and Researchers Find a Cure."

In his opening remarks, Congressman Bill Foster (D-IL), chairman of the task force, stated that the hearing would discuss the essential tradeoffs that the coronavirus disease 2019 (COVID-19) pandemic was forcing on the public between life, liberty, privacy and the pursuit of happiness. Chairman Foster noted that what he called invasive artificial intelligence (AI) surveillance may save lives, but would come at a tremendous cost to personal liberty. He said that contact tracing apps that use back-end AI, which combines raw data collected from voluntarily participating COVID-19-positive patients, may adequately address privacy concerns while still capturing health and economic benefits similar to those of more intrusive monitoring.

Congressman Barry Loudermilk (R-GA) discussed how digital contact tracing could be more effective than manual contact tracing, but noted that it requires strong participation, roughly a 40 to 60 percent adoption rate overall, to be effective. He said that citizens would need to trust that their privacy would not be violated. To help establish this trust, he suggested, people would need to be able to easily determine what data would be collected, who would have access to the data and how the data would be used.

Four panelists testified at this hearing. Below is a summary of each panelist's testimony, followed by an overview of some of the post-testimony questions that committee members raised:

Brian McClendon, the CEO and co-founder of the CVKey Project, discussed how privacy, disclosure and opt-in data collection impact the ability to identify and isolate those infected with COVID-19. AI and machine learning require large amounts of data. He stated that while the most valuable data to combat COVID-19 can be found in the contact-tracing interviews of infected and exposed people, difficulties exist in capturing this information. For example, attempted phone calls to reach exposed individuals may go unanswered because people often do not pick up calls from unknown numbers. Mobile apps, he said, offer a way to conduct contact tracing with greater accuracy and coverage. Mr. McClendon discussed two ways that such apps could work: (1) using GPS location or (2) via low-energy Bluetooth. For the latter, Mr. McClendon explained a method developed by two large technology companies: when a user of a digital contact tracing app tests positive for COVID-19, he or she then chooses to opt in to upload non-personally identifiable information to a state-run cloud server, which would then determine whether potential exposures have occurred and provide in-app notifications to such users.
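
To make the decentralized, Bluetooth-based approach concrete, here is a minimal Python sketch of the opt-in matching flow Mr. McClendon described. The class, key format, and rotation schedule are illustrative assumptions for this article, not the actual Apple/Google Exposure Notification API.

```python
import secrets
from datetime import date

def new_rolling_key() -> bytes:
    """Generate a random daily key; real systems rotate Bluetooth identifiers far more often."""
    return secrets.token_bytes(16)

class Phone:
    """Illustrative device: remembers its own daily keys and the keys it has heard nearby."""

    def __init__(self):
        self.my_keys = {}          # date -> my broadcast key
        self.observed = set()      # keys received over Bluetooth from nearby phones

    def broadcast_key(self, today: date) -> bytes:
        return self.my_keys.setdefault(today, new_rolling_key())

    def hear(self, key: bytes):
        self.observed.add(key)

    def keys_to_upload(self):
        """On a positive test, the user opts in to upload only these random keys."""
        return list(self.my_keys.values())

    def check_exposure(self, published_positive_keys) -> bool:
        """Matching happens locally; the server never learns who was exposed."""
        return any(k in self.observed for k in published_positive_keys)

# Usage: two phones meet over Bluetooth, one later tests positive and opts in to upload.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast_key(date(2020, 7, 1)))   # Bluetooth contact
server_keys = alice.keys_to_upload()              # opt-in upload after a positive test
print(bob.check_exposure(server_keys))            # True -> in-app exposure notification
```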

Krutika Kuppalli, M.D., an infectious diseases physician, discussed how using contact tracing can help impede the spread of infectious diseases. She noted that it is important to remember ethical considerations involving public health information, data protection and data privacy when using these technologies.

Andre M. Perry, a fellow at the Brookings Institution, began his presentation by discussing how COVID-19 has disproportionately affected Black and Latino populations, reflecting historical inequalities and structural racism. Mr. Perry identified particular concerns regarding AI and contact tracing as they pertain to structural racism and bias. These tools, he stated, are not neutral and can either exacerbate or mitigate structural racism. To address such bias, he suggested, contact tracing should include people who have generally been excluded from systems that have provided better health and economic outcomes. Further, the use of AI tools in the healthcare arena presents the same risk as in other fields: the AI is only as good as the programmers who design it. Bias in programming can lead to flaws in technology and amplify biases in the real world. Mr. Perry stated that greater recruitment of and investment in Black-owned tech firms, rigorous reviews and testing for bias, and more engagement with local communities are required.

Ramesh Raskar, a professor at MIT and the founder of the PathCheck Foundation, emphasized three elements during his presentation: (1) how to augment manual contact tracing with apps; (2) how to make sure apps are privacy-preserving, inclusive, trustworthy, and built using open-source methods and nonprofits; and (3) the creation of a National Pandemic Response Service. Regarding inclusivity, Mr. Raskar noted that Congress should actively require that solutions be accessible broadly and generally; contact tracing cannot be effective only for segments of the population that have access to the latest technology.

Post-testimony questions

Chairman Foster asked about the limits of privacy-preserving techniques by providing an example of a person who had been isolated for a week, then interacted with only one other person, and then later received a notification of exposure: such a person likely will know the identity of the infected person. Mr. Raskar replied that data protection has different layers: confidentiality, anonymity, and then privacy. In public health scenarios today, Mr. Raskar stated, we only care about confidentiality and not anonymity or privacy (eventually, he commented, you will have to meet a doctor).

If we were to implement a federal contact tracing program, Representative Loudermilk asked, how would we assure citizens that they can know what data will be used and collected, and who has access? Mr. McClendon responded that under the approach developed by the two large technology companies, data is randomized and stored on a personal phone until the user opts in to upload random numbers to the server. The notification determination is made on the phone and the state provides the messages. The state will not know who the exposed person is until that person opts in by calling the manual contact tracing team.

Representative Maxine Waters (D-CA) asked what developers of a mobile contact tracing technology should consider to ensure that minority communities are not further disadvantaged. Mr. Perry reiterated that AI technologies have not been tested, created, or vetted by persons of color, which has led to various biases.

Congressman Sean Casten (D-IL) asked whether AI used in contact tracing is solely backward-looking or could predict future hotspots. Mr. McClendon replied that to predict the future, you need to know the past. Manual contact tracing interviews, where an infected or exposed person describes where he or she has been, would provide significant data to include in a machine-learning algorithm, enabling tracers to predict where a hotspot might occur in the future. However, privacy issues and technological incompatibility (e.g., county and state tools that are not compatible with each other) mean that a lot of data is currently siloed or even inaccessible, impeding AI's ability to look forward.

The nominees for the VentureBeat AI Innovation Awards at Transform 2020 – VentureBeat

At our AI-focused Transform 2020 event, taking place July 15-17 entirely online, VentureBeat will recognize and award emergent, compelling, and influential work through our second annual VB AI Innovation Awards. Drawn from our daily editorial coverage and the expertise of our nominating committee members, these awards give us a chance to shine a light on the people and companies making an impact in AI.

Here are the nominees in each of the five categories: NLP/NLU Innovation, Business Application Innovation, Computer Vision Innovation, AI for Good, and Startup Spotlight.

Dr. Dilek Hakkani-Tur

A senior principal scientist at Amazon Research and faculty member at the University of California, Santa Cruz, Dr. Hakkani-Tur currently works on solving natural dialogue for Amazon's Alexa AI. She has researched and worked on natural language processing, conversational AI, and more for over two decades, including stints at Google and Microsoft. She holds dozens of patents and has written or co-authored more than 200 papers in the area of natural language and speech processing. Recent work includes improving task-oriented dialogue systems, increasing the usefulness of open-domain dialogue responses, and repurposing existing data sets for dialogue state tracking for natural language generation (NLG).

BenevolentAI

BenevolentAI's mission is to use AI and machine learning to improve drug discovery and development. The amount of available data is overwhelming, and despite a steady stream of new research, too many pharmaceutical experiments fail today. BenevolentAI helps by accelerating the indexing and retrieval of medical papers and clinical trial reports about new treatments for diseases that don't have cures. Fact-based decision-making is essential everywhere, but for the pharmaceutical industry, the facts just need to be harvested in a synthetic, relevant, and efficient way.

StereoSet

Research continues to uncover bias in AI models. StereoSet is a data set designed to measure discriminatory behaviors like racism and sexism in language models. Researchers Moin Nadeem, Anna Bethke, and Siva Reddy built StereoSet and have made it available to anyone who makes language models. The team maintains a leaderboard to show how models like BERT and GPT-2 measure up.

Hugging Face

Hugging Face seeks to advance and democratize natural language processing (NLP). The company wants to contribute to the development of technology in this domain by growing the open source community, conducting research, and creating NLP libraries like Transformers and Tokenizers. Hugging Face offers free online tools anyone can use to leverage models such as BERT, XLNet, and GPT-2. The company says more than 1,000 companies use its tools in production, including Apple and Microsoft's Bing group.
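
As a small illustration of the open source tooling described above, the snippet below uses Hugging Face's Transformers library to run GPT-2 text generation and BERT masked-word prediction through the high-level pipeline API. The model names come from the public model hub, and outputs will vary from run to run.

```python
# pip install transformers torch
from transformers import pipeline

# Download GPT-2 from the Hugging Face hub and wrap it in a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence will", max_length=30, num_return_sequences=1)
print(result[0]["generated_text"])

# The same pipeline API exposes other models, e.g. BERT for masked-word prediction.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("Natural language processing is a [MASK] field.")[0]["token_str"])
```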

Jumbotail

Jumbotail's technology updates traditional mom-and-pop stores in India, often known as kirana stores, by connecting them with recognized brands and other high-quality product producers to help transform them into modern convenience stores. Jumbotail does so without raising the cost to customers by collecting and mining millions of data points in real time every day. Thanks to its AI backend, Jumbotail became India's leading online wholesale food and grocery marketplace, with a full stack that includes integrated supply chain and logistics, as well as an in-house financial tech platform for payments and credit. The insights and tech developed around this new business model empower producers and customers, and Jumbotail is poised to expand to other continents.

Codota

Codota is developing a platform powered by machine learning that suggests and autocompletes Python, C, HTML, Java, Scala, Kotlin, and JavaScript code. By automating routine programming tasks that would normally require a team of skilled developers, the company is helping reduce the estimated $312 billion organizations spend on debugging each year. Codota's cloud-based and on-premises solutions, which are used by developers at Google, Alibaba, Amazon, Airbnb, Atlassian, and Netflix, complete lines of code based on millions of programs and individual context locally, without sending any sensitive data to remote servers.

Rasa

Rasa is an open source conversational AI company whose tools enable startups to build their own (close to) state-of-the-art natural language processing systems. These tools, some of which have been downloaded over 3 million times, bring AI assistants to life by providing the technical scaffolding necessary for robust conversations. Rasa invests in research to create conversational AI, furnishing developers at companies like Adobe, Deutsche Telekom, Lemonade, Airbus, Toyota, T-Mobile, BMW, and Orange with solutions to understand messages, determine intent, and capture key contextual information.

Dr. Richard Socher

Dr. Richard Socher is probably best known for founding MetaMind, which Salesforce acquired in 2016, and for his contribution to the landmark ImageNet database. But in his most recent role as chief scientist and EVP at Salesforce (he just left to start a new company), Socher was responsible for bringing forth AI applications, from initial research to deployment.

Platform.ai

To help domain experts without AI expertise deploy AI products and services, Platform.ai offers computer vision without coding. It's an end-to-end rapid development solution that uses proprietary and patent-pending AI and HCI algorithms to visualize data sets and speed up labeling and training by 50-100 times. The goal is to empower companies to build good AI. Platform.ai can count big-name brands like GE, Claro, and Mattel as customers. The company's founders include chief scientist Jeremy Howard, who is also the founding researcher of deep learning education organization Fast.ai and a professor at the University of San Francisco.

Abeba Birhane and Dr. Vinay Prabhu

In their powerful work, "Large image datasets: A pyrrhic win for computer vision?", researchers Abeba Birhane, Ph.D. candidate at University College Dublin, and Dr. Vinay Prabhu, principal machine learning scientist at UnifyID, examined the problematic opacity, data collection ethics, labeling and classification, and consequences of large image data sets. These data sets, including ImageNet and MIT's 80 Million Tiny Images, have been cited hundreds of times in research. Birhane and Prabhu's work is under peer review, but it has already resulted in MIT voluntarily and formally withdrawing the Tiny Images data set on the grounds that it contains derogatory terms as categories, as well as offensive images, and that the nature of images in the data set makes remedying it unfeasible.

Dr. Dhruv Batra

An assistant professor in the School of Interactive Computing at Georgia Tech and a research scientist at Facebook AI Research, Dr. Dhruv Batra focuses primarily on machine learning and computer vision. His long-term research goal is to create AI agents that can perceive their environments, carry on natural-sounding dialogue, navigate and interact with their environment, and consider the long-term consequences of their actions. He's also cofounder of Caliper, a platform designed to help companies better evaluate the data science skills of potential machine learning, AI, and data science hires. And he helped create Eval.ai, an open source platform for evaluating and comparing machine learning (ML) and artificial intelligence (AI) algorithms at scale.

Ripcord

Ripcord offers a portfolio of physical robots that can digitize paper records, even removing staples. Employing computer vision, lifting and positioning arms, and high-quality RGB cameras that capture details at 600 dots per inch, the company's robots are able to scan at 10 times the speed of traditional processes and handle virtually any format. Courtesy of partnerships with logistics firms, Ripcord transports files from customers such as Coca-Cola, BP, and Chevron to its facilities, where it scans them and either stores them to meet compliance requirements or shreds and recycles them. The company's Canopy platform uploads documents to the cloud nearly instantly and makes them available as searchable PDFs.

Machine Learning Emissions Calculator

Authors Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres built an online calculator so anyone can understand the carbon emissions their research generates. Machine learning research demands high compute resources, and even as the field achieves key technological breakthroughs, the authors of the calculator believe transparency about the environmental impact of those achievements should be generalized and included in any paper, blog post, or publication about a given work. They also provide a simple template for standardized, easy reporting.
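
The underlying arithmetic of such a calculator is straightforward: energy consumed (hardware power draw multiplied by training time and data-center overhead) times the carbon intensity of the local grid. The sketch below is a rough version of that estimate; the default values are illustrative placeholders, not figures from the authors' tool.

```python
def estimate_co2e_kg(gpu_power_watts: float,
                     num_gpus: int,
                     hours: float,
                     pue: float = 1.5,
                     grid_kg_co2e_per_kwh: float = 0.4) -> float:
    """Rough CO2-equivalent estimate for a training run.

    energy (kWh) = power draw * number of GPUs * hours * data-center overhead (PUE)
    emissions    = energy * carbon intensity of the local grid
    All default values are illustrative assumptions.
    """
    energy_kwh = (gpu_power_watts / 1000.0) * num_gpus * hours * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# Example: 8 GPUs drawing 300 W each, trained for 72 hours.
print(f"{estimate_co2e_kg(300, 8, 72):.1f} kg CO2e")
```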

Niramai

Niramai developed noninvasive, radiation-free early-stage breast cancer detection for women of all age groups using thermal imaging technologies and AI-based analytics software. The company works with various government and nonprofit entities to enable low-cost health check-ups in rural areas in India. Prevention and early detection are key to improving the outcomes of cancers, but health centers are not always equipped with expensive screening machines. Because thermal imaging is safe, cost-effective, and easy to deploy, it can improve early screening in low-tech facilities around the world.

Dr. Pascale Fung

Dr. Pascale Fung is director of the Centre for AI Research (CAiRE) at the Hong Kong University of Science and Technology (HKUST). Among other accolades and honors, Fung represents the university at Partnership on AI and is an IEEE fellow because of her contributions to human-machine interactions. Through her work with CAiRE, she has helped create an end-to-end empathetic chatbot and a natural language processing Q&A system that enables researchers and medical professionals to quickly access information from the COVID-19 Open Research Dataset (CORD-19).

Dr. Timnit Gebru

Dr. Timnit Gebru continues to be one of the strongest voices battling racism, misogyny, and other biases in AI, not just in the actual technology but within the wider community of AI researchers and practitioners. She's the co-lead of Ethical AI at Google and cofounded Black in AI, a group dedicated to sharing ideas, fostering collaborations, and discussing initiatives to increase the presence of Black individuals in the field of AI. Her work includes Gender Shades, the landmark research exposing the racial bias in facial recognition systems, and Datasheets for Datasets, which aims to create a standardized process for adding documentation to data sets to increase transparency and accountability.

Relimetrics

Relimetrics develops full-stack computer vision and machine learning software for QA and process control in Industry 4.0 applications. Unlike many other competitors in the field of visual inspection, Relimetrics proposes an end-to-end flow that can be adopted by large groups, as well as smaller manufacturers. Industry 4.0 is associated with a plethora of technological stacks, but few are able to scale to large and small manufacturers across multiple industries yet remain simple enough for domain experts to deploy them, which is where Relimetrics comes in.

Dr. Daniela Braga, DefinedCrowd

DefinedCrowd creates high-quality training data for enterprises' AI and machine learning projects, including voice recognition, natural language processing, and computer vision workflows. The company crowdsources data labeling and more from hundreds of thousands of paid contributors and passes the massive curation on to its enterprise customers, which include several Fortune 500 companies. The startup's cofounder and CEO, Dr. Daniela Braga, has credentials in speech technology and crowdsourcing dating back nearly two decades, including nearly seven years at Microsoft that included work on Cortana. She has led DefinedCrowd through several rounds of funding, most recently a $50.5 million round in May 2020.

Flatfile

Flatfile wants to replace manual data janitoring for enterprises with its AI-powered data onboarding technology. Flatfile is content agnostic, so a company in essentially any industry can take advantage of its Portal and Concierge platforms, which are able to run on-premises or in the cloud. Flatfile has completed two funding rounds, one of which wrapped up in June 2020. As of September 2019, the company had attracted 30 customers with essentially no paid advertising. Less than a year later, it had 400 companies on its waitlist, ranging from startups up to publicly traded companies.

DoNotPay

DoNotPay, founded by British-born entrepreneur Josh Browder, offers over 100 bots to help consumers cancel memberships and subscriptions, fight corporations, file for benefits, sue robocallers, and more. While much of the company's automation engine is rules-based, it leverages third-party machine learning services to parse terms of service (ToS) agreements for problematic clauses, such as forced arbitration. To address challenges stemming from the pandemic, DoNotPay recently launched a bot that helps U.S.-based users file for unemployment. In the future, the startup plans to bring to market a Chrome extension that will work proactively for users in the background.

Top tech trends for 2021: Gartner predicts hyperautomation, AI and more will dominate business technology – TechRepublic

Operational resiliency is key as the COVID-19 pandemic continues to change how companies will do business next year.

There are nine top strategic technology trends that businesses should plan for in 2021 as the pandemic continues, according to Gartner's analysts. Their findings were presented on Monday at the virtual Gartner IT Symposium/Xpo Americas conference, which runs through Thursday.

Organizational plasticity is key to these trends. "When we talk about the strategic technology trends, we actually have them grouped into three different themes, which is people centricity, location independence, and resilient delivery," said Brian Burke, research vice president at Gartner. "What we're talking about with the trends is how do you leverage technology to gain the organizational plasticity that you need to form and reform into whatever's going to be required as we emerge from this pandemic?"

Here are the top nine trends, in no particular order. Their impact will extend beyond just the next year; companies can look to them for insight through 2025, per Gartner.

"We don't prioritize these. So we don't say that one is more important than the other," Burke explained. "Different organizations in different industries will prioritize the impact of the trends on them as being higher or lower, but when we look really across industries and across geographies and across these trends, we think that these are the most impactful trends that organizations generally are going to face over the next five years."

The Internet of Behaviors (IoB) is an emerging trend. The term "Internet of Behaviors" was first coined in Gartner's tech predictions for 2020. This is how organizations, whether government or private sector, are leveraging technology to monitor behavioral events and manage the data to upgrade or downgrade the experience to influence those behaviors. This is what Gartner calls the "digital dust" of people's daily lives. It includes facial recognition, location tracking, and big data.

Burke said, "In practical terms, it's real things like health insurance companies that are monitoring your fitness bands and your food intake, and the number of times you go to the gym, and those things to adjust your premiums."

Gartner predicts that by the end of 2025, more than half of the world's population will be subject to at least one IoB program. Burke said: "That might be a little bit of an understatement because when you think about the social credit system in China, you're already up to double digit percentages of people that are being monitored just with one implementation. There's all kinds of these things that are popping up here and there and everywhere."

The cybersecurity mesh technology trend enables people to access any digital asset securely, no matter where the asset is or where the person is located. Burke said: "The cybersecurity mesh is really how we've really reached a tipping point or inflection point with security, and that's causing us to really decouple policy enforcement from policy decision-making. Those were coupled in the past. What that allows us to do is it allows us to put the security perimeter around the individual as opposed to around the organization."

He added "The way that security professionals have traditionally thought about security is that inside of the organization is secure. Then we make sure that everything outside of your organization is secured through that security mechanism inside the organization, inside the firewall, so to speak."

With more digital assets outside of the firewall, particularly with cloud and more remote employees, the security perimeter needs to be around an individual and enforcement is handled through a cloud access security broker, so that policy enforcement is done at the asset itself, Burke explained.

Gartner predicts that by 2025, the cybersecurity mesh will support more than half of digital access control requests.

Another trend is total experience (TX). Last year, Gartner introduced multiexperience and this is a step beyond that. Multiexperience is multiple modes of access using different technologies, and TX ties together customer experience, employee experience, and user experience with the multiexperience environment, Burke said.

Organizations need a TX strategy as interactions become more mobile, virtual, and distributed, particularly as a result of the COVID-19 pandemic.

"The challenge is that in most organizations, those different disciplines are siloed. So what we're saying the basis of that prediction is that if you can bring together customer experience, employee experience, multi experience and user experience, the common notarial effect, common notarial innovation as a combination of strategies is harder to replicate than in a single strategy, according to Michael Porter. And we believe that, too. So you can bring those things together. That's where you'll gain the competitive advantage that will be realized through those experience metrics," Burke said.

Gartner predicts that organizations providing a TX will outperform competitors across key satisfaction metrics over the next three years.

This trend, intelligent composable business, is about leveraging, from an application perspective, packaged business capabilities, which can be thought of as chunks of functionality accessible through APIs, Burke said.

"They can be developed by vendors or provided by vendors or developed in-house. That kind of framework, that allows you to cobble together those package business capabilities, and then access data through a data fabric to provide it's configuration and rapid reconfiguration of business services that can be highly granular even personal acts."

"The intelligent composable business is about bringing together things like better decision making, better access to data that changes the way that we do things, which is required for flexible applications, and which we can deliver when we have this composable approach to application delivery," Burke said.

Hyperautomation is another key strategic trend for 2021. It was a top strategic trend last year as well, and it has been evolving.

"We've seen tremendous demand for automating repetitive manual processes and tasks; so robotic process automation was the star technology that companies were focused on to do that. That has been happening for a couple of years, but what we're seeing now is that it's moved from task based automation, to process based automation, so automating a number of tasks in a process, to functional automation across multiple processes and even moving towards automation at the business ecosystem level. So really, the breadth of automation has expanded as we go forward with hyperautomation," Burke explained.

Another strategic trend, anywhere operations, refers to an IT operating model that supports customers everywhere, enables employees everywhere, and manages the deployment of business services across distributed infrastructure.

Burke said that anywhere operations were always there, but the pandemic made them urgent.

"There always had been a movement towards location independent and providing services at the point where they're required. But back at least in America and Europe in March, suddenly all of these people working from home really raised the awareness of it, which was we have an immediate need to be able to support remote employees and most organizations were able to resolve that really quickly. But then we also are dealing with our customers, and our customers are remote and our products need to become a deliverable remotely as well."

With employees working from home, and salespeople working from home, talking to purchasing agents and buyers working from home, it ramped up the problem and the need to deliver services to people wherever they are and wherever they are required, Burke said.

Gartner predicts that by the end of 2023, 40% of organizations will have applied anywhere operations to deliver optimized and blended virtual and physical customer and employee experiences.

This trend involves providing engineering discipline to an organization because only 53% of projects make it from artificial intelligence (AI) prototypes to production, according to Gartner research.

"AI engineering is about providing the sort of engineering discipline, a robust structure that will emphasize having AI projects that are delivered in a consistent way to ensure that they can scale, move into production, all of those kinds of things. So it's really bringing the engineering discipline to AI for end user organizations. So when you talk about large vendors, yes, they've been delivering successfully for the past quite a few years, but end user organizations are needing to move out of the experimental stage with AI and move into a robust delivery model and that's really what AI engineering's about," Burke said.

Distributed cloud is another technology trend and it involves the distribution of public cloud services to different physical locations while the operation, governance, and evolution of the services are the responsibility of the public cloud provider.

Gartner predicts that by 2025, most cloud service platforms will provide at least some distributed cloud services that begin at the point of need.

Privacy is more important than ever as global data protection legislation matures, and Gartner predicts that by 2025, half of large organizations will implement privacy-enhancing computation for processing data in untrusted environments and multiparty data analytics use cases. Privacy-enhancing computation protects data in use while maintaining secrecy or privacy.

Burke said the actual number of companies that will use privacy-enhancing computation is tough to assess. "That's a difficult one to gauge because what we've seen over the years of course, is that a lot of organizations have not focused as much attention as they probably require on privacy. But we think that what's happening now is that privacy legislation globally is really starting to take hold. So when privacy legislation is introduced, it takes a while for enforcement to catch up to legislation."

He added: "Privacy is going to be an issue for organizations going forward. The importance is going to increase, but also the opportunities are going to be increased to be able to use trusted third parties for analytics and share data across priorities without exposing the private details in that data and that kind of thing."

"One of the things that's really an underlying premise of all of our research, including the top technology trends, is that we're not going to come out of the pandemic and go back to what we were," Burke said. "We're going to come out of the pandemic, but we're going to move forward on a different trajectory. So really, trying to anticipate what that trajectory is going to be for your organization helps to guide you on how you're going to emerge from the pandemic on that different trajectory. So these trends are focused on organizational agility because that's what's going to be successful as we step into a new future phase, hopefully sometime soon."

AI engineering is one of the key strategic trends Gartner predicts for 2021.

Image: iStock

Loyal Markets on the FX Market and AI Technology – GlobeNewswire

BELIZE CITY, Belize, Sept. 04, 2020 (GLOBE NEWSWIRE) -- With forex trading growing in popularity along with the artificial intelligence revolution, companies like Loyal Markets are playing their part in helping the industry realise the full potential of artificial intelligence in trading.

Both the forex and technology industries are changing and accelerating at an unprecedented rate. As regulation shifts to keep up with the growth, brokers are competing to unveil the latest technological advancements. As such, most have now expanded their offerings to include on-the-go trading through mobile apps. The challenge in the competitive field of forex trading, therefore, is to create a solution that stands out from the pack, one that simultaneously adheres to regulatory changes while also meeting the needs of a new trading generation.

Loyal Markets has been using artificial intelligence to create a proprietary system that combines the machine learning of AI with the discretion of humans to analyse trading insights and to find trading patterns and trends with high odds of success.

Some of the most valuable information for retail investors in forex trading is currency patterns and trends. Investors with Loyal Markets can now select various AI trading solutions from the trading platform to assist in their trading decisions.

"With the Intraday Pattern Feed and Trend Prediction Engine, using artificial intelligence to trade forex currency is now significantly simpler," said Will Colmore. "Retail traders and independent investment advisors can use the same technology as Wall Street firms to find patterns early."

This technology can also provide insights on the percentage of outcomes that confirm successful trade signals in the past. Pre-calculated through backtesting, this information enables Loyal Markets' Fund Management team to make informed decisions about the pattern using artificial intelligence's predictions.
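
To illustrate what a backtested "percentage of outcomes that confirm successful trade signals" might look like in code, here is a toy Python sketch that counts how often a hypothetical buy signal was followed by a higher price. The signal rule and the price series are invented for illustration and have no connection to Loyal Markets' proprietary system.

```python
def signal_win_rate(prices, signals, horizon=5):
    """Fraction of buy signals followed by a higher price `horizon` steps later.

    prices  : list of historical closing prices
    signals : list of booleans, True where the (hypothetical) pattern fired
    """
    wins = total = 0
    for i, fired in enumerate(signals):
        if fired and i + horizon < len(prices):
            total += 1
            if prices[i + horizon] > prices[i]:
                wins += 1
    return wins / total if total else 0.0

# Toy data: the "signal" fires whenever the price dips below its previous value.
prices = [1.10, 1.09, 1.11, 1.08, 1.12, 1.13, 1.07, 1.10, 1.11, 1.12, 1.14]
signals = [False] + [prices[i] < prices[i - 1] for i in range(1, len(prices))]
print(f"Historical win rate: {signal_win_rate(prices, signals, horizon=3):.0%}")
```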

About Loyal Markets

Loyal Markets is one of the world's leading brokerage firms. The company's mission is to expand internationally and become a global financial powerhouse. Uniting a workforce of specialized investment professionals globally, Loyal Markets also boasts comprehensive administrative support, state-of-the-art artificial intelligence and excellent risk control protocols.

Media Contact
Company: Loyal Markets
Contact Person: Will Colmore
Email: contact@loyalmarkets.com
Website: https://www.loyalmarkets.com
Telephone: +501 4892 5899
Address: 1782 Coney Dr, Belize City, Belize

3 common jobs AI will augment or displace – VentureBeat

It's clear artificial intelligence (AI) and automation will dramatically affect the job market, but there are conflicting ideas on just how soon this will happen. Some believe it's imminent, possibly fueled by developments like the Japanese insurance company replacing over 30 employees with robots, but it's not that cut and dried. Many of the jobs that will be automated are the same jobs companies have been outsourcing for years: customer support, data entry, accounting, etc. Others are jobs they simply cannot fill due to decreases in headcount.

Either way, as transactions and expectations for real-time output increase, businesses are struggling to meet this demand and must digitize their operations to remain competitive. It's the future of human labor. It's not black and white, or good and evil; it's simply the natural cycle of automation, just like we saw in the industrial revolution and will see again after AI becomes commonplace.

Adoption of AI and automation will be highest in regulated industries and those that must process thousands of transactions and customer requests daily. They are industries like banking, financial services, insurance, and health care, those with repetitive processes like copying and pasting that do not really require human intelligence. It's the types of jobs and tasks within an organization that are repeatable and admin-heavy that will be automated first. In fact, in three examples in particular, we're already seeing automation play a big role.

The cost of fraudulent claims across all lines of insurance amounts to $80 billion a year, and well over half of insurers predict an increase in such fraud. Yet, despite the well-known pressures insurers face to correctly verify claims, they also get a bad rap for not doing so fast enough. That's why the insurance industry is looking to advances in AI to both reduce fraudulent claims and improve customer service by speeding up the process.

Using machine learning, a subfield of AI, insurers can auto-validate policies, matching key facts from the claim to the policy and using cognitive analysis to determine whether the claim should be paid. These technologies can even transmit data into the system for downstream payment automatically and in a fraction of the time it would take a human to complete the same task. Humans are then elevated to tasks that really require their human intelligence and their customer service expertise.
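
As a rough, hypothetical illustration of the auto-validation idea, the sketch below matches a few key facts from a claim against the policy and decides whether to pay or escalate. In a production system the decision would typically come from a trained model scoring far more signals; the hand-written checks and field names here are stand-ins, not any insurer's actual pipeline.

```python
def validate_claim(claim: dict, policy: dict) -> str:
    """Toy claim triage: match key facts to the policy, flag mismatches for human review."""
    checks = [
        claim["policy_id"] == policy["policy_id"],
        claim["loss_type"] in policy["covered_losses"],
        claim["amount"] <= policy["coverage_limit"],
        policy["start_date"] <= claim["loss_date"] <= policy["end_date"],
    ]
    if all(checks):
        return "auto-approve for payment"
    # Anything that fails a check is escalated to a human adjuster.
    return "route to human review"

policy = {"policy_id": "P-1001", "covered_losses": {"collision", "theft"},
          "coverage_limit": 20000, "start_date": "2020-01-01", "end_date": "2020-12-31"}
claim = {"policy_id": "P-1001", "loss_type": "collision",
         "amount": 4500, "loss_date": "2020-06-15"}
print(validate_claim(claim, policy))   # auto-approve for payment
```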

Consumers have become accustomed to talking to bots, whether it's asking Siri to find the closest dry cleaner or asking Amazon's Alexa to add bananas to the grocery list. And the financial services and banking industries are no exception. More banks are reducing manual service efforts by offloading repetitive inquiries to AI-powered chatbots.

They're training these bots on historical conversations so they can perform the same tasks as a human agent, conversing with customers to determine their needs and then, in the best scenarios, actually executing a business process to deliver against their intent. More complex conversations are escalated to a human agent, who now has the time to handle them with care; meanwhile, the chatbots are working in the background to learn from the outcome. Customers are happy because their needs are taken care of seamlessly and quickly, and banks are able to reduce the backlog of customer service requests.
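
A minimal sketch of the escalation pattern just described might look like the following, with a trivial keyword matcher standing in for the intent classifier a bank would actually train on its historical conversations; the intents, keywords, and confidence threshold are all illustrative assumptions.

```python
# Toy intent "model": keyword matching stands in for a classifier trained on past chats.
INTENTS = {
    "check_balance": ["balance", "how much"],
    "reset_password": ["password", "locked out"],
    "report_fraud": ["fraud", "stolen", "unauthorized"],
}

def classify(message: str):
    """Return (intent, confidence); confidence here is just keyword overlap."""
    text = message.lower()
    best, score = None, 0.0
    for intent, keywords in INTENTS.items():
        hits = sum(kw in text for kw in keywords)
        if hits / len(keywords) > score:
            best, score = intent, hits / len(keywords)
    return best, score

def handle(message: str) -> str:
    intent, confidence = classify(message)
    if intent is None or confidence < 0.5:
        # Low confidence or unknown request: hand off to a human agent.
        return "escalate to human agent"
    return f"run automated flow: {intent}"

print(handle("My card was stolen and there are fraud charges"))    # run automated flow: report_fraud
print(handle("Can you explain this weird fee from last Tuesday?")) # escalate to human agent
```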

The ultimate goal of every health care plan administrator is to ensure claims are received and processed accurately and on time. The sheer volume of claims makes this a difficult task. Making it harder, claims are submitted in various formats (fax, email, handwritten, etc.) and must be put into a standardized format before they're processed. In fact, billing and insurance-related paperwork costs an estimated $375 billion annually.

Using advanced AI/machine learning technologies, health care providers can reduce the amount of time it takes to process a claim and respond to patients and providers. Not only does this improve patient satisfaction, it also lessens errors that can result in hefty financial losses and regulatory fines. While it won't replace all health care administrators, it will help them redirect their resources toward critical, customer-facing activities.

According to a recent report by McKinsey Global Institute, almost every job has the potential to be automated, but more often than not, these jobs will require a combination of automation and human intelligence. There will be a tsunami of job loss relating to certain tasks, but this will push people into higher value work. Data entry may be automated, but creative thinking won't be replaced by bots. AI is creating new efficiencies that will ultimately change the types of jobs that are in demand. This reality is happening more quickly in some industries than others, but it is unequivocally transforming the way work gets done.

China Wants to Lead the World on AI. What Does That Mean for America? – The National Interest

Years ago, the thought of using software to fight a deadly pathogen might have seemed far-fetched. Today, it's a reality. The Coronavirus pandemic has caused monumental shifts in the use and deployment of artificial intelligence (AI) around the world.

Of those now using AI to fight Coronavirus, none are more prominent than China. From software that diagnoses the symptoms of Coronavirus to algorithms that identify and compile data on individuals with high temperatures vis-à-vis infrared cameras, China is showcasing the potential applications of AI. But Beijing is also demonstrating its willingness to leverage the technology to solve many of its problems.

To understand the potential benefits and perils, we need to delve a bit deeper into the subject of AI itself. Artificial intelligence essentially falls into two categories: narrow and general. Narrow AI is a type of machine learning that is limited to specifically defined tasks, while general AI refers to totally autonomous intelligence akin to human cognition. General AI remains a distant dream for many, but the real-world implications of narrow AI exist in the present, and China is working diligently to become a world leader in it.

In his book AI Superpowers: China, Silicon Valley, and the New World Order, former Microsoft executive and Google China president Kai-Fu Lee describes how the country began rapid development of AI as a response to AlphaGo, a software program that successfully bested the world's top player in the ancient game of Go back in 2017. That victory, Lee explains, showcased to China's Communist Party (CCP) research and technology with infinite potential.

The revelation was a sea-change. In its 2019 Annual Report, the U.S.-China Economic and Security Review Commission noted that the Next Generation AI Development Plan released in 2017 by China's State Council marked a shift in China's approach to AI, from pursuing specific applications to prioritizing AI as foundational to overall economic competitiveness.

The results have been rapid and pronounced. China is still considered to be second in the race to AI (behind the U.S.), but it is quickly gaining traction. As the United Nations World Intellectual Property Organization (WIPO) noted last year, China leads in AI-related publications and patent applications originating from public research institutions, and the gap is shrinking between the U.S. and China in patent requests originating from the private sector.

And because the aggregation of vast swathes of data is what drives the most effective artificial intelligence, China is in a unique position to persevere. With the world's largest population and close to no data privacy protections, the PRC has the potential to develop the world's best AI products.

Beijing is also working hard to maintain its freedom of action in this domain. Back in March, China tried, and nearly succeeded, in installing its candidate as head of the WIPO, a move that would essentially have assured that its lengthy track record of violating intellectual property rights, theft and espionage would not come with any consequences.

Those practices are already raising international hackles. In April of 2020, Bloomberg reported that electric carmaker Tesla is now seeking further legal action to analyze the source code of a competitor's product in China after a former Tesla employee allegedly left the company in 2018 for the Chinese startup, carrying with him secrets from Tesla's self-driving AI, Autopilot.

But the CCP is also harnessing AI to strengthen its authoritarian state. Against the backdrop of the coronavirus pandemic, the Chinese government has stepped up its repressive domestic practices, including its persecution and detention of Uyghur Muslims in Western China and a broad crackdown on Hong Kong. Worryingly, Chinese advances in AI seem to be empowering these practices, as well as making them more effective.

These dynamics should matter a great deal to the United States, which has stepped up its strategic competition with China in earnest in recent months. China's activism on the AI front, and its attention to this emerging technology, has made abundantly clear that the PRC places tremendous value on dominating the field of AI. Washington should think deeply about what that would mean, in both a political and a technological sense. And then it should get just as serious in this sphere as well.

Ryan Christensen is a researcher at the American Foreign Policy Council in Washington, DC.

In the AI Age, Being Smart Will Mean Something Completely Different – Harvard Business Review

Executive Summary

To date, many of us have achieved success by being smarter than other people as measured by grades and test scores, beginning from our early days in school. The smart people were those that received the highest scores by making the fewest mistakes.

AI will change that because there is no way any human being can outsmart, for example, IBM's Watson, at least without augmentation. What is needed is a new definition of being smart, one that promotes higher levels of human thinking and emotional engagement.

Andrew Ng has likened artificial intelligence (AI) to electricity in that it will be as transformative for us as electricity was for our ancestors. I can only guess that electricity was mystifying, scary, and even shocking to them, just as AI will be to many of us. Credible scientists and research firms have predicted that the likely automation of service sectors and professional jobs in the United States will be more than 10 times as large as the number of manufacturing jobs automated to date. That possibility is mind-boggling.

So, what can we do to prepare for the new world of work? Because AI will be a far more formidable competitor than any human, we will be in a frantic race to stay relevant. That will require us to take our cognitive and emotional skills to a much higher level.

Many experts believe that human beings will still be needed to do the jobs that require higher-order critical, creative, and innovative thinking and the jobs that require high emotional engagement to meet the needs of other human beings. The challenge for many of us is that we do not excel at those skills because of our natural cognitive and emotional proclivities: We are confirmation-seeking thinkers and ego-affirmation-seeking defensive reasoners. We will need to overcome those proclivities in order to take our thinking, listening, relating, and collaborating skills to a much higher level.

I believe that this process of upgrading begins with changing our definition of what it means to be smart. To date, many of us have achieved success by being smarter than other people as measured by grades and test scores, beginning in our early days in school. The smart people were those that received the highest scores by making the fewest mistakes.

AI will change that because there is no way any human being can outsmart, for example, IBM's Watson, at least without augmentation. Smart machines can process, store, and recall information faster and better than we humans. Additionally, AI can pattern-match faster and produce a wider array of alternatives than we can. AI can even learn faster. In an age of smart machines, our old definition of what makes a person smart doesn't make sense.

What is needed is a new definition of being smart, one that promotes higher levels of human thinking and emotional engagement. The new smart will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating, and learning. Quantity is replaced by quality. And that shift will enable us to focus on the hard work of taking our cognitive and emotional skills to a much higher level.

We will spend more time training to be open-minded and learning to update our beliefs in response to new data. We will practice adjusting after our mistakes, and we will invest more in the skills traditionally associated with emotional intelligence. The new smart will be about trying to overcome the two big inhibitors of critical thinking and team collaboration: our ego and our fears. Doing so will make it easier to perceive reality as it is, rather than as we wish it to be. In short, we will embrace humility. That is how we humans will add value in a world of smart technology.

Hoffman-Yee research grants focus on AI | Stanford News – Stanford University News

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) today announced six inaugural recipients of the Hoffman-Yee Research Grant Program, a multiyear initiative to invest in research that leverages artificial intelligence (AI) to address real-world problems.

Computer Science Associate Professor Karen Liu and collaborators will research robotic devices to aid in human locomotion using their Hoffman-Yee Grant. (Image credit: Christophe Wu)

The projects were selected for their boldness, ingenuity and potential for transformative impact. The grantees comprise interdisciplinary teams of faculty members, postdoctoral scholars and graduate students spanning the Schools of Business, Education, Engineering, Humanities and Sciences, Law and Medicine.

Philanthropists Reid Hoffman and Michelle Yee are providing foundational support for the grants.

The Hoffman-Yee Research Grant Program is helping to drive new collaborations across campus, harnessing AI to benefit humanity, said Stanford President Marc Tessier-Lavigne. Technological advancements must be inextricably linked to research about their potential societal impacts. I am very grateful to Reid and Michelle for their vision and extraordinary generosity in creating this program.

HAI received submissions from 22 different departments and all of Stanford's seven schools. Each of the six teams selected will receive significant funding to enable ambitious research by assisting with hiring students and postdocs, procuring data and equipment and accessing computational and other resources.

These projects will initiate and sustain exciting new collaborations across the university, said John Etchemendy, Denning Co-Director of HAI and the Patrick Suppes Family Professor in the School of Humanities and Sciences. The interdisciplinary teams each apply AI in a novel context to address challenges whose solutions could bring significant benefits to human wellbeing.

The six projects, which were submitted for review before the COVID-19 pandemic, will push the boundaries of how AI can advance education, health care and government. Project goals range from advancing AI technology through a better understanding of human learning and creating more adaptable, collaborative AI agents for a wide range of assistive tasks, to applying AI to facilitate and improve student learning, elder care and government operations, and creating tools for understanding the history and evolution of concepts.

Reid Hoffman (Image credit: David Yellen)

Michelle and I are delighted to help enable Stanford HAI to diversify and scale the research community applying artificial intelligence toward a range of major societal issues, said Reid Hoffman. Extraordinary opportunities for discovery and innovation will result from uniting technologists, humanists and educators together to take on pressing challenges that bridge their respective fields.

An entrepreneur, executive and investor, Reid Hoffman plays an integral role in building many of today's leading consumer technology businesses and is chair of the HAI Advisory Council. In 2003 he co-founded LinkedIn, the world's largest professional networking service. In 2009 he joined Greylock Partners. Reid serves on the boards of multiple companies and nonprofits, including Kiva, Endeavor, CZI Biohub, Do Something and the MacArthur Foundation's 100&Change. Michelle Yee earned her undergraduate degree from Stanford and her doctorate in education from the University of San Francisco.

The Hoffman-Yee Research Grant Program provides each award recipient an initial year of research funding, which can potentially be extended to three years. Each of the six research projects was reviewed carefully for ethical risks and benefits to society and subgroups within society as well as the global community.

While the algorithms that drive artificial intelligence may appear to be neutral, the data and applications that shape the outcomes of those algorithms are not. What matters are the people building it, why they're building it and for whom. AI research must take into account its impact on people, said Fei-Fei Li, Sequoia Professor of Computer Science, Stanford and Denning Co-Director of Stanford HAI. That's why these research projects are so promising. Each of them can make a significant difference in the lives of ordinary people, supporting HAI's purpose to improve the human condition.

The projects and principal investigators are:

Intelligent Wearable Robotic Devices for Augmenting Human Locomotion

PI: Karen Liu, Associate Professor of Computer Science. Faculty, postdoctoral scholars and graduate students from Mechanical Engineering, Bioengineering, Orthopedic Surgery and Medicine

Falling injuries among the elderly cost the U.S. health system $50 billion in 2015 alone, while causing immeasurable suffering and loss of independence. This research team seeks to develop wearable robotic devices, driven by an AI system, that both aid human locomotion and predict and prevent falls among older people.

AI Tutors to Help Prepare Students for the 21st Century Workforce

PI: Christopher Piech, Assistant Professor of Computer Science Education. Faculty and postdoctoral scholars from Education, Psychology and Computer Science

The project aims to demonstrate a path to effective, inspiring education that is accessible and scalable. The team will create new AI systems that model and support learners as they work through open-ended activities like writing, drawing, working on a science lab, or coding. The research will monitor learners' motivation, identity and competency to improve student learning. Tested solutions will be implemented in code.org, brick-and-mortar schools, virtual science labs and beyond.

Toward Grounded, Adaptive Communication Agents

PI: Christopher Potts, Professor of Linguistics and, by courtesy, Computer Science. Faculty and postdoctoral scholars from Electrical Engineering, Philosophy, Psychology, Linguistics and Law

This project aims to develop next-generation, language-based virtual agents capable of collaborating with humans on meaningful, challenging tasks such as caring for patients. The research could be particularly impactful for assistive technologies, where a human's behavior and language use will change over repeated interactions with their personal agent.

Curious, Self-aware AI Agents to Build Cognitive Models and Understand Developmental Disorders

PI: Daniel Yamins, Assistant Professor of Psychology and Computer Science. Faculty, postdoctoral scholars and graduate students affiliated with Psychology, the Graduate School of Education, Computer Science and the School of Medicine

Human children learn about their world and other people as they explore. This project will bring together tools from AI and the cognitive and clinical sciences, creating playful, socially interactive artificial agents and improving the understanding and diagnosis of developmental variability, including Autism Spectrum Disorder. In the process, the team hopes to gain insights into building robots that can handle new environments and interact naturally in social settings.

Reinventing Government with AI: Modern Tax Administration

PI: Jacob Goldin, Associate Professor of Law. Faculty, postdoctoral scholars and graduate students from Law, Business, Engineering and Economics

This team seeks to demonstrate how AI-driven, evidence-based learning can benefit U.S. government agencies by driving efficiencies and improving the delivery of services. The team proposes an active-learning system that uses an AI algorithm to decide which tax returns should be prioritized for auditing, making tax collection more effective and fairer. This research will have implications for a wide range of other governmental contexts, including environmental and health compliance.
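
As a rough illustration of the active-learning loop described above, the sketch below uses uncertainty sampling: train on the returns already audited, then queue up the returns the model is least sure about. The features, model choice and batch size are hypothetical stand-ins, not the Stanford team's actual system.

```python
# Minimal uncertainty-sampling sketch for audit prioritization.
# All data, features and thresholds are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for 10,000 tax returns; 200 have known audit outcomes.
X_all = rng.normal(size=(10_000, 5))
labeled_idx = rng.choice(len(X_all), size=200, replace=False)
y_labeled = (X_all[labeled_idx, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X_all[labeled_idx], y_labeled)

# Score the unaudited pool and pick the returns the model is least certain about:
# auditing those teaches the model the most on the next round.
unlabeled_idx = np.setdiff1d(np.arange(len(X_all)), labeled_idx)
probs = model.predict_proba(X_all[unlabeled_idx])[:, 1]
uncertainty = np.abs(probs - 0.5)
next_audits = unlabeled_idx[np.argsort(uncertainty)[:50]]
print("Returns to audit next:", next_audits[:10])
```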

An AI Time Machine for Investigating the History of Concepts

PI: Dan Jurafsky, Professor of Humanities, Linguistics and Computer Science. Faculty from English and Digital Humanities, Philosophy, Economics, French, Political Science, History of Science, Sociology, Psychology and Biomedical Data Science

This research will develop new AI technology to examine historical texts in multiple languages to help humanists and social scientists better interpret history and society. Researchers will investigate key questions on morality, immigration, bias, aesthetics and more. Using AI to help analyze how ideas change over time and how thought shapes society could be a breakthrough contribution not only to AI but to the humanities as well.

Read the rest here:

Hoffman-Yee research grants focus on AI | Stanford News - Stanford University News

Why emotion recognition AI can’t reveal how we feel – The Next Web

The growing use of emotion recognition AI is causing alarm among ethicists. They warn that the tech is prone to racial biases, doesn't account for cultural differences, and is used for mass surveillance. Some argue that AI isn't even capable of accurately detecting emotions.

A new study published in Nature Communications has shone further light on these shortcomings.

The researchers analyzed photos of actors to examine whether facial movements reliably express emotional states.

They found that people use different facial movements to communicate similar emotions. One individual may frown when they're angry, for example, but another would widen their eyes or even laugh.

The research also showed that people use similar gestures to convey different emotions, such as scowling to express both concentration and anger.

Study co-author Lisa Feldman Barrett, a neuroscientist at Northeastern University, said the findings challenge common claims around emotion AI:

"Certain companies claim they have algorithms that can detect anger, for example, when what they really have, under optimal circumstances, are algorithms that can probably detect scowling, which may or may not be an expression of anger. It's important not to confuse the description of a facial configuration with inferences about its emotional meaning."

The researchers used professional actors because they have a functional expertise in emotion: their success depends on them authentically portraying a characters feelings.

The actors were photographed performing detailed, emotion-evoking scenarios. For example: "He is a motorcycle dude coming out of a biker bar just as a guy in a Porsche backs into his gleaming Harley," and "She is confronting her lover, who has rejected her, and his wife as they come out of a restaurant."

The scenarios were evaluated in two separate studies. In the first, 839 volunteers rated the extent to which the scenario descriptions alone evoked one of 13 emotions: amusement, anger, awe, contempt, disgust, embarrassment, fear, happiness, interest, pride, sadness, shame, and surprise.

Next, the researchers used the median rating of each scenario to classify them into 13 categories of emotion.
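
A minimal sketch of that classification step might look like the following; the scenario names, rating scale and scores are invented for illustration, since the study's raw data are not reproduced in this article.

```python
# Assign each scenario to the emotion with the highest median volunteer rating.
# Scenario names and scores below are invented for illustration.
import numpy as np

EMOTIONS = ["amusement", "anger", "awe", "contempt", "disgust", "embarrassment",
            "fear", "happiness", "interest", "pride", "sadness", "shame", "surprise"]

rng = np.random.default_rng(42)
# ratings[scenario][emotion] -> volunteer scores on a 1-9 scale
ratings = {"biker_bar": {e: rng.integers(1, 10, size=20) for e in EMOTIONS},
           "restaurant_confrontation": {e: rng.integers(1, 10, size=20) for e in EMOTIONS}}

for scenario, by_emotion in ratings.items():
    medians = {e: float(np.median(scores)) for e, scores in by_emotion.items()}
    label = max(medians, key=medians.get)
    print(f"{scenario}: categorized as {label} (median {medians[label]})")
```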

The team then used machine learning to analyze how the actors portrayed these emotions in the photos.

This revealed that the actors used different facial gestures to portray the same categories of emotions. It also showed that similar facial poses didn't reliably express the same emotional category.

The team then asked additional groups of volunteers to assess the emotional meaning of each facial pose alone.

They found that the judgments of the poses alone didn't reliably match the ratings of the facial expressions when they were viewed alongside the scenarios.

Barrett said this shows the importance of context in our assessments of facial expressions:

When it comes to expressing emotion, a face does not speak for itself.

The study illustrates the enormous variability in how we express our emotions. It also further justifies the concerns around emotion recognition AI, which is already used in recruitment, law enforcement, and education.

Go here to read the rest:

Why emotion recognition AI can't reveal how we feel - The Next Web

Artificially inflated: It’s time to call BS on AI – InfoWorld

First there was "open washing," the marketing strategy for dressing up proprietary software as open source. Next came "cloud washing," whereby datacenter-bound software products masqueraded as cloud offerings. The same happened to big data, with petabyte-deprived enterprises pretending to be awash in data science.

Now we're into AI-washing -- an attempt to make dumb products sound smart.

Judging by the number of companies talking up their amazing AI projects, the entire Fortune 500 went from bozo status to the Mensa society. Not to rain on this parade, but it's worth remembering that virtually all so-called AI offerings today should be defined as "artificially inflated" rather than "artificially intelligent."

As tweeted by Michael McDonough, global director of economic research and chief economist, Bloomberg Intelligence, the number of mentions of artificial intelligence on earnings calls has exploded since mid-2014:

It's possible that in the last three years, the state of AI has accelerated incredibly fast so that nearly every enterprise now has something worthwhile to say on the subject. More likely, everyone wants on the AI bandwagon, and in the absence of mastery, they're marketing.

AI is, after all, incredibly difficult. Yann LeCun, director of AI research at Facebook, said at a recent O'Reilly conference that "machines need to understand how the world works, learn a large amount of background knowledge, perceive the state of the world at any given moment, and be able to reason and plan."

Most companies have neither the expertise on staff nor the scale to pull this off. Or, at least, not to an extent worthy of talking about AI initiatives on earnings calls.

Developers recognize this even if their earnings-touting executives don't. For example, as an extensive, roughly 8,500-strong developer survey from VisionMobile uncovers, less than one quarter of developers think AI-driven chatbots are currently worthwhile. While chatbots aren't the only expression of AI, they're one of the most visible examples of hype getting out in front of reality.

I witnessed the sound and fury of AI hype firsthand at Mobile World Congress in Barcelona, where I participated in a panel ("The Future of Messaging: Engagement, eCommerce and Bots") that explored the current and future state of AI as applied to messaging and chatbots. Executives from Google, PayPal, and Sprint joined me, and it quickly became clear that the promise of AI has yet to be realized and won't be for some time. Instead of overpromising a near-term AI future, the session seemed to conclude, it would be best for enterprises to focus on small-scale AI projects that deliver simple but effective consumer value.

For example, machine learning/AI can be used to interpret patterns in X-rays, as Dr. Ziad Obermeyer of Harvard Medical School and Brigham and Women's Hospital and Ezekiel Emanuel, Ph.D., of the University of Pennsylvania, posit in a New England Journal of Medicine article. Deep, mind-blowing AI? Nope. Effective (and likely to render a big chunk of the radiologist population under-employed)? Likely.

The trick to making AI work well is data: lots and lots of data. Most companies simply aren't in a position to gather, create, or harness that data. Google, Apple, Amazon, and Facebook, by contrast, can and do, and yet anyone who has used Amazon's Echo or Apple's Siri knows that the output of their mountains of data is still relatively basic. Each of these companies sees the potential, however, and is ramping up efforts to collect and annotate data. Amazon, for example, has 15,000 to 20,000 low-paid people working behind the scenes on labeling snippets of data. Those people are building toward an AI-driven future, but it's still the future.

So let's not get ahead of ourselves. Everyone may be talking about AI, but it's mostly artificial with precious little intelligence. That's OK, so long as we recognize it as such and build simple services that deliver on their promise.

In sum, we don't need an AI revolution. Evolution will do nicely.

The rest is here:

Artificially inflated: It's time to call BS on AI - InfoWorld

AI, AI, Captain! How the Mayflower Autonomous Ship will cross the Atlantic – VentureBeat

While self-driving cars have hogged the headlines for the past few years, other forms of autonomous transport are starting to heat up.

This month, IBM and Promare, a U.K.-based marine research and exploration charity, will trial a prototype of an artificial intelligence (AI)-powered maritime navigation system ahead of a September 16th venture to send a crewless ship across the Atlantic Ocean on the very same route the original Mayflower traversed 400 years ago.

The original Mayflower ship, which in 1620 carried the first English settlers to the U.S., traveled from Plymouth in the U.K. to what is today known as Plymouth, Massachusetts. Mayflower version 1.0 was a square-rigged sail ship, like many merchant vessels of the era, and relied purely on wind and human navigation techniques to find its way to the New World. The Mayflower Autonomous Ship (MAS), on the other hand, will be propelled by a combination of solar- and wind-generated power, with a diesel generator on board as backup.

Moreover, while the first Mayflower traveled at a maximum speed of around 2.5 knots and took some two months to reach its destination, the upgraded version moves at a giddy 20 knots and should arrive in less than two weeks.

The mission, first announced back in October, aims to tackle all the usual obstacles that come with navigating a ship through treacherous waters, except without human intervention.

The onboard AI Captain, as it's called, can't always rely on GPS and satellite connectivity, and speed is integral to processing real-time data. This is why all the AI and navigational smarts must be available locally, making edge computing pivotal to the venture's success.

"Edge computing is critical to making an autonomous ship like the Mayflower possible," noted Rob High, IBM's CTO for edge computing. "The ship needs to sense its environment, make smart decisions about the situation, and then act on these insights in the minimum amount of time, even in the presence of intermittent connectivity, and all while keeping data secure from cyberthreats."

The team behind the new Mayflower has been training the ship's AI models for the past few years, using millions of maritime images collected from cameras in the Plymouth Sound, in addition to other open source data sets.

For machine learning prowess, the ship uses an IBM Power AC922 system of the kind found in some of the world's biggest AI supercomputers. Alongside IBM's PowerAI Vision, the Mayflower's AI Captain is built to detect and identify ships, buoys and other hazards, including debris, and to make decisions about what to do next.
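
As a generic illustration of that detection step, the sketch below runs an off-the-shelf object detector over a single camera frame. It uses torchvision's pretrained Faster R-CNN purely as a stand-in; the Mayflower's actual PowerAI Vision models, classes and thresholds are not described in this article.

```python
# Generic object-detection sketch for one camera frame.
# torchvision's pretrained Faster R-CNN is a stand-in, not the ship's actual model.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

frame = torch.rand(3, 480, 640)       # placeholder camera frame (C, H, W), values in [0, 1]
with torch.no_grad():
    detections = model([frame])[0]    # dict of boxes, labels and scores for this frame

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.7:                   # keep only confident detections
        print(f"class {int(label)} at {box.tolist()} (score {float(score):.2f})")
```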

For example, if the MAS encounters a cargo ship that has shed some of its load after colliding with another vessel, the AI Captain will be called into action and can use any combination of onboard sensors and software to circumvent the obstacles. The radar can detect hazards in the water ahead, with cameras providing additional visual data on objects in the water.

Moreover, an automatic identification system (AIS) can tap into specific information about any vessels ahead, including their class, weight, speed, cargo type, and so on. Radio broadcast warnings from the cargo ship can also be accepted and interpreted, with the AI Captain ready to decide on a change of course.

Other data the AI Captain can tap into includes the navigation system and nautical chart server, which provide the current location, speed, course, and route of the ship, as well as attitude sensors for monitoring the state of the sea and a fathometer for water depth.

The onboard vehicle management system also provides crucial data, such as the battery charge level and power consumption, that can be used to determine the best route around a hazardous patch of ocean, with weather forecasts informing the final decision.
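
Pulling those inputs together, a decision layer could be sketched as a simple set of rules like the ones below. The field names, thresholds and actions are hypothetical; the real AI Captain's decision logic is far richer.

```python
# Hypothetical rule-based course decision fusing the sensor inputs described above.
def choose_action(radar_hazard_m, ais_closing_speed_kn, battery_pct, swell_m):
    """Return a simple course decision from fused sensor readings (all values hypothetical)."""
    if radar_hazard_m is not None and radar_hazard_m < 500:
        return "alter course: hazard within 500 m"
    if ais_closing_speed_kn is not None and ais_closing_speed_kn > 5:
        return "alter course: vessel ahead closing fast"
    if battery_pct < 20:
        return "reduce speed: conserve power"
    if swell_m > 4:
        return "reroute: heavy seas ahead"
    return "hold course"

print(choose_action(radar_hazard_m=320, ais_closing_speed_kn=2, battery_pct=64, swell_m=1.5))
```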

Crucially, the AI Captain can communicate vocally with other ships in the vicinity to announce any change in plans.

The MAS ship itself is still being constructed in Gdansk, Poland, and the AI Captain will be tested this month in a manned research ship called the Plymouth Quest, which is owned by the U.K.'s Plymouth Marine Laboratory. The test will essentially determine how the AI Captain performs in real-world scenarios, and feedback will be used to refine the main vessel's machine learning smarts before the September launch.

Maritime transport constitutes around 90% of global trade, as it's the most cost-effective way of transporting goods in bulk. But shipping is widely regarded as a major source of pollution for the planet. Like self-driving cars, a major benefit of electrified autonomous ships is that they reduce emissions while also promising fewer accidents: at least three quarters of maritime accidents are thought to be caused by human error.

Moreover, crewless ships open the doors to longer research missions, as food and salaries are no longer logistical or budgetary considerations.

There has been a push toward fully automating sea-faring transport in recent years. Back in 2016, news emerged that an unmanned warship called Sea Hunter was being developed by research agency DARPA, which passed the Sea Hunter prototype on to the Office of Naval Research two years later for further iteration. In Norway, a crewless cargo ship called the Yara Birkeland has also been in development for the past few years and is expected to go into commercial operation later in 2020. The Norwegian University of Science and Technology (NTNU) has also carried out trials of a tiny electric driverless passenger ferry.

Elsewhere, Rolls-Royce previously demonstrated a fully autonomous passenger ferry in Finland and announced a partnership with Intel as part of a grand plan to bring self-guided cargo ships to the world's seas by 2025.

So plenty is happening in the self-navigating ship sphere: a recent report from Allied Research pegged the industry at $88 billion today, and it could hit $130 billion within a decade. But while others seek to automate various aspects of shipping, the new Mayflower is designed to be completely self-sufficient and operate without any direct human intervention.

"Many of today's autonomous ships are really just automated robots [that] do not dynamically adapt to new situations and rely heavily on operator override," said Don Scott, CTO of the Mayflower Autonomous Ship. "Using an integrated suite of IBM's AI, cloud, and edge technologies, we are aiming to give the Mayflower full autonomy and are pushing the boundaries of what's currently possible."

Four centuries after the Mayflower carried the Pilgrims across the Atlantic, we could be entering a new era of maritime adventures.

More here:

AI, AI, Captain! How the Mayflower Autonomous Ship will cross the Atlantic - VentureBeat

Too much AI leaves a longing for the human touch – Frederick News Post (subscription)

This quirky 1950s advertising message, posted line-by-line on a series of small roadside signs, isn't what Denis Sverdlov has in mind.

Sverdlov is CEO of Roborace, a company on the verge of putting driverless electric cars, very fast and very smart electric cars, on the world racing circuit. He doesn't plan on watching them go around corners and collide.

Sverdlov says his ultimate aim is to develop Artificial Intelligence technology for ordinary cars, for ordinary people who just want to relax and read eBooks on the way to the supermarket. His goal is no crunch, no crash: routine rides without harm to passenger or vehicle.

Not to cast doubt on his stated motive, but he seems to be having a lot of high-speed fun on the way. He and his team of designers, programmers and engineers have already tested driverless racing Robocars that can do 200 mph and avoid bumping into each other.

Their plan is to put 10 of these full-size, electric-powered machines in Roboraces on the same city street and road race courses being used today in piloted Formula E events around the world. And they aim to do it this year.

High-powered electric racers with cockpits occupied by humans have been dueling ever since the first big Formula E race in Beijing in September 2014. So it's possible that the super-slick, futuristic Roborace machines, without superstar drivers like Sébastien Olivier Buemi behind the wheel, may, indeed, be burning up the course before Santa's old-fashioned December 2017 sleigh ride.

And the racing will rapidly get better, Sverdlov says, because the Artificial Intelligence cars will begin to learn on their own, without prompting or help from people.

One example: In a crucial Roborace test, two vehicles were put on a track and allowed to race without human intervention. The competition eventually ended in a crash, but for a 20-lap competition it was a huge success. "What's really, extremely important," Sverdlov told an interviewer, "is that those two cars started to understand each other and change their online path planner." In other words, their electronic control systems started behaving like human drivers.

But will these unmanned e-racers erase traditional auto sports? Will they mean the end of the Indianapolis 500 as civilization has come to know it? Will they put the brakes on the Formula One Grand Prix races of Monaco, Spain, Belgium and Malaysia? Will they drop the finish flag on NASCAR?

Robotics is advancing everywhere, driven by ever-improving Artificial Intelligence. It seems to be taking over more and more pieces of our lives, especially in the workplace.

AI, as it's known, is posing a really big question: Are we going to outrace ourselves? Are we creating machines that ultimately will leave us behind?

Some philosophers and futurists believe this is the most fundamental challenge facing humanity today. They think it's possible we'll lose control of our lives to machines, systems and automated networks that will take over nearly everything. They say artificial intelligence, while it can bring about dramatic improvement in the ways we do things, could also be our ultimate undoing.

And AI is coming on fast. Not quite 50 years ago, back in the 1970s, I took a tour of a Mack Trucks plant in Hagerstown and listened as our guide marveled at a stork-like machine jerking back and forth, spray-painting engine blocks. It did the job so much better than humans, he said, because a mindless, uncomplaining computer was running the show.

That contraption was primitive by today's standards. Entire manufacturing processes, from front to back, will soon be robotic and will soon be common.

We're seeing some resistance to all this in the growing popularity of maker and artisanal effort. Things made by artisans, that is, by human beings with their inconsistent, peculiar and even flawed natures, are finding a foothold in the marketplace. But they're often more expensive and harder to get. Will we put up with this inconvenience, or opt for the easy, from-the-automated-factory stuff?

I'm not ready to join the Luddites, but I'm thinking maybe it's time to give horses and horse racing another look. As far as I know, you can't program a jockey and a thoroughbred. They don't go as fast as robocars, but I can understand them.

See original here:

Too much AI leaves a longing for the human touch - Frederick News Post (subscription)

Implementing Illinois AI Video Interview Act: Five Steps Employers Can Take to Address Hidden Questions and Integrate Policies with Existing…

See the original post:

Implementing Illinois AI Video Interview Act: Five Steps Employers Can Take to Address Hidden Questions and Integrate Policies with Existing...

AI-Driven Technology to Protect Privacy of Health Data – Analytics Insight

New research derives an AI-based method to protect the privacy of medical images.

On May 24th, researchers from the Technical University of Munich (TUM), Imperial College London, and OpenMined, a non-profit organization, published a paper titled "End-to-end privacy-preserving deep learning on multi-institutional medical imaging."

The research unveiled PriMIA (Privacy-Preserving Medical Image Analysis), which employs securely aggregated federated learning and an encrypted approach to handling data obtained from medical imaging. As the paper states, the technology is a free, open-source software framework. The researchers conducted their experiment on pediatric chest X-rays, using an advanced deep convolutional neural network to classify them.

Although conventional methods to safeguard medical data exist, they often fail or are easily breached. For example, centralized data sharing has proved inadequate to protect sensitive data from attacks. This nascent technology protects data by using federated learning, in which only the deep learning algorithm is shared between institutions, not the underlying medical data. The researchers also applied secure aggregation, which prevents external entities from identifying the source on which the algorithm was trained, keeping the contributing institution private. They used a further technique to ensure that statistical correlations are derived from the data records rather than from the individuals contributing the data.
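
To make the two ideas concrete, the toy sketch below combines federated averaging with additive masks that cancel out when the server aggregates the updates. It is an illustration of the concepts only, with invented data and a linear model, not PriMIA's actual protocol or code.

```python
# Toy federated averaging with additive-mask "secure aggregation".
# Invented data and a linear model; an illustration of the idea, not PriMIA's protocol.
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One hospital trains locally; only the updated weights leave the site."""
    preds = data @ weights
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

rng = np.random.default_rng(1)
global_w = np.zeros(4)
hospitals = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]

# Masks are constructed to sum to zero, so the server only ever sees the aggregate.
masks = [rng.normal(size=4) for _ in range(2)]
masks.append(-(masks[0] + masks[1]))

masked_updates = [local_update(global_w, X, y) + m
                  for (X, y), m in zip(hospitals, masks)]
global_w = np.mean(masked_updates, axis=0)   # masks cancel across the three clients
print("aggregated global weights:", global_w)
```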

According to the paper, this framework is compatible with a wide variety of medical imaging data formats, is easily user-configurable, and introduces functional improvements to federated learning (FL) training, increasing flexibility, usability, security, and performance. "PriMIA's SMPC protocol guarantees the cryptographic security of both the model and the data in the inference phase," the report states.

A report by the Imperial College London quotes Professor Daniel Rueckert, who co-authored the paper and says: "Our methods have been applied in other studies, but we are yet to see large-scale studies using real clinical data. Through the targeted development of technologies and the cooperation between specialists in informatics and radiology, we have successfully trained models that deliver precise results while meeting high standards of data protection and privacy."

With the advent of technology and the rapid adoption of AI, the healthcare sector has been witnessing a digital boom. With electronic health records and the proliferation of telemedicine, there is an abundance of medical data and images generated each day. To enable better patient monitoring, diagnostics, and availability of data, these medical data are often shared across different points and institutions. This AI-driven privacy-preserving technology has a potential role to play here as it does not compromise data privacy while sharing happens. And, data cannot be traced back to individuals, thus protecting their privacy.

View post:

AI-Driven Technology to Protect Privacy of Health Data - Analytics Insight

Samsung to unveil NEON at CES 2020, teased to be a human-like AI assistant with support for Hindi – India Today

The race for supremacy in the field of Artificial Intelligence (AI) is heating up, with the biggest players in the industry coming up with their own products one after the other. And now Samsung appears to have hinted that it may have a mysterious new product in the pipeline that could be quite a bit special.

Called Neon, the new AI-based product is currently in the works at Samsung Technology & Advanced Research Labs (STAR Labs), an independent entity of Samsung Electronics. While little has been revealed about Neon so far, it will, however, be the second AI assistant from Samsung, after the company introduced Bixby back in 2017.

As of now, Bixby is supported across a range of products and finds a presence not only on its smartphones but also on many of its IoT-enabled home appliances. Interestingly, the new AI-based assistant will also have an India connection, as its creator, STAR Labs, is currently led by India-born Pranav Mistry, who will be unveiling Neon at CES 2020 in Las Vegas next month.

For Samsung's part, it has remained tight-lipped on Neon and has only teased the product via social media channels. It has also created a website with the domain name Neon.life that doesn't reveal any details except a tagline asking, "Have you ever met an 'Artificial'?"

Samsung has also teased that NEON will be a human-level AI, which may heavily depend upon access to a working 5G network. While the teasers do not reveal much, a look at STAR Labs' goals reveals that the project aims to secure cutting-edge AI core technologies and platforms (human-level AI with the ability to speak, recognize, and think) in order to provide new AI-driven experiences and value to its customers.

If this indeed ends up being true, then with Neon we may well end up with a highly advanced AI assistant, one that could think and act like a human being, and one that may be very difficult to differentiate from the real thing in the way it interacts with humans.

Interestingly, Samsung is also using a number of celebrities to drum up interest in the new product. One of them is Shekhar Kapur, who recently tweeted: "Finally, Artifical Intelligence that will make you wonder which one of you is real. Coming soon from the brilliant mind of @pranavmistry the amazing @neondotlife .. where artificial intelligence ceases to be artificial .. http://neon.life."

More here:

Samsung to unveil NEON at CES 2020, teased to be a human-like AI assistant with support for Hindi - India Today

Real life CSI: Google’s new AI system unscrambles pixelated faces – The Guardian

On the left, 8x8 images; in the middle, the images generated by Google; and on the right, the original 32x32 faces. Photograph: Google

Google's neural networks have achieved the dream of CSI viewers everywhere: the company has revealed a new AI system capable of enhancing an eight-pixel square image, increasing the resolution 16-fold and effectively restoring lost data.

The neural network could be used to increase the resolution of blurred or pixelated faces, in a way previously thought impossible; a similar system was demonstrated for enhancing images of bedrooms, again creating a 32x32 pixel image from an 8x8 one.

Google's researchers describe the neural network as "hallucinating" the extra information. The system was trained by being shown innumerable images of faces, so that it learns typical facial features. A second portion of the system, meanwhile, focuses on comparing 8x8 pixel images with all the possible 32x32 pixel images they could be shrunken versions of.

The two networks working in harmony effectively redraw their best guess of what the original facial image would be. The system allows for a huge improvement over old-fashioned methods of up-sampling: where an older system might simply look at a block of red in the middle of a face, make it 16 times bigger and blur the edges, Google's system is capable of recognising it is likely to be a pair of lips, and drawing the image accordingly.
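
To make the idea of learned up-sampling concrete, here is a toy convolutional network that maps an 8x8 colour image to 32x32. It is a generic sketch written in PyTorch, not Google's actual model, and untrained it will only produce blurry guesses.

```python
# Toy 8x8 -> 32x32 upsampler in PyTorch; a generic sketch, not Google's actual model.
import torch
import torch.nn as nn

class TinyUpsampler(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # learn local features at 8x8
            nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),   # "redraw" the face at 32x32
        )

    def forward(self, x):              # x: (batch, 3, 8, 8)
        return self.net(x)             # -> (batch, 3, 32, 32)

model = TinyUpsampler()
low_res = torch.rand(1, 3, 8, 8)
print(model(low_res).shape)            # torch.Size([1, 3, 32, 32])
```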

Of course, the system isn't capable of magic. While it can make educated guesses based on knowledge of what faces generally look like, it sometimes won't have enough information to redraw a face that is recognisably the same person as the original image. And sometimes it just plain screws up, creating inhuman monstrosities. Nonetheless, the system works well enough to fool people around 10% of the time for images of faces.

Running the same system on pictures of bedrooms is even better: test subjects were unable to correctly pick the original image almost 30% of the time. A score of 50% would indicate the system was creating images indistinguishable from reality.

Although this system exists at the extreme end of image manipulation, neural networks have also presented promising results for more conventional compression purposes. In January, Google announced it would use a machine learning-based approach to compress images on Google+ four-fold, saving users bandwidth by limiting the amount of information that needs to be sent. The system then makes the same sort of educated guesses about what information lies between the pixels to increase the resolution of the final picture.

Excerpt from:

Real life CSI: Google's new AI system unscrambles pixelated faces - The Guardian

How AI will automate cybersecurity in the post-COVID world – VentureBeat

By now, it is obvious to everyone that widespread remote working is accelerating the trend of digitization in society that has been happening for decades.

What takes longer for most people to identify are the derivative trends. One such trend is that increased reliance on online applications means that cybercrime is becoming even more lucrative. For many years now, online theft has vastly outstripped physical bank robberies. Willie Sutton said he robbed banks because that's where the money is. If he applied that maxim even 10 years ago, he would definitely have become a cybercriminal, targeting the websites of banks, federal agencies, airlines, and retailers. According to the 2020 Verizon Data Breach Investigations Report, 86% of all data breaches were financially motivated. Today, with so much of society's operations being online, cybercrime is the most common type of crime.

Unfortunately, society isn't evolving as quickly as cybercriminals are. Most people think they are only at risk of being targeted if there is something special about them. This couldn't be further from the truth: Cybercriminals today target everyone. What are people missing? Simply put: the scale of cybercrime is difficult to fathom. The Herjavec Group estimates cybercrime will cost the world over $6 trillion annually by 2021, up from $3 trillion in 2015, but numbers that large can be a bit abstract.

A better way to understand the issue is this: In the future, nearly every piece of technology we use will be under constant attack, and this is already the case for every major website and mobile app we rely on.

Understanding this requires a Matrix-like radical shift in our thinking. It requires us to embrace the physics of the virtual world, which break the laws of the physical world. For example, in the physical world, it is simply not possible to try to rob every house in a city on the same day. In the virtual world, it's not only possible, it's being attempted on every house in the entire country. I'm not referring to a diffuse threat of cybercriminals always plotting the next big hacks. I'm describing constant activity that we see on every major website: the largest banks and retailers receive millions of attacks on their users' accounts every day. Just as Google can crawl most of the web in a few days, cybercriminals attack nearly every website on the planet in that time.

The most common type of web attack today is called credential stuffing. This is when cybercriminals take stolen passwords from data breaches and use tools to automatically log in to every matching account on other websites to take over those accounts and steal the funds or data inside them. These account takeover (ATO) events are possible because people frequently reuse their passwords across websites. The spate of gigantic data breaches in the last decade has been a boon for cybercriminals, reducing cybercrime success to a matter of reliable probability: in rough terms, if you can steal 100 users' passwords, then on any given website where you try them, one will unlock someone's account. And data breaches have given cybercriminals billions of users' passwords.
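
A quick back-of-the-envelope calculation shows why that probability is all an attacker needs; the corpus size and success rate below are rough figures implied by the article, not measured values.

```python
# Rough scale of credential stuffing: takeovers grow linearly with the breach corpus.
stolen_credentials = 1_000_000_000    # order of magnitude: "billions" of leaked passwords
success_rate = 0.01                   # ~1 in 100 reused passwords unlocks an account
expected_takeovers = stolen_credentials * success_rate
print(f"expected account takeovers per full sweep: {expected_takeovers:,.0f}")
```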

Above: Attacks against financial services. Source: F5 Security Incident Response Team, 2017-2019.

What's going on here is that cybercrime is a business, and growing a business is all about scale and efficiency. Credential stuffing is only a viable attack because of the large-scale automation that technology makes possible.

This is where artificial intelligence comes in.

At a basic level, AI uses data to make predictions and then automates actions. This automation can be used for good or evil. Cybercriminals take AI designed for legitimate purposes and use it for illegal schemes. Consider one of the most common defenses attempted against credential stuffing: CAPTCHA. Invented a couple of decades ago, CAPTCHA tries to protect against unwanted bots by presenting a challenge (e.g., reading distorted text) that humans should find easy and bots should find difficult. Unfortunately, cybercriminal use of AI has inverted this. Google did a study a few years ago and found that machine-learning based optical character recognition (OCR) technology could solve 99.8% of CAPTCHA challenges. This OCR, as well as other CAPTCHA-solving technology, is weaponized by cybercriminals who include it in their credential stuffing tools.

Cybercriminals can use AI in other ways too. AI technology has already been created to make cracking passwords faster, and machine learning can be used to identify good targets for attack, as well as to optimize cybercriminal supply chains and infrastructure. We see incredibly fast response times from cybercriminals, who can shut off and restart attacks with millions of transactions in a matter of minutes. They do this with a fully automated attack infrastructure, using the same DevOps techniques that are popular in the legitimate business world. This is no surprise, since running such a criminal system is similar to operating a major commercial website, and cybercrime-as-a-service is now a common business model. AI will be further infused throughout these applications over time to help them achieve greater scale and to make them harder to defend against.

So how can we protect against such automated attacks? The only viable answer is automated defenses on the other side. Here's what that evolution will look like as a progression:

Right now, the long tail of organizations are at level 1, but sophisticated organizations are typically somewhere between levels 3 and 4. In the future, most organizations will need to be at level 5. Getting there successfully across the industry requires companies to evolve past old thinking. Companies with the war for talent mindset of hiring huge security teams have started pivoting to also hire data scientists to build their own AI defenses. This might be a temporary phenomenon: While corporate anti-fraud teams have been using machine learning for more than a decade, the traditional information security industry has only flipped in the past five years from curmudgeonly cynicism about AI to excitement, so they might be over-correcting.

But hiring a large AI team is unlikely to be the right answer, just as you wouldn't hire a team of cryptographers. Such approaches will never reach the efficacy, scale, and reliability required to defend against constantly evolving cybercriminal attacks. Instead, the best answer is to insist that the security products you use integrate with your organizational data to be able to do more with AI. Then you can hold vendors accountable for false positives and false negatives, and the other challenges of getting value from AI. After all, AI is not a silver bullet, and it's not sufficient to simply be using AI for defense; it has to be effective.

The best way to hold vendors accountable for efficacy is by judging them based on ROI. One of the beneficial side effects of cybersecurity becoming more of an analytics and automation problem is that the performance of all parties can be more granularly measured. When defensive AI systems create false positives, customer complaints rise. When there are false negatives, ATOs increase. And there are many other intermediate metrics companies can track as cybercriminals iterate with their own AI-based tactics.
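
Those accountability metrics can be computed directly from raw counts, as in the small sketch below; the counts are invented for illustration.

```python
# Accountability metrics from raw counts; the counts here are invented for illustration.
def defense_metrics(true_positives, false_positives, false_negatives, true_negatives):
    precision = true_positives / (true_positives + false_positives)        # complaints rise as this falls
    recall = true_positives / (true_positives + false_negatives)           # missed ATOs lower this
    false_positive_rate = false_positives / (false_positives + true_negatives)
    return precision, recall, false_positive_rate

p, r, fpr = defense_metrics(true_positives=9_500, false_positives=120,
                            false_negatives=500, true_negatives=990_000)
print(f"precision={p:.3f}  recall={r:.3f}  false positive rate={fpr:.5f}")
```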

If you're surprised that the post-COVID Internet sounds like it's going to be a Terminator-style battle of good AI vs. evil AI, I have good news and bad news. The bad news is, we're already there to a large extent. For example, among major retail sites today, around 90% of login attempts typically come from cybercriminal tools.

But maybe that's the good news, too, since the world obviously hasn't fallen apart yet. This is because the industry is moving in the right direction, learning quickly, and many organizations already have effective AI-based defenses in place. But more work is required in terms of technology development, industry education, and practice. And we shouldn't forget that sheltering-in-place has given cybercriminals more time in front of their computers too.

Shuman Ghosemajumder is Global Head of AI at F5. He was previously CTO of Shape Security, which was acquired by F5 in 2020, and was Global Head of Product for Trust & Safety at Google.

Read more:

How AI will automate cybersecurity in the post-COVID world - VentureBeat

A voice-over artist asks: Will AI take her job? – WHYY

This story is from The Pulse, a weekly health and science podcast.

My name is Nikki Thomas, and I am a voice-over artist. I speak into a microphone, and my voice is captured. I can change my accent. My pitch. My mood.

But it's still me, right? Until it's not. Because I am being replaced by my own voice: an AI version of my voice.

It starts with TTS, or text-to-speech. That's the same technology used to create Siri or Alexa. It captures a human voice and then artificially replicates that sound to read any digital text out loud.
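
For a sense of how basic off-the-shelf text-to-speech works, the minimal sketch below uses the open-source pyttsx3 library to read a sentence aloud; it is generic TTS, not the voice-cloning pipeline described in this story.

```python
# Minimal off-the-shelf text-to-speech; generic TTS, not a cloned voice.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 170)   # speaking speed in words per minute
engine.say("This sentence is read aloud by a synthetic voice.")
engine.runAndWait()               # block until the audio has finished playing
```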

I got hired for a TTS job. I delivered my spoken words to the client. Then a few weeks later, I could type words into a text box, and my voice clone said them back to me.

I asked longtime client and audio engineer Daren Lake to compare the two. And while concluding that the AI voice actually sounded pretty good, he could still hear that a robot made it.

"It's got these warbling artifacts. I call it the zoom effect or the matrix sound," he said. Despite thinking I might be able to get away with it, the engineer in him didn't like it.

So one can tell the difference now. But when this technology gets better, could this be my new method of work? I record just a few voice samples and, before I know it, an 11-hour audiobook is produced with a voice that sounds just like mine in the time it takes me to copy and paste a document? It would be much more accurate, and reliable. An AI voice never fatigues or needs a week to recover from the flu.

Could I still consider myself a voice-over artist? If there's even a role for me. How will artificial intelligence affect creativity and artistry?

I took the question to Sarah Rose Siskind, one of the creators of a robot named Sophia. Sarah laughed when I asked if she was threatened by a robot taking her job. She told me about an 11-hour day spent getting Sophia to wink, reason enough for her to believe her job was not at risk.

Sophia the Robot is an interviewer, guest speaker and host with over 16,000 YouTube subscribers. Siskind was on the writing team and worked with a group to shape Sophia's personality.

"An artist is a major component of her personality because we wanted her personality to be fascinated with areas not traditionally considered the domain of robots," Siskind said. However, it is hard to describe her outside of a relationship to the humans who came up with the idea of creating her.

Visit link:

A voice-over artist asks: Will AI take her job? - WHYY

AI is our best weapon against terrorist propaganda – The Next Web – TNW

In the past four months alone, there have been three separate terrorist attacks across the UK (and possibly a fourth reported just today), and that's after implementing efforts that the Defense Secretary claimed helped thwart 12 other incidents there in the previous year.

That spells a massive challenge for companies investing in curbing the spread of terrorist propaganda on the web. And although it'd most certainly be impossible to stamp out the threat across the globe, it's clear that we can do a lot more to tackle it right now.

Last week, we looked at some steps that Facebook is taking to wipe out content promoting and sympathizing with terrorists' causes, which involve the use of AI and relying on reports from users, as well as the skills of a team of 150 experts, to identify and take down hate-filled posts before they spread across the social network.

Now, Google has detailed the measures it's implementing in this regard as well. Similar to Facebook, it's targeting hateful content with machine learning-based systems that can sniff it out, and also working with human reviewers and NGOs in an attempt to introduce a nuanced approach to censoring extremist media.

The trouble is, battling terrorism isn't what these companies are solely about; they're concerned about growing their user bases and increasing revenue. The measures they presently implement will help sanitize their platforms so they're more easily marketable as a safe place to consume content, socialize and shop.

Meanwhile, the people who spread propaganda online dedicate their waking hours to finding ways to get their message out to the world. They can, and will continue to innovate so as to stay ahead of the curve.

Ultimately, what's needed is a way to reduce the effectiveness of this propaganda. There are a host of reasons why people are susceptible to radicalization, and those may be far beyond the scope of the likes of Facebook to tackle.

AI is already being used to identify content that human response teams review and take down. But I believe that its greater purpose could be to identify people who are exposed to terrorist propaganda and are at risk of being radicalized. To that end, there's hope in the form of measures that Google is working on. In the case of its video platform YouTube, the company explained in a blog post:

Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the Redirect Method more broadly across Europe.

This promising approach harnesses the power of targeted online advertising to reach potential ISIS recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.

In March, Facebook began testing algorithms that could detect warning signs of users in the US suffering from depression and possibly contemplating self-harm and suicide. To do this, it looks at whether people are frequently posting messages describing personal pain and sorrow, or if several responses from their friends read along the lines of, Are you okay? The company then contacts at-risk users to suggest channels they can seek out for help with their condition.

I imagine that similar tools could be developed to identify people who might be vulnerable to becoming radicalized perhaps by analyzing the content of the posts they share and consume, as well as the networks of people and groups they engage with.
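
As a rough sketch of what the first step of such a tool might look like, the example below trains a simple text classifier to flag posts for human review. The posts, labels and model choice are invented illustrations, and any real system would need vastly more data, context and human oversight.

```python
# Toy text classifier that flags posts for human review.
# Posts, labels and model are invented illustrations, not any company's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "sharing a recipe with friends this weekend",
    "watch this video explaining why the attack was justified",
    "great match last night, what a goal",
    "join us, the fight against them is the only path",
]
labels = [0, 1, 0, 1]   # 1 = flag for human review (toy labels)

flagger = make_pipeline(TfidfVectorizer(), LogisticRegression())
flagger.fit(posts, labels)
print(flagger.predict(["they deserve what is coming, spread the word"]))
```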

The ideas spread by terrorists are only as powerful as they are widely accepted. It looks like we'll constantly find ourselves trying to outpace measures to spread propaganda, but what might be of more help is a way to reach out to people who are processing these ideas, accepting them as truth and altering the course their lives are taking. With enough data, it's possible that AI could be of help, but in the end, we'll need humans to talk to humans in order to fix what's broken in our society.

Naturally, the question of privacy will crop up at this point, and it's one that we'll have to ponder before giving up our rights, but it's certainly worth exploring our options if we're indeed serious about quelling the spread of terrorism across the globe.

Here is the original post:

AI is our best weapon against terrorist propaganda - The Next Web - TNW