

Global Geospatial Solutions & Services Market Artificial Intelligence (AI), Cloud, Automation, Internet of Things (IoT), and Miniaturization of…

The global geospatial solutions & services market accounted for US$ 238.5 billion in 2019, is estimated to reach US$ 1,013.7 billion by 2029, and is anticipated to register a CAGR of 15.7%.

Covina, CA, Aug. 04, 2020 (GLOBE NEWSWIRE) -- The report "Global Geospatial Solutions & Services Market, By Solution Type (Hardware, Software, and Service), By Technology (Geospatial Analytics, GNSS & Positioning, Scanning, and Earth Observation), By End-user (Utility, Business, Transportation, Defence & Intelligence, Infrastructural Development, Natural Resource, and Others), By Application (Surveying & Mapping, Geovisualization, Asset Management, Planning & Analysis, and Others), and By Region (North America, Europe, Asia Pacific, Latin America, and the Middle East & Africa) - Trends, Analysis and Forecast till 2029".

Key Highlights:

Request Free Sample of this Business Intelligence Report @ https://www.prophecymarketinsights.com/market_insight/Insight/request-sample/4412

Analyst View:

Geospatial technology comprises GIS (geographical information systems), GPS (global positioning systems), and RS (remote sensing), technologies that provide a radically different way of producing and using the maps required to manage communities and industries. Developed economies are expected to provide lucrative opportunities to the geospatial solutions industry. The application of geospatial techniques across the globe has witnessed steady growth over the past decades, owing to the easy accessibility of geospatial technology in advanced nations such as the U.S. and Canada, which further drives growth of the target market. Moreover, rising smart city initiatives in emerging countries have resulted in a growing need for geospatial technologies for use in 3D urban mapping and in monitoring and mapping natural resources. Increasing adoption of IoT, big data analysis, and Artificial Intelligence (AI) across the globe is projected to create profitable opportunities for the global geospatial solutions & services market throughout the forecast period.

Browse 60 market data tables* and 35 figures* through 140 slides and in-depth TOC on Global Geospatial Solutions & Services Market, By Solution Type (Hardware, Software, and Service), By Technology (Geospatial Analytics, GNSS & Positioning, Scanning, and Earth Observation), By End-user (Utility, Business, Transportation, Defence & Intelligence, Infrastructural Development, Natural Resource, and Others), By Application (Surveying & Mapping, Geovisualization, Asset Management, Planning & Analysis, and Others), and By Region (North America, Europe, Asia Pacific, Latin America, and the Middle East & Africa) - Trends, Analysis and Forecast till 2029

Ask for a Discount on this Report @ https://www.prophecymarketinsights.com/market_insight/Insight/request-discount/4412

Key Market Insights from the report:

The global geospatial solutions & services market accounted for US$ 238.5 billion in 2019, is estimated to reach US$ 1,013.7 billion by 2029, and is anticipated to register a CAGR of 15.7%. The market report has been segmented on the basis of solution type, technology, end-user, application, and region.
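As a quick arithmetic check (an editorial illustration, not part of the report), the quoted CAGR follows directly from the two market-size figures and the 2019 to 2029 horizon:

```python
# Sanity-check the reported growth figures using the standard CAGR formula.
# Figures are the US$ billions quoted above; the span is the 10 years 2019-2029.
start_value = 238.5    # 2019 market size, US$ billion
end_value = 1013.7     # 2029 projected market size, US$ billion
years = 2029 - 2019

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints ~15.6%, in line with the reported 15.7%
```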

To know the upcoming trends and insights prevalent in this market, click the link below:

https://www.prophecymarketinsights.com/market_insight/Global-Geospatial-Solutions-&-Services-Market-4412

Competitive Landscape:

The prominent players operating in the global geospatial solutions & services market include HERE Technologies, Esri (US), Hexagon (Sweden), Atkins PLC, Pitney Bowes, Topcon Corporation, DigitalGlobe, Inc. (Maxar Group), General Electric, Harris Corporation (US), and Google.

The report provides detailed information regarding the industrial base, productivity, strengths, manufacturers, and recent trends, which will help companies expand their businesses and promote financial growth. Furthermore, the report exhibits dynamic factors including segments, sub-segments, regional marketplaces, competition, dominant key players, and market forecasts. In addition, it covers recent collaborations, mergers, acquisitions, and partnerships, along with regulatory frameworks across different regions impacting the market trajectory. Recent technological advances and innovations influencing the global market are included in the report.


About Prophecy Market Insights

Prophecy Market Insights is a specialized market research, analytics, marketing/business strategy, and solutions company that offers strategic and tactical support to clients for making well-informed business decisions and identifying and achieving high-value opportunities in the target business area. We also help our clients address business challenges and provide the best possible solutions to overcome them and transform their business.

Some Important Points Answered in this Market Report Are Given Below:

Key Topics Covered

Browse Related Reports:

More here:

Global Geospatial Solutions & Services Market Artificial Intelligence (AI), Cloud, Automation, Internet of Things (IoT), and Miniaturization of...

Posted in Ai

What is AI’s role in remote work? | HRExecutive.com – Human Resource Executive

Artificial intelligence will be tapped for everything from employee engagement to talent acquisition.

This is the second in a series on AI transforming the workplace. Read the first piece here.

Despite the pandemic continuing to spread, it may not be too soon for HR decision-makers to look ahead toward its eventual end.

When that day comes, HR leaders and employers around the globe will be back to, among other issues, figuring out exactly how AI-based technology can continue to drive success in the newly reopened world of work.

Seth Earley, CEO of Earley Information Science and author of The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster and More Profitable, says the massive workplace changes brought on by the pandemic are sure to continue. And AI will play a key role.


He predicts many organizations will continue to allow remote work even post-pandemic and thus will have to be more aware of what this change means for recruitment, job satisfaction, performance and retention. In fact, tech giant Google said in late July it will continue its remote work strategy until at least the middle of 2021. No doubt other employers will follow suit.

Earley says greater use of remote teams means that work culture will be more fluid and, by design, will need to be less dependent on physical cues that in-person communication provides. At the same time, he adds, measuring and maintaining employee engagement will require heightened integration of HR and day-to-day collaboration tools. This need will arise, Earley explains, because many HR applications are legacy-based, on-premise and siloed, making it harder to read signals of disengagement across multiple systems.

AI-powered, integrated cloud solutions will enable aggregation of analytics and flag low-engagement employees and employees likely to leave, Earley says. He adds that the rapid deployment of remote work and new collaboration technologies means that data, architecture and user experience functions were likely cut in the rush to adapt during the pandemic.

These tools will need to be reconciled, rationalized, standardized and correctly re-architected to improve rather than detract from productivity, he adds.

According to Earley, fewer in-office interactions will increase dependency on knowledge bases and improved information access and usability. For example, the days of asking a colleague for routine information because it is too difficult to locate on the intranet will, by necessity, be behind us.

Plus, he says, employees who have to use multiple systems to accomplish their work (or who have to adapt to a different team's preferred technology) will be less satisfied due to the overhead and inefficiency such disconnected environments cause. Using AI tools to integrate and bridge the gaps, including through chatbots for answering routine or team- and project-specific questions, semantic search and better knowledge architecture, will improve job satisfaction and increase productivity.

Many elements of the post-pandemic workplace will change dramatically, such as the role of serendipitous in-person interactions, Earley says. Intelligent collaboration systems can help fill this gap.

According to Earley, fixing the foundation of the employee experience, in part through AI-based applications, needs to be the priority across all market segments.

This will be a challenge in the post-pandemic era, but those who do not do it will lose talent, customers and market share to the ones that do, Earley says.


Ken Lazarus, the former CEO of Scout Exchange, which was recently acquired by Aquent, expects to see an increase in AI use post-pandemic, as employers have become more sophisticated in their use of the technology, especially in regards to attracting and retaining talent, which will be even more competitive in the years ahead.

AI started with bots for simple communication, Lazarus says. What I'm beginning to see in many different use cases, from screening applicants and job candidates and scheduling them for interviews to internal talent mobility, is a greater use of conversational AI.

You can even throw it some curveballs and have that be all conducted with software and artificial intelligence, rather than a human being, he says.

According to Lazarus, other future trends driving more AI-based solutions include:


Check back soon for part three.

Tom Starner is a freelance writer based in Philadelphia who has been covering the human resource space and all of its component processes for over two decades. He can be reached at hreletters@lrp.com.

Read the original:

What is AI's role in remote work? | HRExecutive.com - Human Resource Executive

Posted in Ai

Zencity raises $13.5 million to help cities aggregate community feedback with AI and big data – VentureBeat

Zencity, a platform that meshes AI with big data to give municipalities insights and aggregated feedback from local communities, has raised $13.5 million from a slew of notable backers, including lead investor TLV Partners, Microsoft's VC arm M12, and Salesforce Ventures. Founded in 2015, Israel-based Zencity had previously raised around $8 million, including a $6 million tranche nearly two years ago. With its latest cash injection, the company will build out new strategic partnerships and expand its market presence.

Gathering data through traditional means, such as surveys or Town Hall meetings, can be a slow and time-consuming process and fails to factor in evolving sentiment. Zencity enables local governments and city planners to extract meaningful data from a range of unstructured data sources, including social networks, news websites, and even telephone hotlines, to figure out what topics and concerns are on local residents' minds, all in real time.

Zencity uses AI to sort and classify data from across channels to identify key topics and trends, from opinions on proposed traffic measures to complaints about sidewalk maintenance or pretty much anything else that impacts a community.
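For illustration only, the kind of cross-channel sorting and classification described above can be sketched as a small supervised text classifier; the topic labels, example posts, and model choice below are invented for demonstration and are not Zencity's actual system.

```python
# Minimal sketch of classifying resident feedback into topics.
# Hypothetical labels and examples; not Zencity's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled training set (a real system would use far more data
# drawn from social media, news sites, hotline transcripts, etc.).
posts = [
    "The new traffic measure on Main St is backing up cars every morning",
    "Please fix the broken sidewalk near the elementary school",
    "Left turns at 5th and Oak are impossible since the lane change",
    "The cracked pavement on Oak Ave is a tripping hazard",
]
topics = ["traffic", "sidewalk_maintenance", "traffic", "sidewalk_maintenance"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, topics)

# Classify a new piece of resident feedback.
print(model.predict(["Another pothole opened up on the sidewalk by the park"]))
```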

Above: Zencity platform in use

Zencity said it has seen an increase in demand during the pandemic, with 90% of its clients engaging on a weekly basis, and even on weekends.

Since COVID-19, not only have we seen an increase in usage but in demand as well, cofounder and CEO Eyal Feder-Levy told VentureBeat. Zencity has signed over 40 new local governments, reaffirming our role in supporting local governments' crisis management and response efforts.

Among these new partnerships are government agencies in Austin, Texas; Long Beach, California; and Oak Ridge, Tennessee. A number of municipalities also launched COVID-19 programs using the Zencity platform, including the city of Meriden in Connecticut, which used Zencity data to optimize communications around social distancing in local parks. Officials discovered negative sentiment around the use of drones to monitor crowds in parks and noticed that communications from the mayor's official channels got the most engagement from residents.

Elsewhere, government officials in Fontana, California, used Zencity to assess locals' opinions on lockdown restrictions and regulations.

Before the COVID-19 pandemic hit, providing real-time resident feedback for local governments was core to Zencity's AI-based solution, Feder-Levy continued. And now, as local governments continue to battle the pandemic and undertake the task of economic recovery, Zencity's platform has proven pivotal in their crisis response and management efforts.

Go here to read the rest:

Zencity raises $13.5 million to help cities aggregate community feedback with AI and big data - VentureBeat

Posted in Ai

A|I: The AI Times – When AI codes itself – BetaKit

The AI Times is a weekly newsletter covering the biggest AI, machine learning, big data, and automation news from around the globe. If you want to read A|I before anyone else, make sure to subscribe using the form at the bottom of this page.

Five projects have received $29 million in funding from Scale AI and a number of companies to support the implementation of artificial intelligence.

In these unprecedented times, entrepreneurs need all the help they can get. BetaKit has teamed up with Microsoft for Startups on a new series called Just One Thing, where startup founders and tech leaders share the one thing they want the next generation of entrepreneurs to learn.

Instrumental, a startup that uses vision-powered AI to detect manufacturing anomalies, announced that it has closed a $20 million Series B led by Canaan Partners.

The tool spots similarities between programs to help programmers write faster and more efficient software.

A big study by the US Census Bureau finds that only about 9 percent of firms employ tools like machine learning or voice recognition for now.

In addition to bolstering its go-to-market efforts, Tempo says it will use the funds to expand its content offering with a second production studio.

Through its new robotic collaborations, the infamously creepy dog-shaped robots could soon ride on wheels and launch their own drones.

University of Montreal AI expert Yoshua Bengio, his student Benjamin Scellier, and colleagues at startup Rain Neuromorphics have come up with a way for analog AIs to train themselves.

The plan will be to use the funding for hiring, to invest in the tools it uses to detect entities and map the relationships between them and to bring on more clients.

A spokesperson said the funds will be used to scale the company's platform, which allows people to create a digital persona that mirrors their own.

Link:

A|I: The AI Times - When AI codes itself - BetaKit

Posted in Ai

Why organizations might want to design and train less-than-perfect AI – Fast Company

These days, artificial intelligence systems make our steering wheels vibrate when we drive unsafely, suggest how to invest our money, and recommend workplace hiring decisions. In these situations, the AI has been intentionally designed to alter our behavior in beneficial ways: We slow the car, take the investment advice, and hire people we might not have otherwise considered.

Each of these AI systems also keeps humans in the decision-making loop. That's because, while AIs are much better than humans at some tasks (e.g., seeing 360 degrees around a self-driving car), they are often less adept at handling unusual circumstances (e.g., erratic drivers).

In addition, giving too much authority to AI systems can unintentionally reduce human motivation. Drivers might become lazy about checking their rearview mirrors; investors might be less inclined to research alternatives; and human resource managers might put less effort into finding outstanding candidates. Essentially, relying on an AI system risks the possibility that people will, metaphorically speaking, fall asleep at the wheel.

How should businesses and AI designers think about these tradeoffs? In a recent paper, economics professor Susan Athey of Stanford Graduate School of Business and colleagues at the University of Toronto laid out a theoretical framework for organizations to consider when designing and delegating decision-making authority to AI systems. This paper responds to the realization that organizations need to change the way they motivate people in environments where parts of their jobs are done by AI, says Athey, who is also an associate director of the Stanford Institute for Human-Centered Artificial Intelligence, or HAI.

Athey's model suggests that an organization's decision of whether to use AI at all, or how thoroughly to design or train an AI system, may depend not only on what's technically available, but also on how the AI impacts its human coworkers.

The idea that decision-making authority incentivizes employees to work hard is not new. Previous research has shown that employees who have been given decision-making authority are more motivated to do a better job of gathering the information to make a good decision. Bringing that idea back to the AI-human tradeoff, Athey says, there may be times when, even if the AI can make a better decision than the human, you might still want to let humans be in charge because that motivates them to pay attention. Indeed, the paper shows that, in some cases, improving the quality of an AI can be bad for a firm if it leads to less effort by humans.

Athey's theoretical framework aims to provide a logical structure to organize thinking about implementing AI within organizations. The paper classifies AI into four types, two with the AI in charge (replacement AI and unreliable AI), and two with humans in charge (augmentation AI and antagonistic AI). Athey hopes that by gaining an understanding of these classifications and their tradeoffs, organizations will be better able to design their AIs to obtain optimal outcomes.

Replacement AI is in some ways the easiest to understand: If an AI system works perfectly every time, it can replace the human. But there are downsides. In addition to taking a person's job, replacement AI has to be extremely well-trained, which may involve a prohibitively costly investment in training data. When AI is imperfect or unreliable, humans play a key role in catching and correcting AI errors, partially compensating for AI imperfections with greater effort. This scenario is most likely to produce optimal outcomes when the AI hits the sweet spot where it makes bad decisions often enough to keep human coworkers on their toes.

With augmentation AI, employees retain decision-making power while a high-quality AI augments their effort without decimating their motivation. Examples of augmentative AI might include systems that, in an unbiased way, review and rank loan applications or job applications but don't make lending or hiring decisions. However, human biases will have a bigger influence on decisions in this scenario.

Antagonistic AI is perhaps the least intuitive classification. It arises in situations where there's an imperfect yet valuable AI, human effort is essential but poorly incentivized, and the human retains decision rights when the human and AI conflict. In such cases, Athey's model proposes, the best AI design might be one that produces results that conflict with the preferences of the human agents, thereby antagonistically motivating them to put in effort so they can influence decisions. People are going to be, at the margin, more motivated if they are not that happy with the outcome when they don't pay attention, Athey says.

To illuminate the value of Athey's model, she describes the possible design issues as well as tradeoffs for worker effort when companies use AI to address the issue of bias in hiring. The scenario runs like this: If hiring managers, consciously or not, prefer to hire people who look like them, an AI trained with hiring data from such managers will likely learn to mimic that bias (and keep those managers happy).

If the organization wants to reduce bias, it may have to make an effort to expand the AI training data or even run experiments, for example, adding candidates from historically black colleges and universities who might not have been considered before, to gather the data needed to train an unbiased AI system. Then, if biased managers are still in charge of decision-making, the new, unbiased AI could actually antagonistically motivate them to read all of the applications so they can still make a case for hiring the person who looks like them.

But since this doesn't help the owner achieve the goal of eliminating bias in hiring, another option is to design the organization so that the AI can overrule the manager, which will have another unintended consequence: an unmotivated manager.

These are the tradeoffs that we're trying to illuminate, Athey says. AI in principle can solve some of these biases, but if you want it to work well, you have to be careful about how you train the AI and how you maintain motivation for the human.

As AI is adopted in more and more contexts, it will change the way organizations function. Firms and other organizations will need to think differently about organizational design, worker incentives, how well the decisions by workers and AI are aligned with the goals of the firm, and whether an investment in training data to improve AI quality will have desirable consequences, Athey says. Theoretical models can help organizations think through the interactions among all of these choices.

This piece was originally published by the Stanford University Graduate School of Business.

See the article here:

Why organizations might want to design and train less-than-perfect AI - Fast Company

Posted in Ai

How AI is helping in the fight against coronavirus and cybercrime – Software Testing News

Matt Walmsley is an EMEA Director at Vectra, a cybersecurity company providing an AI-based network detection and response solution for cloud, SaaS, data centre and enterprise infrastructures.

With the spread of the Coronavirus, cybercriminals have gained more power and become more dangerous, leaving some IT infrastructures at risk. That is why Vectra is offering the use of AI to protect the data centre and, specifically, cybersecurity powered by AI to help secure data centres and protect an organisation's network.

In this interview, Matt explains why data centres represent such a valuable target for cybercriminals and how, despite the vast security measures put in place by enterprises, attackers are able to infiltrate a data centre system. He also explains the storyline of an attack targeting data centres and how cybersecurity powered by AI can help security teams detect anomalous behaviours before it's too late.

What's your current role?

I'm an EMEA Director at Vectra. I've been here for five years, since we started the business, and I'm predominantly doing thought leadership, technical marketing and communicating information. I spend most of my time thinking about how we put AI to use in the pursuit of, in our case, cybersecurity, and a big part of that is cloud and data centre.

To get into the core of what you do, you talk about cloud, data centres and AI, but which one is the core driver of all of those for your business? Which types of devices is the Vectra AI you use integrated into, and in what sectors is it applied?

Our perspective on this, as experts in cybersecurity and AI, is a culmination of those two practices: machine learning and applying it to cybersecurity use cases. So, we're using it in an applied manner, i.e. to solve a particular set of complex tasks. In fact, if you look at the way AI is used in the majority of cases today, it's used in a focused manner to do a specific set of tasks.

In cybersecurity practice, using AI to hunt and detect advanced attackers that are active inside the cloud, inside your data, inside your network, it's really doing things at a scale and speed that human beings just aren't able to do. In doing so, cybersecurity is like a healthcare issue: if we find an issue early and resolve it early, we'll have a far more positive outcome than if we leave it festering until there's nothing to be done. It's just the same with cybersecurity.

You talk a lot about its ability to rapidly scale to bigger projects. In relation to your work, do you see AI as a way to solve problems in the future, or do you think there's a long way to go with it? Is AI the future? Or do you think humans managing AI is the future?

AI is becoming an increasingly important part of our lives. In cybersecurity practice, it's going to be a fundamental set of tools, but it won't replace human beings. Nobody's building a Skynet for cybersecurity that sorts it all out and turns the tables back on us. What we're doing is building tools at size and scale to do tasks that human beings can't do. So it's really about augmenting human ability in the context of cybersecurity, and for us, it's a touchstone of our business and a fundamental building block for cybersecurity operations now and in the future.

There's a massive skills gap in our industry, so automating some cybersecurity tasks with AI is actually a very rational solution to fixing the immediate massive skills resource gap. But it can also do things humans can't do. It's not just taking the weight off your shoulders; it's going to do things like spotting the really, really weak signals of a threat actor hiding in an encrypted web session. It's impossible to do that by hand, to do it looking at the packets, the bits and bytes; you need a deep neural net that can look at really subtle temporal changes. AI does it faster and broader, and it does things we are just not capable of doing at the same level of competency.

It's optimistic that it's going to have such a dramatic effect on our working process. In terms of data centres, how is AI working to protect them?

The data centre is changing and, I'm sure as you've seen, it's becoming increasingly hybrid. There's more stuff going out to the cloud, even though people still have private data centres and clouds. One of the main challenges that a security team has with a data centre is that, as workloads are increased, moved, made mobile or flexed, it's really hard to know about it.

As security experts usually have incomplete information, they won't know which VM you have just spun up or what it's running. They don't know all of those things, and they are meant to be agile for the business, but that agility comes with a kind of information cost. I have imperfect information; I never quite know what I've got.

I'll give you an example: I was at a very large international financial services provider and I was talking to their CCO. He had their cloud provider in, and he told me where they were at with licensing and usage. What he thought he had to cover and what the business actually had was about ten times off. So there was ten times more workload out there than he and his security team even knew about.

So how can AI help us with that?

Well, if we integrate AI and allow it to monitor virtual conversations, it can automatically watch and use past observations to spot new workloads that are coming in there and how they are interacting with other entities. It's those behaviours that are the signal that tells the AI where to look and find attackers. So it's not about putting software in the workload, just monitoring how it works with other devices.

In doing so, we can then quickly tell the security team: here are all the workloads we're seeing, here are the ones with behaviours that are consistent with active attackers, and then we can score and prioritise them. What we're doing is automating the monitoring of attacker behaviours, so as a security team you're getting more signal, less noise and less ambiguity. It's not just headline malware or exploits, which are the ways people get into systems.

What else do you see in threat actor events?

Exactly what happens next in an advanced attack. An advanced attack can play out over days, weeks, months. The attacker has got to get inside; he's had to get a credential, had to exploit a user; he's got to do research and reconnaissance; he's going to move around the environment; he's going to escalate. We call that lateral movement. Then he'll make a play for the digital jewels, which could be in the data centre or in the cloud.

So, if you can spot those behaviours that are consistent with an attacker, you've got multiple opportunities to find an attacker in the life cycle of the attack. Just to use that healthcare analogy again, find it early and it will be much better and faster to close it down. If you find them when they are running for the default gateway with a big chunk of data, doing a big exfiltration, you are almost breached and that's a bit too late.

Using AI, basically, is like being the drone in the sky looking over the totality of the digital enterprise, watching the individual devices and how the accounts are talking to each other, looking for the very subtle, hard-to-spot but robust signs of the attackers, and that's what we're doing. I can see why AI speeds that up efficiently.

Is there a specific method or security process that Vectra's cybersecurity software implements to help protect mass data centres?

That's quite an insightful question, because not all AI is built the same. AI is quite a nebulous term; it doesn't tell you what algorithmic approach people are taking. I can't give you a definitive answer for a definitive technology, but I can give you a methodology.

The methodology starts with the understanding that the attacker must manifest. If I got inside your organisation and I want to scan and look for devices, there are only so many techniques available for me to do that. That's behaviour, and we have the tools and the protocols to spot that. So, we can see how we can spot the malicious use of those legitimate tools or procedures, these TTPs.

How does that whole process start?

That starts with a security research team looking for evidence that attackers do use these behaviours, because it may be a premise, it may not be accurate. Once we've done that, we bring in a data scientist to work with this team.

So, let's find some examples of this attacker behaviour, of this behaviour manifesting in a benign way and, from an attacker, in a malicious way, and let's look at some regular known-good data. The data scientist looks at that data, does a lot of analysis and tries to understand it. They look at the attributes, what they call features, and work out the feature selection: what might be useful to build a model to find this. There are various ways you can look at data and separate the customer's infrastructure and all of the different structure inside it. Then they'll go off and build a model and they'll train it with the data. Once we've got it to an effective level of performance and we're happy with it, we release it into our Cognito NDR (network detection and response) platform, and that goes off and looks for individual behaviours.

Remote Desktop Protocol (RDP) recon will be completely different from the thing that's looking for hidden HTTPS command-and-control behaviours. So, it has different behaviours and data structures and different algorithmic approaches. However, some of those attacks manifest in the same way in everybody's network. We can pre-train those algorithms.
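(Editorial illustration: the per-behaviour workflow described above, collecting benign and malicious examples, selecting features, and training a model for each behaviour, corresponds to ordinary supervised classification. The sketch below uses invented features and data and is not Vectra's actual feature set or algorithm.)

```python
# Simplified sketch of supervised detection of one behaviour (e.g. internal
# port-scan reconnaissance). Features and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row describes one observed workload's network behaviour:
# [distinct_ports_contacted, distinct_hosts_contacted, failed_connection_ratio]
X = np.array([
    [3, 2, 0.05], [5, 4, 0.10], [2, 1, 0.00],             # benign examples
    [250, 80, 0.70], [400, 120, 0.85], [180, 60, 0.60],   # scan-like examples
])
y = np.array([0, 0, 0, 1, 1, 1])  # 1 = behaviour consistent with scanning

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a newly observed workload so the security team can prioritise it.
new_observation = np.array([[300, 95, 0.75]])
print(clf.predict_proba(new_observation))  # [P(benign), P(scan-like)]
```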

Are they aware of those behaviours?

Yes. It's like a big school: it's got its certification, it's ready to go as soon as you turn it on. There's no concept of it having to learn anything else; it's already trained, it knows what to look for. It's a complex job, but we've already trained it.

But there are some things that we could never know in advance. For example, I'll never know how your data centre is architected or what the IP ranges are; there's no way of me knowing it in advance, and there are a lot of things we can only learn through observation.

So, we call that an unsupervised model, and we blend these techniques. Some of them are supervised, some are unsupervised, some of them use recursive, really deep neural networks. Sometimes it's really challenging when we're looking into a data problem and we just can't figure it out: what are the attributes? What are the features? We know it's in there.

But what is it?

We can't figure it out, so we are going to get a neural net to do that for itself, once again doing things at a scale humans could not do in an effective way. We've got thirty patents pending now on the different algorithms that we build; they are the brains we built that do that monitoring and detection.

Do you think there are any precautions people should take to avoid cybercrime during coronavirus?

Our piece of the security puzzle is: how do I find someone who has already penetrated my organisation, in a quick manner? So we're not the technology that stops known threats coming in. You might think of healthcare, which adopted this really quickly. In healthcare, the WHO recently called out a massive spike in COVID-related phishing.

That's the threat landscape, that's what's happening out there, that's what the threat actors are doing. We are actually inside healthcare, and we did not see a particularly large spike in post-intrusion behaviour, so we did not see evidence that more attackers were getting into these organisations; they had all done a reasonable job in keeping the wolf from the door.

But what we did see, because we were watching everything, were changes in how users were working. We saw a rapid pivot to using external services, generally services associated with cloud adoption, particularly around collaboration tools, and we saw a lot of data moving out to those, which created compliance challenges.

What do you mean?

Sensitive data suddenly being sent to a third party. That's not to berate health organisations during a really challenging time; their priorities were obviously making sure clinical services were delivered. But in doing so, they also opened up the attack surface, and the potential for attackers to get in was increased.

It's important to maintain visibility so you can understand your attack surface, and you can then put in the appropriate procedures and policy and controls to minimise your risk there.

See the rest here:

How AI is helping in the fight against coronavirus and cybercrime - Software Testing News

Posted in Ai

Is China Winning the AI Race? by Eric Schmidt & Graham Allison – Project Syndicate

The COVID-19 pandemic has revealed the capabilities of the United States and China to deploy artificial intelligence in solving real-world problems. While America's performance hasn't exactly inspired confidence, it maintains some important competitive advantages.

CAMBRIDGE: COVID-19 has become a severe stress test for countries around the world. From supply-chain management and health-care capacity to regulatory reform and economic stimulus, the pandemic has mercilessly punished governments that did not or could not adapt quickly.

The virus has also pulled back the curtain on one of this century's most important contests: the rivalry between the United States and China for supremacy in artificial intelligence (AI). The scene that has been revealed should alarm Americans. China is not just on a trajectory to overtake the US; it is already surpassing US capabilities where it matters most.

Most Americans assume that their country's lead in advanced technologies is unassailable. And many in the US national-security community insist that China can never be more than a near-peer competitor in AI. In fact, China is already a full-spectrum peer competitor in terms of both commercial and national-security AI applications. China is not just trying to master AI; it is mastering AI.

The pandemic has offered a revealing early test of each country's ability to mobilize AI at scale in response to a national-security threat. In the US, President Donald Trump's administration claims that it deployed cutting-edge technology as part of its declared war on the coronavirus. But, for the most part, AI-related technologies have been used mainly as buzzwords.

Not so in China. To stop the spread of the virus, China locked down the entire population of Hubei province, 60 million people. That is more than the number of residents in every state on the US East Coast from Florida to Maine. China maintained this massive cordon sanitaire by using AI-enhanced algorithms to track residents' movements and scale up testing capabilities while massive new health-care facilities were being built.

The COVID-19 outbreak coincided with the Chinese New Year, a high-travel period. But top Chinese tech companies responded quickly by creating apps with health status codes to track citizens' movements and determine whether individuals needed to be quarantined. AI then played a critical role in helping Chinese authorities enforce quarantines and perform extensive contact tracing. Owing to China's large-scale datasets, the authorities in Beijing succeeded where the government in Washington, DC, failed.


Over the past decade, China's advantages in size, data collection, and strategic determination have allowed it to close the gap with America's AI industry. China's edge begins with its population of 1.4 billion, which affords an unparalleled pool of talent, the largest domestic market in the world, and a massive volume of data collected by companies and government in a political system that always places security before privacy. Because a primary asset in applying AI is the quantity of high-quality data, China has emerged as the Saudi Arabia of the twenty-first century's most valuable commodity.

In the context of the pandemic, China's ability and willingness to deploy these technologies for strategic value has strengthened its hard power. Like it or not, real wars in the future will be AI-driven. As Joseph Dunford, then the Chairman of the US Joint Chiefs of Staff, put it in 2018, "Whoever has the competitive advantage in artificial intelligence and can field systems informed by artificial intelligence, could very well have an overall competitive advantage."

Is China destined to win the AI race? With a population four times the size of the US, there is no question that it will have the largest domestic market for AI applications, as well as many times more data and computer scientists. And because China's government has made AI mastery a first-order priority, it is understandable why some in the US would be pessimistic.

Nonetheless, we believe that the US can still compete and win in this critical domain but only if Americans wake up to the challenge. The first step is to recognize that the US faces a serious competitor in a contest that will help to decide the future. The US cannot hope to be the biggest, but it can be the smartest. In pursuing the most advanced technologies, it is arguably the brightest 0.0001% of individuals who make the decisive difference. While China can mobilize 1.5 billion Chinese speakers, the US can recruit and leverage talent from all 7.7 billion people on Earth, because it is an open, democratic society.

Moreover, while competing vigorously to sustain the US lead in AI, we also must acknowledge the necessity of cooperation in areas where neither the US nor China can secure its own minimum vital national interests without the other's help. COVID-19 is a case in point. The pandemic threatens all countries national interests, and neither the US nor China can resolve it alone. In developing and widely deploying a vaccine, some degree of cooperation is essential, and it is worth considering whether a similar principle applies to the unconstrained development of AI.

The idea that countries could compete ruthlessly and cooperate intensely at the same time may sound like a contradiction. But in the world of business, this is par for the course. Apple and Samsung are intense competitors in the global smartphone market, and yet Samsung is also the largest supplier of iPhone parts. Even if AI and other cutting-edge technologies suggest a zero-sum competition between the US and China, coexistence is still possible. It may be uncomfortable, but it is better than co-destruction.

See the original post here:

Is China Winning the AI Race? by Eric Schmidt & Graham Allison - Project Syndicate

Posted in Ai

What's This? A Bipartisan Plan for AI and National Security – WIRED

US representatives Will Hurd and Robin Kelly are from opposite sides of the ever-widening aisle, but they share a concern that the US may lose its grip on artificial intelligence, threatening the American economy and the balance of world power.

Thursday, Hurd (R-Texas) and Kelly (D-Illinois) offered suggestions to prevent the US from falling behind China, especially, on applications of AI to defense and national security. They want to cut off China's access to AI-specific silicon chips and push Congress and federal agencies to devote more resources to advancing and safely deploying AI technology.

Although Capitol Hill is increasingly divided, the bipartisan duo claim to see an emerging consensus that China poses a serious threat and that supporting US tech development is a vital remedy.

American leadership and advanced technology has been critical to our success since World War II, and we are in a race with the government of China, Hurd says. It's time for Congress to play its role. Kelly, a member of the Congressional Black Caucus, says that she has found many Republicans, not just Hurd, the only Black Republican in the House, open to working together on tech issues. I think people in Congress now understand that we need to do more than we have been doing, she says.

The Pentagon's National Defense Strategy, updated in 2018, says AI will be key to staying ahead of rivals such as China and Russia. Thursday's report lays out recommendations on how Congress and the Pentagon should support and direct use of the technology in areas such as autonomous military vehicles. It was written in collaboration with the Bipartisan Policy Center and Georgetown's Center for Security and Emerging Technology, which consulted experts from government, industry, and academia.

The report says the US should work more closely with allies on AI development and standards, while restricting exports to China of technology such as new computer chips to power machine learning. Such hardware has enabled many recent advances by leading corporate labs, such as at Google. The report also urges federal agencies to hand out more money and computing power to support AI development across government, industry, and academia. The Pentagon is asked to think about how court martials will handle questions of liability when autonomous systems are used in war, and talk more about its commitment to ethical uses of AI.

Hurd and Kelly say military AI is so potentially powerful that America should engage in a kind of AI diplomacy to prevent dangerous misunderstandings. One of the report's 25 recommendations is that the US establish AI-specific communication procedures with China and Russia to allow human-to-human dialog to defuse any accidental escalation caused by algorithms. The suggestion has echoes of the Moscow-Washington hotline installed in 1963 during the Cold War. Imagine in a high-stakes issue: What does a Cuban missile crisis look like with the use of AI? asks Hurd, who is retiring from Congress at the end of the year.


Beyond such worst-case scenarios, the report includes more sober ideas that could help dismantle some hype around military AI and killer robots. It urges the Pentagon to do more to test the robustness of technologies such as machine learning, which can fail in unpredictable ways in fast-changing situations such as a battlefield. Intelligence agencies and the military should focus AI deployment on back-office and noncritical uses until reliability improves, the report says. That could presage fat new contracts to leading computing companies such as Amazon, Microsoft, and Google.

Helen Toner, director of strategy at the Georgetown center, says although the Pentagon and intelligence community are trying to build AI systems that are reliable and responsible, there's a question of whether they will have the ability or institutional support. Congressional funding and oversight would help them get it right, she says.

See the rest here:

What's This? A Bipartisan Plan for AI and National Security - WIRED

Posted in Ai

Now More Than Ever We Should Take Advantage of the Transformational Benefits of AI and ML in Healthcare – Managed Healthcare Executive

As healthcare businesses transform for a post-COVID-19 era, they are embracing digital technologies as essential for outmaneuvering the uncertainty faced by businesses and as building blocks for driving more innovation. Maturing digital technologies such as social, mobile, analytics and cloud (SMAC); emerging technologies such as distributed ledger, artificial intelligence, extended reality and quantum computing (DARQ); and scientific advancements (e.g., CRISPR, materials science) are helping to make innovative breakthroughs a reality.

These technologies are also proving essential in supporting COVID-19 triage efforts. For example, hospitals in China are using artificial intelligence (AI) to scan lungs, which is reducing the burden on healthcare providers and enabling earlier intervention. Hospitals in the United States are also using AI to intercept individuals with COVID-19 symptoms from visiting patients in the hospital.

Because AI and machine learning (ML) definitions can often be confused, it may be best to start by defining our terms.

AI can be defined as a collection of different technologies that can be brought together to enable machines to act with what appears to be human-like levels of intelligence. AI provides the ability for technology to sense, comprehend, act and learn in a way that mimics human intelligence.

ML can be viewed as a subset of AI that provides software, machines and robots the ability to learn without static program instructions.

ML is currently being used across the health industry to generate personalized product recommendations to consumers, identify the root cause of quality problems and fix them, detect healthcare claims fraud, and discover and recommend treatment options to physicians. ML-enabled processes rely on software, systems, robots or other machines which use ML algorithms.

For the healthcare industry, AI and ML represent a set of inter-related technologies that allow machines to perform and help with both administrative and clinical healthcare functions. Unlike legacy technologies that are algorithm-based tools that complement a human, health-focused AI and ML today can truly augment human activity.

The full potential of AI is moving beyond mere automation of simple tasks into a powerful tool enabling collaboration between humans and machines. AI is presenting an opportunity to revolutionize healthcare jobs for the better.

Recent research indicates that in order to maximize the potential of AI and to be digital leaders, healthcare organizations must re-imagine and re-invent their processes and create self-adapting, self-optimizing living processes that use ML algorithms and real-time data to continuously improve.

In fact, there's consensus among healthcare organizations that ML-enabled processes help achieve previously hidden or unobtainable value, and that these processes are finding solutions to previously unsolved business problems.

Despite these key findings, additional research surprisingly finds that only 39% of healthcare organizations report that they have inclusive design or human-centric design principles in place to support human-machine collaboration. Machines themselves will become agents of process change, unlocking new roles and new ways for humans and machines to work together.

In order to tap into the unique strengths of AI, healthcare businesses will need to rely on their people's talent and ability to steward, direct, and refine the technology. Advances in natural language processing and computer vision can help machines and people collaborate and understand one another and their surroundings more effectively. It will be vital to prioritize explainability to help organizations ensure that people understand AI.

Powerful AI capabilities are already delivering profound results across other industries such as retail and automotive. Healthcare organizations now have an opportunity to integrate the new skills needed to enable fluid interactions between human and machines and adapt to the workforce models needed to support these new forms of collaboration.

By embracing the growing adoption of AI, healthcare organizations will soon see the potential benefits and value of AI, such as organizational and workflow improvements that can unleash improvements in cost, quality and access. Growth in the AI health market is expected to reach $6.6 billion by 2021; that's a compound annual growth rate of 40%. In just the next couple of years, the health AI market will grow more than 10 times.

AI generally, and ML specifically, gives us technology that can finally perform specialized nonroutine tasks as it learns for itself without explicit human programming, shifting nonclinical judgment tasks away from healthcare enterprise workers.

What will be key to the success of healthcare organizations leveraging AI and ML across every process, piece of data and worker? When AI and ML are effectively added to the operational picture, we will see healthcare systems where machines will take on simple, repetitive tasks so that humans can collaborate on a larger scale and work at a higher cognitive level. AI and ML can foster a powerful combination of strategy, technology and the future of work that will improve both labor productivity and patient care.

Brian Kalis is a managing director of digital health and innovation for Accenture's health business.

See the rest here:

Now More Than Ever We Should Take Advantage of the Transformational Benefits of AI and ML in Healthcare - Managed Healthcare Executive

Posted in Ai

How AI is revolutionizing healthcare – Nurse.com

AI applications in healthcare can literally change patients' lives, improving diagnostics and treatment and helping patients and healthcare providers make informed decisions quickly.

AI in the global healthcare market (the total value of products and services sold) was valued at $2.4 billion in 2019 and is projected to reach $31.02 billion in 2025.

Now in the COVID-19 pandemic, AI is being leveraged to identify virus-related misinformation on social media and remove it. AI is also helping scientists expedite vaccine development, track the virus and understand individual and population risk, among other applications.

Companies such as Microsoft, which recently stated it will dedicate $20 million to advance the use of artificial intelligence in COVID-19 research, recognize the need for and extraordinary potential of AI in healthcare.

The ultimate goal of AI in healthcare is to improve patient outcomes by revolutionizing treatment techniques. By analyzing complex medical data and drawing conclusions without direct human input, AI technology can help researchers make new discoveries.

Various subtypes of AI are used in healthcare. Natural language processing algorithms give machines the ability to understand and interpret human language. Machine learning algorithms teach computers to find patterns and make predictions based on massive amounts of complex data.
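To make the pattern-finding idea concrete, a toy sketch (with invented features, values, and labels that carry no clinical meaning) might look like this:

```python
# Toy illustration of machine learning "finding patterns and making predictions":
# a model is fit to labeled patient records and then scores a new patient.
# Features, values, and labels are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each record: [age, systolic_bp, resting_heart_rate]; label 1 = higher risk.
records = [
    [34, 118, 68], [45, 122, 72], [29, 110, 64],    # lower-risk examples
    [67, 165, 95], [72, 170, 101], [59, 158, 92],   # higher-risk examples
]
labels = [0, 0, 0, 1, 1, 1]

model = LogisticRegression(max_iter=1000).fit(records, labels)

# Estimated probability that a new patient falls in the higher-risk class.
print(model.predict_proba([[61, 150, 88]])[0][1])
```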

AI is already playing a huge role in healthcare, and its potential future applications are game-changing. We've outlined four distinct ways that AI is transforming the healthcare industry.

This transformative technology has the ability to improve diagnostics, advance treatment options, boost patient adherence and engagement, and support administrative and operational efficiency.

AI can help healthcare professionals diagnose patients by analyzing symptoms, suggesting personalized treatments and predicting risk. It can also detect abnormal results.

Analyzing symptoms, suggesting personalized treatments and predicting risk

Many healthcare providers and organizations are already using intelligent symptom checkers. This machine learning technology asks patients a series of questions about their symptoms and, based on their answers, informs them of appropriate next steps for seeking care.

Buoy Health offers a web-based, AI-powered health assistant that healthcare organizations are using to triage patients who have symptoms of COVID-19. It offers personalized information and recommendations based on the latest guidance from the Centers for Disease Control and Prevention.

Additionally, AI can take precision medicine, healthcare tailored to the individual, to the next level by synthesizing information and drawing conclusions, allowing for more informed and personalized treatment. Deep learning models have the ability to analyze massive amounts of data, including information about a patient's genetic content, other molecular/cellular analysis and lifestyle factors, and find relevant research that can help doctors select treatments.

AI can also be used to develop algorithms that make individual and population health risk predictions in order to help improve outcomes. At the University of Pennsylvania, doctors used a machine learning algorithm that can monitor hundreds of key variables in real time to anticipate sepsis or septic shock in patients 12 hours before onset.

Detecting disease

Imaging tools can advance the diagnostic process for clinicians. The San Francisco-based company Enlitic develops deep learning medical tools to improve radiology diagnoses by analyzing medical data. These tools allow clinicians to better understand and define the aggressiveness of cancers. In some cases, these tools can replace the need for tissue samples with virtual biopsies, which would aid clinicians in identifying the phenotypes and genetic properties of tumors.

These imaging tools have also been shown to make more accurate conclusions than clinicians. A 2017 study published in JAMA found that of 32 deep learning algorithms, seven were able to diagnose lymph node metastases in women with breast cancer more accurately than a panel of 11 pathologists.

Smartphones and other portable devices may also become powerful diagnostic tools that could benefit the areas of dermatology and ophthalmology. The use of AI in dermatology focuses on analyzing and classifying images and the ability to differentiate between benign and malignant skin lesions.

Using smartphones to collect and share images could widen the capabilities of telehealth. In ophthalmology, the medical device company Remidio has been able to detect diabetic retinopathy using a smartphone-based fundus camera, a low-power microscope with an attached camera.

AI is becoming a valuable tool for treating patients. Brain-computer interfaces could help restore the ability to speak and move in patients who have lost these abilities. This technology could also improve the quality of life for patients with ALS, strokes, or spinal cord injuries.

There is potential for machine learning algorithms to advance the use of immunotherapy, to which currently only 20% of patients respond. New technology may be able to determine new options for targeting therapies to an individual's unique genetic makeup. Companies like BioXcel Therapeutics are working to develop new therapies using AI and machine learning.

Additionally, clinical decision support systems can help healthcare professionals make better decisions by analyzing past, current and new patient data. IBM offers clinical support tools to help healthcare providers make more informed and evidence-based decisions.

Finally, AI has the potential to expedite drug development by reducing the time and cost for discovery. AI supports data-driven decision making, helping researchers understand what compounds should be further explored.

Wearables and personalized medical devices, such as smartwatches and activity trackers, can help patients and clinicians monitor health. They can also contribute to research on population health factors by collecting and analyzing data about individuals.

These devices can also be useful in helping patients adhere to treatment recommendations. Patient adherence to treatment plans can be a factor in determining outcome. When patients are noncompliant and fail to adjust their behaviors or take prescribed drugs as recommended, the care plan can fail.

The ability of AI to personalize treatment could help patients stay more involved and engaged in their care. AI tools can be used to send patients alerts or content intended to provoke action. Companies like Livongo are working to give users personalized health nudges through notifications that promote decisions supporting both mental and physical health.

AI can be used to create a patient self-service model: an online portal, accessible from portable devices, that is more convenient and offers more choice. A self-service model helps providers reduce costs and helps consumers access the care they need in an efficient way.

AI can improve administrative and operational workflow in the healthcare system by automating parts of the process. Recording notes and reviewing medical records in electronic health records takes up 34% to 55% of physicians' time, making it one of the leading causes of lost productivity for physicians.

Clinical documentation tools that use natural language processing can help reduce the time clinicians spend on documentation and give them more time to focus on delivering top-quality care.

Health insurance companies can also benefit from AI technology. The current process of evaluating claims is quite time-consuming, since 80% of healthcare claims are flagged by insurers as incorrect or fraudulent. Natural language processing tools can help insurers detect issues in seconds, rather than days or months.

More here:

How AI is revolutionizing healthcare - Nurse.com


Building The World's Top AI Industry Community – Lessons From Ai4 – Forbes

When it comes to artificial intelligence, and technology in general, we as a society are often guilty of thinking of it as separate from humanity. However, AI and humanity are of course intricately entwined, as AI is built by humans. Just as the saying goes, "no man is an island," the same can be said of what we build.

Artificial intelligence (AI) has now permeated all sectors of business - and for good reason.

Across multiple industries, difficult or expensive tasks can be automated by AI/ML and, as a result, catapult even failing businesses into success. AI boasts a seemingly infinite list of applications - from improving customer experience to curing sleep disorders.

However, as the use of artificial intelligence rises, so does the need for cross-industry communication on the topic.

Ai4 is unique in that it creates a bridge between industries, leaders, and technologists. The company provides a common framework for what AI means to both the enterprise and the future of our globe as we transition into a new era of responsible human-machine collaboration.

One attendee, a senior manager at The Aldo Group, commented on the benefits of this approach, stating, "I gained great insights into how my peers and competitors are leveraging AI & ML."


Ai4 was started by co-founders Marcus Jecklin and Michael Weiss as a small 350-person AI for financial services conference at a hotel in Brooklyn, NY. Since then, it has grown to be the top community for industry professionals seeking to learn about artificial intelligence.

Ai4 convenes thousands of people each year and reaches tens of thousands more through offline and online events, AI enterprise trainings, AI blogs and newsletters, AI matchmaking programs, and an AI jobs board.

Ai4 2020 (originally scheduled to take place at the MGM Grand in Las Vegas, now taking place digitally) promises to be an incredibly impactful event.

By gathering leaders of enterprise from across industry, government organizations, disruptive startups, investors, research labs, academia, associations, open source projects, media and analysts, Ai4 is creating the largest and most influential venue for AI-related idea-sharing, commerce, and technological progress.

Speakers for this year's event include Salahuddin Khawaja, Managing Director - Automation / Global Risk, Bank of America Merrill Lynch; Stephen Wong, Chief Informatics Officer, Houston Methodist; Ameen Kazerouni, Head of ML/AI Research and Platforms, Zappos; Barret Zoph, Staff Research Scientist, Google Brain; and Meltem Ballan, Data Science Lead, General Motors.


"The speakers were amazing," commented an Assortment & Space Analyst at BJ's Wholesale regarding past Ai4 live events. "They covered a wide range of topics that will certainly help push our AI initiatives forward."

Success in business can often be attributed to networking - and this is no different when it comes to technology. Networks foster the exchange of ideas, as well as mutual confidence and understanding.

They also enable best practices to be created and distributed. Herein lies the genius of Ai4's AI Matchmaking system. Through this system, Ai4 arranges digital 1-1 meetings between industry leaders and vetted AI companies from the Ai4 community.

The results of this model seem to speak for themselves, according to participants. "As far as recommendations go, I don't believe there is anything currently on the market that competes or provides as much value as Ai4's 1:1 virtual meetings," said the founder and CEO of Medlytics.

For any technology, a lack of open channels for communication will not only stall progress, but in the case of AI, it could also mean profound impacts for society. Simply put, the more perspectives we encourage in this field, the more comprehensive the discussions of ethical implications and the more inclusive the development.

Ai4 has been able to effectively address this need in the AI community by facilitating not only (virtual) spaces, such as its AI Slack Community, but also conversations: it frequently hosts webinars led by AI industry leaders on pressing topics.


Additionally, Ai4 provides AI training in the form of open-enrollment courses for data scientists and executives, as well as enterprise AI training advisory services, ensuring that Ai4 community members remain on the cutting edge. With the explosion of AI education providers in recent years, Ai4 is using its expertise to help enterprises navigate the AI education landscape and find the optimal curriculum at a fair price.

"The individual presentations and moderated panels had a great combination of thoughtful commentary and technical details," commented the founder and CEO of RCM Brain, "satisfying a diverse audience of technologists and business leaders."

Perhaps now more than ever, it is crucial that we remain connected - especially when it comes to the innovations that will shape our future. Ai4 is demonstrating the right way to build an advanced technology community with global, virtual conversations made up of diverse, cross-cultural, cross-industry perspectives and led by the world's preeminent experts.

Link:

Building The World's Top AI Industry Community - Lessons From Ai4 - Forbes


New AI Tool GPT-3 Ascends to New Peaks, But Proves How Far We Still Need to Travel – JD Supra

If you want a glimpse of the future, check out how developers are using GPT-3.

This natural language processor was trained on parameters ten times greater than its most sophisticated rival and can be used to answer questions and write astoundingly well. Creative professionals everywhere, from top coders to professional writers, marvel at what GPT-3 can produce even now in its relative infancy.

Yesterday, New York Times tech columnist Farhad Manjoo wrote that the short glimpse the general public has taken of GPT-3 is "at once amazing, spooky, humbling, and more than a little terrifying." GPT-3 is capable of generating entirely original, coherent, and sometimes even factual prose. And not just prose: it can write poetry, dialogue, memes, computer code, and who knows what else. Manjoo speculated on whether a similar but more advanced AI might replace him someday.

On the other hand, a recent Technology Review article describes the AI as "shockingly good and completely mindless." After describing some of the GPT-3 highlights the public has seen so far, it concedes, "For one thing, the AI still makes ridiculous howlers that reveal a total lack of common sense. But even its successes have a lack of depth to them, reading more like cut-and-paste jobs than original compositions."

Wired noted in a story last week that GPT-3 was built by directing machine-learning algorithms to study the statistical patterns in almost a trillion words collected from the web and digitized books. The system memorized the forms of countless genres and situations, from C++ tutorials to sports writing. It uses its digest of that immense corpus to respond to a text prompt by generating new text with similar statistical patterns. The results can be technically impressive, and also fun or thought-provoking, as the poems, code, and other experiments attest. But the article also stated that GPT-3 often "spews contradictions or nonsense, because its statistical word-stringing is not guided by any intent or a coherent understanding of reality."
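That "statistical word-stringing" can be illustrated with a toy next-token model. The sketch below builds bigram statistics from a tiny corpus and samples new text with similar patterns; it is a vastly simplified stand-in for GPT-3's transformer, not its actual mechanism, and the corpus is invented.

```python
# Toy illustration of generating text from learned statistical patterns.
# GPT-3 uses a large transformer; this bigram model is only a conceptual stand-in.
import random
from collections import defaultdict

corpus = "the model studies patterns in text and the model generates new text".split()

# Count which word tends to follow which.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(prompt: str, length: int = 8) -> str:
    word = prompt
    output = [word]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample a statistically plausible continuation
        output.append(word)
    return " ".join(output)

print(generate("the"))
```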

GPT-3 is the latest iteration of the language-processing machine learning programs from OpenAI, an enterprise funded in part by Elon Musk, and its training is orders of magnitude more complex than either its previous offering or the closest competitor. The program is currently in a controlled beta test where whitelisted programmers can make requests and run projects on the AI. According to Technology Review, "For now, OpenAI wants outside developers to help it explore what GPT-3 can do, but it plans to turn the tool into a commercial product later this year, offering businesses a paid-for subscription to the AI via the cloud."

GPT-3 provides a staggering glimpse of what the future can be. Simple computer tasks can be built and then confirmed in the AI, so it will know how to create custom buttons on your webpage. Developer Sharif Shameen built a layout generator with GPT-3 so he could simply ask for "a button that looks like a watermelon" and the AI would give him one.

This outcome shouldn't surprise anyone: a good natural language processor develops capabilities to translate from natural English to action or to another language, and computer code is little more than an expression of intent in a language that the computer can read. So translating simple English instructions into Python should not be impossible for a sophisticated AI that has read multiple Python drafting manuals.

Of course, some of the coding community is freaking out at the prospect of being replaced by this AI. Even legendary coder John Carmack, who pioneered 3D computer graphics in early video games like Doom and is now consulting CTO at Oculus VR, was unnerved: "The recent, almost accidental, discovery that GPT-3 can sort of write code does generate a slight shiver."

OK, so GPT-3 has been trained on countless coding manuals and instruction sets. But freaketh not: while GPT-3 can sometimes generate usable code, it still applies no common sense, and therefore non-technical types can't rely on it to produce machine-readable language that can perform sophisticated tasks.

For any of you who have taken a coding course, you know that coaxing the right things out of a computer requires coders to be literal and precise in ways that are difficult for an AI to approximate. So a non-coder is likely to be frustrated with AI-generated code at this point in the process. If anything, gpt-3 is a step in the process toward easier coding, requiring a practiced software engineer to develop the right sets of questions for the AI to produce usable code quickly.

I talked about the hype cycle in one of last week's posts, and while GPT-3 is worth the hype as an advance in AI training, where more is clearly better (the model has 175 billion parameters), it is only an impressive step in a larger process. OpenAI and its competitors will find useful applications for all of this power and continue to work toward a more general intelligence.

There are many reasons to be wary. Like others before it, this AI picks up biases in its training, and it was trained on the internet, so expect some whoppers. Wired observed that Facebook's head of AI accused the service of being unsafe and tweeted screenshots from a website that generates tweets using GPT-3, which suggested the system associates Jews with a love of money and women with a poor sense of direction. GPT-3 has not been trained to avoid offensive assumptions.

But the AI still has the power to astonish and may permit some incredible applications. It hasn't even been officially released as a product yet. Watch this space. As developers, writers, business executives, and artists learn to do more amazing tasks with GPT-3 (and GPT-4 and 5), we will continue to report on it.


See the original post:

New AI Tool GPT-3 Ascends to New Peaks, But Proves How Far We Still Need to Travel - JD Supra


AI Weekly: Big Tech's antitrust reckoning is a cautionary tale for the AI industry – VentureBeat

This week, as the heads of four of the largest and most powerful tech companies in the world were called before a virtual congressional antitrust hearing to answer inquiries into how they built and run their respective behemoths, you could see that the bloom on the rose of Big Tech has faded.

Facebook's Mark Zuckerberg, once the rascally college-dropout boy genius you loved to hate, still doesn't seem to grasp the magnitude of the problem of globally destructive misinformation and hate speech on his platform. Tim Cook struggles to defend how Apple takes a 30% cut from some of its app store developers' revenue, a policy he didn't even establish that is a vestige of Apple's mid-2000s vise grip on the mobile app market. The plucky young upstarts who founded Google are both middle-aged and have stepped down from executive roles, quietly fading away while Alphabet and Google CEO Sundar Pichai runs the show. And Jeff Bezos wears the untroubled visage of the world's richest man.

Amazon, Apple, Facebook, and Google all created tech products and services that have undeniably changed the world, some in ways that are undeniably good. But as these tech titans moved fast and broke things, they also largely excused themselves from asking difficult ethical questions, from how they built their business empires to the impacts their products and services have on the people who use them.

As AI continues to lead the next wave of transformative technology, skating over these difficult questions is a mistake the world can't afford to repeat. What's more, AI technologies won't actually work properly unless companies address the issues at their heart.

Smart and ruthless was the tradition of Big Tech, but AI requires people to be smart and wise. Those working in AI have to not only ensure the efficacy of what they make, but also holistically understand the potential harms to the people their AI tech impacts. That's a more mature and just way of building world-changing technologies, products, and services. Fortunately, many prominent voices in AI are leading the field down that path.

This week's best example was the widespread reaction to a service called Genderify, which promised to use natural language processing (NLP) to help companies identify customers' gender using only their name, username, or email address. The entire premise is absurd and problematic, and when AI folks got ahold of it to put it through its paces, they predictably found it to be terribly biased (which is to say, broken).

Genderify was such a bad joke that it almost seemed like some kind of performance art. In any case, it was laughed off the internet. Just a day or so after it was launched, the Genderify site, Twitter account, and LinkedIn page were gone.

It's frustrating to many in the field that such ill-conceived and poorly executed AI offerings keep popping up. But the swift and wholesale deletion of Genderify illustrates the power and strength of this new generation of principled AI researchers and practitioners.

The burgeoning AI sector is already experiencing the kind of reckoning Big Tech is only facing after decades. Other recent examples include an outcry over a paper that promised to use AI to identify criminality from people's faces (really just AI phrenology), which led to the paper being withdrawn from publication. Landmark studies on bias in facial recognition have led to bans and moratoriums on the technology's use in several U.S. cities, as well as a raft of legislation to eliminate or combat its potential abuses. Fresh research is finding intractable problems with bias in well-established data sets like 80 Million Tiny Images and the legendary ImageNet, and is leading to immediate (if overdue) change. And there's more.

Although advocacy groups play a role in pushing for changes and posing tough questions, the authority for such inquiry and the research-based proof is coming from people inside the field of AI: ethicists, researchers looking for ways to improve AI techniques, and actual practitioners.

There is, of course, an immense amount of work to be done and many more battles ahead as AI fuels the next dominant set of technologies. Look no further than problematic AI in surveillance, military, the courts, employment, policing, and more.

But seeing tech giants like IBM, Microsoft, and Amazon pull back on massive investments in facial recognition is a sign of progress. It doesn't actually matter whether their actions are narrative cover for a capitulation to other companies' market dominance, a calculated move to avoid potential legislative punishment, or just a PR stunt. For whatever reason, these companies acknowledged the value of slowing down and reducing damage rather than continuing to move fast and break things.

More here:

AI Weekly: Big Tech's antitrust reckoning is a cautionary tale for the AI industry - VentureBeat


We’ve forgotten the most important thing about AI. It’s time to remember it again – ZDNet

In the 18th century, Hungarian inventor Wolfgang von Kempelen created a never-before-seen chess-playing machine. The automaton, called the Mechanical Turk, could handle a game of chess against a human player, and pretty well at that: it even defeated Napoleon Bonaparte in 1809, during a campaign in Vienna.

It was eventually revealed that von Kempelen's invention was an elaborate hoax. The machine, in reality, secretly hid a human chess master who directed every move. The Mechanical Turk was destroyed in the mid-19th century; but hundreds of years later, the story provides a telling metaphor for artificial intelligence.

A common narrative that surrounds AI is that the technology has agency. We hear that AI can solve climate change, build smart cities and find new drugs, and less often that in fact, it is a human programmer who is using an AI system to achieve all of those feats. Just like the human chess master hid behind von Kempelen's ingenious mechanism, so do engineers, programmers, and software developers disappear behind the algorithm.


The relevance of the Hungarian machine is such that Amazon borrowed the name for one of its units, one that is often less known than Prime or Fresh. Amazon Mechanical Turk is a division of the company that crowdsources the tedious job of labeling the huge datasets that feed AI systems to millions of remote "Turkers".

"I'm amazed that every four months or so, I catch a Tweet from someone who realizes what Amazon's Mechanical Turk is," Daniel Leufer, Mozilla fellow and technologist, tells ZDNet. "I find it fascinating that Amazon calls a platform designed to mask the human agency behind AI Mechanical Turk. We're not even hiding what we're trying to do here."

Leufer has just put the final touches to a new project to debunk common AI myths, which he has been working on since he received his Mozilla fellowship, an award designed for web activists and technology policy experts. And one of the most pervasive of those myths is that AI systems can and do act of their own accord, without supervision from humans.

It certainly doesn't help that artificial intelligence is often associated with humanoid robots, suggesting that the technology can match human brains. An AI system deployed, say, to automate insurance claims, is very unlikely to come in the form of a human-looking robot, and yet that is often the portrayal that is made of the technology, regardless of its application.

Leufer calls those "inappropriate robots", often shown carrying out human tasks that would never be necessary for an automaton. Among the most common offenders are robots typing on keyboards and robots wearing headphones or using laptops.

The powers we ascribe to AI as a result even have legal ramifications: there is an ongoing debate about whether an AI system should own intellectual property (a proposal refuted by the European Patent Office and the UK Intellectual Property Office), or whether automatons should be granted citizenship. In 2017, for instance, Shibuya Mirai became the first chatbot to be granted residency in Tokyo by the Japanese government.

The current representation of AI feeds into the perception that the technology comes in one form, and one form only: a super-powerful system capable of general intelligence, that is, of performing intelligently across a range of complex tasks and eventually completing anything that a human can do.

Although achieving such a sophisticated form of artificial intelligence is not a prospect envisaged by many scientists, it seems to be the narrative that dominates even the highest level of geo-politics. "There is an entire narrative around the race for AI supremacy going on between the US, China and Europe," says Leufer. "That just doesn't make sense."

"If you believe we're headed towards an end-point, where a super-intelligence will grant you technological supremacy, then maybe it makes sense, but that's not the case. This is not a zero-sum game," he continues.

In reality, AI as we know it is still narrow. It can only solve a range of single tasks, and the step up to general intelligence is still far away in the future. But even if the anticipation of super-intelligence is currently unfounded, the consequences of misrepresenting the technology are very real.

Leufer takes the example of facial recognition, which he believes needs to be banned across the EU. The response he got from regulators, he argues, shows a lack of understanding of the technology.

"The idea is that this is a part of AI, and AI is inevitable, so we'll have to adopt it eventually and we better develop it ourselves so it is imbued with European values," says Leufer. "But AI is not just one technology. There are many ways you can use it."


Becoming a leader in industry robotics doesn't have to go hand-in-hand with developing facial recognition, just because both tap AI-enabled capabilities. It might be less exciting than the prospect of a super-intelligence, but AI is not one huge technology waiting to be cracked. In other words, artificial intelligence is not an all-or-nothing proposition.

And so, as countries around the world race to develop all potential AI applications, regulation is crucial to make sure that the development of what Leufer calls "creepy stuff" is limited.

He is currently working with German NGO AlgorithmWatch to push for the creation of public registers for AI systems, in which public authorities and governments would have to provide basic information about the ways that they are using the technology, together with risk assessments, and even a way for citizens to contest the application.

"At the moment we're working in the dark, we don't know what's being used," says Leufer. Super-intelligent humanoid robots might still be a long way off, but narrow AI isn't short of issues that need fixing right now.

The rest is here:

We've forgotten the most important thing about AI. It's time to remember it again - ZDNet


How AI and ML Applications Will Benefit from Vector Processing – EnterpriseAI

As expected, artificial intelligence (AI) and machine learning (ML) applications are already having an impact on society. Many industries that we tap into daily, such as banking, financial services and insurance (BFSI), and digitized health care, can benefit from AI and ML applications to help them optimize mission-critical operations and execute functions in real time.

The BFSI sector is an early adopter of AI and ML capabilities. Natural language processing (NLP) is being implemented for personally identifiable information (PII) privacy compliance, chatbots and sentiment analysis; for example, mining social media data for underwriting and credit scoring, as well as investment research. Predictive analytics assess which assets will yield the highest returns. Other AI and ML applications include digitizing paper documents and searching through massive document databases. Additionally, anomaly detection and prescriptive analytics are becoming critical tools for the cybersecurity sector of BFSI for fraud detection and anti-money laundering (AML).1
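The article does not describe a specific fraud-detection model, but as a hedged illustration of the anomaly-detection idea, an unsupervised detector can flag transactions that deviate from historical patterns. The sketch below uses scikit-learn's IsolationForest on made-up transaction features (amount, hour of day, merchant distance); none of these details come from the source.

```python
# Illustrative anomaly detection for transaction monitoring (made-up data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical transactions: amount (USD), hour of day, distance from home (km).
normal = np.column_stack([
    rng.gamma(2.0, 40.0, 5000),      # typical amounts
    rng.integers(8, 22, 5000),       # daytime hours
    rng.exponential(5.0, 5000),      # mostly local merchants
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A few incoming transactions to score; the last one is deliberately unusual.
incoming = np.array([
    [35.0, 12, 2.0],
    [60.0, 18, 8.0],
    [4800.0, 3, 900.0],
])
flags = detector.predict(incoming)  # -1 marks an anomaly worth review
for txn, flag in zip(incoming, flags):
    print(txn, "FLAG" if flag == -1 else "ok")
```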

Scientists searching for solutions to the COVID-19 pandemic rely heavily on data acquisition, processing and management in health care applications. They are turning to AI, ML and NLP to track and contain the coronavirus, as well as to gain a more comprehensive understanding of the disease. Applications for AI and ML include medical research for developing a vaccine, tracking the spread of the disease, evaluating the effects of COVID-19 interventions, using natural language processing of social media to understand the impact on society, and more.2

Processing a Data Avalanche

The fuel for BFSI applications like fraud detection, AML applications and chatbots, or health applications such as tracking the COVID-19 pandemic, is decision support systems (DSSs) containing vast amounts of structured and unstructured data. Overall, experts predict that by 2025, 79 trillion GB of data will have been generated globally.3 This avalanche of data is making it difficult for scalar-based high-performance computers to run data mining (DM) workloads and a DSS effectively and efficiently for their intended applications. More powerful accelerator cards, such as vector processing engines supported by optimized middleware, are proving able to efficiently process enterprise data lakes to populate and update data warehouses, from which meaningful insights can be presented to the intended decision makers.

Resurgence of Vector Processors

There is currently a resurgence in vector processing, which, due to the cost, was previously reserved for the most powerful supercomputers in the world. Vector processing architectures are evolving to provide supercomputer performance in a smaller, less expensive form factor using less power, and they are beginning to outpace scalar processing for mainstream AI and ML applications. This is leading to their implementation as the primary compute engine in high performance computing applications, freeing up scalar processors for other mission critical processing roles.

Vector processing has unique advantages over scalar processing when operating on certain types of large datasets. In fact, a vector processor can be more than 100 times faster than a scalar processor, especially when operating on the large amounts of statistical data and attribute values typical for ML applications, such as sparse matrix operations.

While both scalar and vector processors rely on instruction pipelining, a vector processor pipelines not only the instructions but also the data, which reduces the number of fetch-then-decode steps, in turn reducing the number of cycles spent decoding. To illustrate this, consider the simple operation shown in Figure 1, in which two groups of 10 numbers are added together. Using a standard programming language, this is performed by writing a loop that sequentially takes each pair of numbers and adds them together (Figure 1a).

Figure 1: Executing the task defined above, the scalar processor (a) must perform more steps than the vector processor (b).

When performed by a vector processor, this task requires only two address translations, and fetch and decode is performed only once (Figure 1b), rather than the 10 times required by a scalar processor (Figure 1a). And because the vector processor's code is smaller, memory is used more efficiently. Modern vector processors also allow different types of operations to be performed simultaneously, further increasing efficiency.
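The contrast in Figure 1 can be approximated in code: a scalar-style loop issues one add per iteration, while a vectorized expression states the whole addition over both arrays at once. NumPy is used here purely to mimic the vector form; it is not the vector engine's actual instruction set or toolchain.

```python
# Scalar-style loop vs. vectorized addition over two groups of 10 numbers.
import numpy as np

a = np.arange(10)
b = np.arange(10, 20)

# Figure 1a: scalar approach, one element pair handled per loop iteration.
result_scalar = np.empty_like(a)
for i in range(len(a)):
    result_scalar[i] = a[i] + b[i]

# Figure 1b: vector approach, the whole operation expressed at once.
result_vector = a + b

assert np.array_equal(result_scalar, result_vector)
print(result_vector)
```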

To bring vector processing capabilities into applications less esoteric than scientific ones, it is possible to combine vector processors with scalar CPUs to produce a vector parallel computer. This system comprises a scalar host processor, a vector host running Linux, and one or more vector processor accelerator cards (or vector engines), creating a heterogeneous compute server that is ideal for broad AI and ML workloads and data analytics applications. In this scenario, the primary computational components are the vector engines, rather than the host processor. These vector engines also have self-contained memory subsystems for increased system efficiency, rather than relying on the host processor's direct memory access (DMA) to route packets of data through the accelerator card's I/O pins.

Software Matters

Processors perform only as well as the compilers and software instructions that are delivered to them. Ideally, these should be based on industry-standard programming languages such as C/C++. For AI and ML application development, there are several frameworks available, with more emerging. A well-designed vector engine compiler should support both industry-standard programming languages and open source AI and ML frameworks such as TensorFlow and PyTorch. A similar approach should be taken for database management and data analytics, using proven frameworks such as Apache Spark and scikit-learn. This software strategy allows for seamless migration of legacy code to vector engine accelerator cards. Additionally, by using the message passing interface (MPI) to implement distributed processing, configuration and initialization become transparent to the user.
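The MPI point can be made concrete with a short, hedged sketch: each rank processes its own slice of the data and the partial results are combined, with the launcher rather than the application code handling process configuration. The mpi4py binding is used here as a stand-in; the article does not name a specific MPI binding or show the vector-engine middleware itself.

```python
# Minimal MPI data-parallel sketch using mpi4py.
# Run with, e.g.: mpiexec -n 4 python script.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank works on its own slice of a larger dataset.
local_data = np.arange(rank * 1000, (rank + 1) * 1000, dtype=np.float64)
local_sum = local_data.sum()

# Combine partial results across all ranks.
total = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print(f"Sum computed across {size} ranks: {total}")
```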

Conclusion

AI and ML are driving the future of computing and will continue to permeate more applications and services. Many of these application deployments will be implemented in smaller server clusters, perhaps even a single chassis. Accomplishing such a feat requires revisiting the entire spectrum of AI technologies and heterogeneous computing. The vector processor, with advanced pipelining, is a technology that proved itself long ago. Vector processing paired with middleware optimized for parallel pipelining is lowering the entry barriers for new AI and ML applications, and is set to take on challenges, both today and in the future, that were once within reach only of the hyperscale cloud providers.

References

About the Author

Robbert Emery is responsible for commercializing NEC Corporation's advanced technologies in HPC and AI/ML platform solutions. His role includes discovering and lowering the entry point and initial investment for enterprises to realize the benefits of big data analytics in their operations. Robbert has developed a career of over 20 years in the ICT industry's emerging technologies, including mobile network communications, embedded technologies and high-volume manufacturing. Prior to joining NEC's technology commercialization accelerator, NEC X Inc., in Palo Alto, California, Robbert led the product and business plan for an embedded solutions company that resulted in a leadership position, in terms of both volume and revenue. He has an MBA from SJSU's Lucas College and Graduate School of Business, as well as a bachelor's degree in electrical engineering from California Polytechnic State University.


Read more here:

How AI and ML Applications Will Benefit from Vector Processing - EnterpriseAI


Researchers examine the ethical implications of AI in surgical settings – VentureBeat

A new whitepaper coauthored by researchers at the Vector Institute for Artificial Intelligence examines the ethics of AI in surgery, making the case that surgery and AI carry similar expectations but diverge with respect to ethical understanding. Surgeons are faced with moral and ethical dilemmas as a matter of course, the paper points out, whereas ethical frameworks in AI have arguably only begun to take shape.

In surgery, AI applications are largely confined to machines performing tasks controlled entirely by surgeons. AI might also be used in a clinical decision support system, and in these circumstances, the burden of responsibility falls on the human designers of the machine or AI system, the coauthors argue.

Privacy is a foremost ethical concern. AI learns to make predictions from large data sets, specifically patient data in the case of surgical systems, and it's often described as being at odds with privacy-preserving practices. The Royal Free London NHS Foundation Trust, a division of the U.K.'s National Health Service based in London, provided Alphabet's DeepMind with data on 1.6 million patients without their consent. Separately, Google, whose health data-sharing partnership with Ascension became the subject of scrutiny last November, abandoned plans to publish scans of chest X-rays over concerns that they contained personally identifiable information.

Laws at the state, local, and federal levels aim to make privacy a mandatory part of compliance management. Hundreds of bills that address privacy, cybersecurity, and data breaches are pending or have already been passed in 50 U.S. states, territories, and the District of Columbia. Arguably the most comprehensive of them all, the California Consumer Privacy Act, was signed into law roughly two years ago. That's not to mention the national Health Insurance Portability and Accountability Act (HIPAA), which requires companies to seek authorization before disclosing individual health information. And international frameworks like the EU's General Data Protection Regulation (GDPR) aim to give consumers greater control over personal data collection and use.

But the whitepaper coauthors argue that measures adopted to date are limited by jurisdictional interpretations and offer incomplete models of ethics. For instance, HIPAA focuses on health care data from patient records but doesn't cover sources of data generated outside of covered entities, like life insurance companies or fitness band apps. Moreover, while the duty of patient autonomy alludes to a right to explanations of decisions made by AI, frameworks like GDPR only mandate a right to be informed and appear to lack language stating well-defined safeguards against AI decision making.

Beyond this, the coauthors sound the alarm about the potential effects of bias on AI surgical systems. Training data bias, which concerns the quality and representativeness of data used to train an AI system, could dramatically affect a preoperative risk stratification prior to surgery. Underrepresentation of demographics might also cause inaccurate assessments, driving flawed decisions such as whether a patient is treated first or offered extensive ICU resources. And contextual bias, which occurs when an algorithm is employed outside the context of its training, could result in a system ignoring nontrivial caveats like whether a surgeon is right- or left-handed.

Methods to mitigate this bias exist, including ensuring variance in the data set, applying sensitivity to overfitting on training data, and having humans in the loop to examine new data as it's deployed. The coauthors advocate the use of these measures, and of transparency broadly, to prevent patient autonomy from being undermined. "Already, an increasing reliance on automated decision-making tools has reduced the opportunity of meaningful dialogue between the healthcare provider and patient," they wrote. "If machine learning is in its infancy, then the subfield tasked with making its inner workings explainable is so embryonic that even its terminology has yet to recognizably form. However, several fundamental properties of explainability have started to emerge [that argue] machine learning should be simultaneous, decomposable, and algorithmically transparent."
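One of those mitigation steps, checking that the training data represents the populations it will be used on, can be sketched in a few lines. The column names, data, and metrics below are invented examples, not anything from the whitepaper.

```python
# Hedged sketch of a representation and per-group error audit on a training set.
# Columns and values are invented for illustration.
import pandas as pd

data = pd.DataFrame({
    "age_group": ["18-40", "18-40", "41-65", "41-65", "65+", "65+", "65+", "18-40"],
    "predicted_risk": [0, 1, 1, 0, 1, 1, 0, 0],
    "actual_outcome": [0, 1, 0, 0, 1, 0, 0, 1],
})

# Representation check: is any demographic group sparsely represented?
print(data["age_group"].value_counts(normalize=True))

# Per-group error rate: large gaps between groups suggest biased performance.
data["error"] = (data["predicted_risk"] != data["actual_outcome"]).astype(int)
print(data.groupby("age_group")["error"].mean())
```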

Despite AI's shortcomings, particularly in the context of surgery, the coauthors argue the harms AI can prevent outweigh the adoption cons. For example, in thyroidectomy, there's a risk of permanent hypoparathyroidism and recurrent nerve injury. It might take thousands of procedures with a new method to observe statistically significant changes, which an individual surgeon might never observe, at least not in a short time frame. However, a repository of AI-based analytics aggregating these thousands of cases from hundreds of sites would be able to discern and communicate those significant patterns.

"The continued technological advancement in AI will sow rapid increases in the breadths and depths of their duties. Extrapolating from the progress curve, we can predict that machines will become more autonomous," the coauthors wrote. "The rise in autonomy necessitates an increased focus on the ethical horizon that we need to scrutinize ... Like ethical decision-making in current practice, machine learning will not be effective if it is merely designed carefully by committee; it requires exposure to the real world."

Read the original post:

Researchers examine the ethical implications of AI in surgical settings - VentureBeat


DeepMind’s Newest AI Programs Itself to Make All the Right Decisions – Singularity Hub

When Deep Blue defeated world chess champion Garry Kasparov in 1997, it may have seemed artificial intelligence had finally arrived. A computer had just taken down one of the top chess players of all time. But it wasn't to be.

Though Deep Blue was meticulously programmed top-to-bottom to play chess, the approach was too labor-intensive, too dependent on clear rules and bounded possibilities to succeed at more complex games, let alone in the real world. The next revolution would take a decade and a half, when vastly more computing power and data revived machine learning, an old idea in artificial intelligence just waiting for the world to catch up.

Today, machine learning dominates, mostly by way of a family of algorithms called deep learning, while symbolic AI, the dominant approach in Deep Blue's day, has faded into the background.

Key to deep learning's success is the fact that the algorithms basically write themselves. Given some high-level programming and a dataset, they learn from experience. No engineer anticipates every possibility in code. The algorithms just figure it out.

Now, Alphabet's DeepMind is taking this automation further by developing deep learning algorithms that can handle programming tasks which have been, to date, the sole domain of the world's top computer scientists (and take them years to write).

In a paper recently published on the preprint server arXiv, a database for research papers that haven't been peer reviewed yet, the DeepMind team described a new deep reinforcement learning algorithm that was able to discover its own value function, a critical programming rule in deep reinforcement learning, from scratch.

Surprisingly, the algorithm was also effective beyond the simple environments it trained in, going on to play Atari games, a different, more complicated task, at a level that was, at times, competitive with human-designed algorithms, and achieving superhuman levels of play in 14 games.

DeepMind says the approach could accelerate the development of reinforcement learning algorithms and even lead to a shift in focus, where instead of spending years writing the algorithms themselves, researchers work to perfect the environments in which they train.

First, a little background.

Three main deep learning approaches are supervised, unsupervised, and reinforcement learning.

The first two consume huge amounts of data (like images or articles), look for patterns in the data, and use those patterns to inform actions (like identifying an image of a cat). To us, this is a pretty alien way to learn about the world. Not only would it be mind-numbingly dull to review millions of cat images, it'd take us years or more to do what these programs do in hours or days. And of course, we can learn what a cat looks like from just a few examples. So why bother?

While supervised and unsupervised deep learning emphasize the machine in machine learning, reinforcement learning is a bit more biological. It actually is the way we learn. Confronted with several possible actions, we predict which will be most rewarding based on experience, weighing the pleasure of eating a chocolate chip cookie against avoiding a cavity and a trip to the dentist.

In deep reinforcement learning, algorithms go through a similar process as they take action. In the Atari game Breakout, for instance, a player guides a paddle to bounce a ball at a ceiling of bricks, trying to break as many as possible. When playing Breakout, should an algorithm move the paddle left or right? To decide, it runs a projection (this is the value function) of which direction will maximize the total points, or rewards, it can earn.

Move by move, game by game, an algorithm combines experience and value function to learn which actions bring greater rewards and improves its play, until eventually, it becomes an uncanny Breakout player.
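A minimal sketch of the value-function idea (not DeepMind's code, and far simpler than LPG): the agent estimates the expected reward of each action in its current state, usually picks the highest, and nudges the estimate toward the reward it actually receives. For brevity this ignores discounting and future returns; the states, actions, and reward signal are invented.

```python
# Tabular sketch of a learned action-value estimate (illustrative only).
import random
from collections import defaultdict

actions = ["left", "right"]
q = defaultdict(float)          # value estimate for each (state, action) pair
alpha, epsilon = 0.1, 0.1       # learning rate and exploration rate (arbitrary)

def choose_action(state):
    # Mostly pick the action with the highest estimated value; sometimes explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def update(state, action, reward):
    # Move the value estimate toward the observed reward.
    q[(state, action)] += alpha * (reward - q[(state, action)])

# Toy interaction loop with an invented environment signal.
for step in range(1000):
    state = "ball_left" if random.random() < 0.5 else "ball_right"
    action = choose_action(state)
    reward = 1.0 if action == state.split("_")[1] else 0.0  # reward moving toward the ball
    update(state, action, reward)

print({k: round(v, 2) for k, v in q.items()})
```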

So, a key to deep reinforcement learning is developing a good value function. And that's difficult. According to the DeepMind team, it takes years of manual research to write the rules guiding algorithmic actions, which is why automating the process is so alluring. Their new Learned Policy Gradient (LPG) algorithm makes solid progress in that direction.

LPG trained in a number of toy environments. Most of these were gridworlds, literally two-dimensional grids with objects in some squares. The AI moves square to square and earns points or punishments as it encounters objects. The grids vary in size, and the distribution of objects is either set or random. The training environments offer opportunities to learn fundamental lessons for reinforcement learning algorithms.

Only in LPG's case, it had no value function to guide that learning.

Instead, LPG has what DeepMind calls a meta-learner. You might think of this as an algorithm within an algorithm that, by interacting with its environment, discovers both what to predict, thereby forming its version of a value function, and how to learn from it, applying its newly discovered value function to each decision it makes in the future.

LPG builds on prior work in the area.

Recently, researchers at the Dalle Molle Institute for Artificial Intelligence Research (IDSIA) showed their MetaGenRL algorithm used meta-learning to learn an algorithm that generalizes beyond its training environments. DeepMind says LPG takes this a step further by discovering its own value function from scratch and generalizing to more complex environments.

The latter is particularly impressive because Atari games are so different from the simple worlds LPG trained in; that is, it had never seen anything like an Atari game.

LPG is still behind advanced human-designed algorithms, the researchers said. But it outperformed a human-designed benchmark in training and even in some Atari games, which suggests it isn't strictly worse, just that it specializes in some environments.

This is where there's room for improvement and more research.

The more environments LPG saw, the more it could successfully generalize. Intriguingly, the researchers speculate that with enough well-designed training environments, the approach might yield a general-purpose reinforcement learning algorithm.

At the least, though, they say further automation of algorithm discovery, that is, algorithms learning to learn, will accelerate the field. In the near term, it can help researchers more quickly develop hand-designed algorithms. Further out, as self-discovered algorithms like LPG improve, engineers may shift from manually developing the algorithms themselves to building the environments where they learn.

Deep learning long ago left Deep Blue in the dust at games. Perhaps algorithms learning to learn will be a winning strategy in the real world too.

Update (6/27/20): Clarified description of preceding meta-learning research to include prior generalization of meta-learning in RL algorithms (MetaGenRL).

Image credit: Mike Szczepanski /Unsplash

Follow this link:

DeepMind's Newest AI Programs Itself to Make All the Right Decisions - Singularity Hub


Task Force on Artificial Intelligence – hearing to discuss use of AI in contact tracing – Lexology

On July 8, 2020, the House Financial Services Committee's Task Force on Artificial Intelligence held a hearing entitled "Exposure Notification and Contact Tracing: How AI Helps Localities Reopen Safely and Researchers Find a Cure."

In his opening remarks, Congressman Bill Foster (D-IL), chairman of the task force, stated that the hearing would discuss the essential tradeoffs that the coronavirus disease 2019 (COVID-19) pandemic was forcing on the public between life, liberty, privacy and the pursuit of happiness. Chairman Foster noted that what he called invasive artificial intelligence (AI) surveillance may save lives, but would come at a tremendous cost to personal liberty. He said that contact tracing apps that use back-end AI, which combines raw data collected from voluntarily participating COVID-19-positive patients, may adequately address privacy concerns while still capturing similar health and economic benefits as more intrusive monitoring.

Congressman Barry Loudermilk (R-GA) discussed how digital contact tracing could be more effective than manual contact tracing, but noted that it must have strong participation from people, a 40-60 percent adoption rate overall, to be effective. He said that citizens would need to trust that their privacy would not be violated. To help establish this trust, he suggested, people would need to be able to easily determine what data would be collected, who would have access to the data and how the data would be used.

Four panelists testified at this hearing. Below is a summary of each panelists testimony, followed by an overview of some of the post-testimony questions that committee members raised:

Brian McClendon, the CEO and co-founder of the CVKey Project, discussed how privacy, disclosure and opt-in data collection impact the ability to identify and isolate those infected with COVID-19. AI and machine learning require large amounts of data. He stated that while the most valuable data to combat COVID-19 can be found in the contact-tracing interviews of infected and exposed people, difficulties exist in capturing this information. For example, attempted phone calls to reach exposed individuals may go unanswered because people often do not pick up calls from unknown numbers. Mobile apps, he said, offer a way to conduct contact tracing with greater accuracy and coverage. Mr. McClendon discussed two ways that such apps could work: (1) using GPS location or (2) via low-energy Bluetooth. For the latter, Mr. McClendon explained a method developed by two large technology companies: when a user of a digital contact tracing app tests positive for COVID-19, he or she then chooses to opt in to upload non-personally identifiable information to a state-run cloud server, which would then determine whether potential exposures have occurred and provide in-app notifications to such users.
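A hedged sketch of the Bluetooth approach Mr. McClendon describes: phones exchange random identifiers over Bluetooth, a user who tests positive opts in to upload their identifiers, and each phone checks the published list against what it observed locally, on the device. The tokens and matching below are simplified placeholders, not the actual Apple/Google exposure-notification protocol, which uses rotating cryptographic keys.

```python
# Simplified illustration of decentralized exposure matching.
# Real exposure-notification protocols derive rotating identifiers from
# cryptographic keys; the random tokens below are placeholders.
import secrets

def new_token() -> str:
    return secrets.token_hex(16)

# Each phone broadcasts rotating random tokens and records tokens it hears nearby.
phone_a_broadcast = [new_token() for _ in range(5)]
phone_b_observed = set(phone_a_broadcast[:3])   # B was near A for part of the day
phone_b_observed.add(new_token())               # plus a token from some other phone

# Phone A's user tests positive and opts in: only the random tokens are uploaded.
server_published = set(phone_a_broadcast)

# Phone B downloads the published tokens and matches them locally, on the device.
matches = phone_b_observed & server_published
if matches:
    print(f"Possible exposure: {len(matches)} matching tokens; show in-app notification.")
```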

Krutika Kuppalli, M.D., an infectious diseases physician, discussed how using contact tracing can help impede the spread of infectious diseases. She noted that it is important to remember ethical considerations involving public health information, data protection and data privacy when using these technologies.

Andre M. Perry, a fellow at the Brookings Institution, began his presentation by discussing how COVID-19 has disproportionately affected Black and Latino populations, reflecting historical inequalities and structural racism. Mr. Perry identified particular concerns regarding AI and contact tracing as they pertain to structural racism and bias. These tools, he stated, are not neutral and can either exacerbate or mitigate structural racism. To address such bias, he suggested, contact tracing should include people who have generally been excluded from systems that have provided better health and economic outcomes. Further, the use of AI tools in the healthcare arena presents the same risk as in other fields: the AI is only as good as the programmers who design it. Bias in programming can lead to flaws in technology and amplify biases in the real world. Mr. Perry stated that greater recruitment and investment with Black-owned tech firms, rigorous reviews and testing for bias and more engagement with local communities is required.

Ramesh Raskar, a professor at MIT and the founder of the PathCheck Foundation, emphasized three elements during his presentation: (1) how to augment manual contact tracing with apps; (2) how to make sure apps are privacy-preserving, inclusive, trustworthy, and built using open-source methods and nonprofits; and (3) the creation of a National Pandemic Response Service. Regarding inclusivity, Mr. Raskar noted that Congress should actively require that solutions be accessible broadly and generally; contact tracing cannot be effective only for segments of the population that have access to the latest technology.

Post-testimony questions

Chairman Foster asked about limits of privacy-preserving techniques by providing an example of a person who had been isolated for a week, then interacted with only one other person, and then later received a notification of exposure: such a person likely will know the identity of the infected person. Mr. Raskar replied that data protection has different layers: confidentiality, anonymity, and then privacy. In public health scenarios, Mr. Raskar stated that today, we only care about confidentiality and not anonymity or privacy (eventually, he commented, you will have to meet a doctor).

If we were to implement a federal contact tracing program, Representative Loudermilk asked, how would we ensure citizens that they can know what data will be used and collected, and who has access? Mr. McClendon responded that under the approach developed by the two large technology companies, data is random and stored on a personal phone until the user opts in to upload random numbers to the server. The notification determination is made on the phone and the state provides the messages. The state will not know who the exposed person is until that person opts in by calling the manual contact tracing team.

Representative Maxine Waters (D-CA) asked what developers of a mobile contact tracing technology should consider to ensure that minority communities are not further disadvantaged. Mr. Perry reiterated that AI technologies have not been tested, created, or vetted by persons of color, which has led to various biases.

Congressman Sean Casten (D-IL) asked whether AI used in contact tracing is solely backward-looking or could predict future hotspots. Mr. McClendon replied that to predict the future, you need to know the past. Manual contact tracing interviews, where an infected or exposed person describes where he or she has been, would provide significant data to include in a machine-learning algorithm, enabling tracers to predict where a hotspot might occur in the future. However, privacy issues and technological incompatibility (e.g., county and state tools that are not compatible with each other) mean that a lot of data is currently siloed and even inaccessible, impeding the ability for AI to look forward.

Read more:

Task Force on Artificial Intelligence - hearing to discuss use of AI in contact tracing - Lexology


Artificial Intelligence and Satellite Technology to Enhance Carbon Tracking Measures – JD Supra

New carbon emission tracking technology will quantify emissions of greenhouse gas, holding the energy industry accountable for its CO2 output. Backed by Google, this cutting-edge initiative will be known as Climate TRACE (Tracking Real-Time Atmospheric Carbon Emissions).

Advanced AI and machine learning now make it possible to trace greenhouse gas (GHG) emissions from factories, power plants and more. By using image processing algorithms to detect carbon emissions from power plants, AI technology makes use of the growing global satellite network to develop a more comprehensive global database of power plant activity. Because most countries self-report emissions and manually compile results, scientists often rely on data that is several years out of date. Moreover, companies often underreport carbon emissions, rendering existing data inaccurate.
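The coalition's models are not described in detail here; as a loose illustration of the kind of image-processing step involved, the sketch below classifies small satellite image patches as "plume" or "no plume" with a tiny convolutional network. The architecture, labels, and random input data are entirely invented and stand in for real, labeled satellite imagery.

```python
# Invented illustration of patch-level plume classification (not Climate TRACE's model).
import torch
import torch.nn as nn

class PlumeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, 2),   # two classes: plume / no plume
        )

    def forward(self, x):
        return self.net(x)

model = PlumeClassifier()
# A batch of 8 fake RGB satellite patches, 64x64 pixels each.
patches = torch.rand(8, 3, 64, 64)
logits = model(patches)
predictions = logits.argmax(dim=1)   # 1 = plume detected in this patch (invented label)
print(predictions)
```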

Climate TRACE addresses these issues by partnering with other leaders in sustainability practices, including former U.S. Vice President Al Gore, WattTime, CarbonPlan, Carbon Tracker, Earthrise Alliance, Hudson Carbon, OceanMind, Rocky Mountain Institute, Blue Sky Analytics and Hypervine. The Climate TRACE coalition aims to help countries meet Paris Agreement targets and place the world on a path to sustainability.

The carbon tracking efforts of Climate TRACE will result in a conglomeration of data to be made available to the public, which may assist plaintiffs in climate liability cases and lead to enhanced enforcement of environmental laws. The slow pace of international climate negotiations has led to an increase in lawsuits demanding action on global warming. As of this year, 1,600 climate-related lawsuits have been filed worldwide, including 1,200 lawsuits in the United States alone. Currently, climate liability cases rely predominantly on a database run by the Carbon Disclosure Project and the Climate Accountability Institute. This database, initially released in 2013 as the Carbon Majors Report, attempts to link carbon pollution to emitters. The 2013 report pinpointed 100 producers responsible for 71% of global industrial GHG emissions. Its 2017 report, for instance, indicated that 25 corporate and state producing entities account for 51% of global industrial GHG emissions. While the Carbon Majors Report has assisted in determining the largest carbon emitters on a global scale, Climate TRACE will provide more frequent and accurate monitoring of pollutants.

Data from Climate TRACE will also help hold countries accountable to the Paris Climate Agreement, expanding upon European efforts to monitor global warming. Early last year, a space budget increase put Europe in the lead to monitor carbon from space using satellite technology. In December 2019, member governments awarded the European Space Agency $12.5 billion. This substantial increase allowed the ESA to devote $1.8 billion to Copernicus, a satellite technology program which continuously tracks Earth's atmosphere. The program allowed Europe to analyze human carbon emissions regularly. With Copernicus, the ESA became the only space agency to monitor pledges made under the Paris Climate Agreement. The Climate TRACE coalition, with members spanning three continents, will make carbon monitoring a global effort.

Climate TRACE has created a working prototype that is currently in its developmental stages. The coalition intends to release its first version of the AI project by the summer of 2021.


See the original post:

Artificial Intelligence and Satellite Technology to Enhance Carbon Tracking Measures - JD Supra


Future of AI in video games focuses on the human connection – TechTarget

App developers, students and researchers are using the transformative power of AI technologies to develop people's emotional connection to video games.

Since the 2001 introduction of the first AI digital helper Cortana in Halo, technology and AI have become pivotal to gameplay. With all the buzz around the release of a new iteration of the popular GPT-3 video game tool, IT developers are more in tune than ever with the needs of creative deployments of popular AI technology. The future of AI in video games lies in the ability of the technology to increase the human connection.

Since the dawn of chatbots and digital assistants, one critique has been universal: the helper is not human-like enough. The issue spans industries, and IT developers and startups are now building AI that is more human-like, emotional and responsive.

Christian Selchau-Hansen, CEO of enterprise software company Formation and former manager of product at social game development company Zynga, said that one of the major uses of AI in video games is the implementation of generative adversarial network (GAN) technology, image recognition and replication in character design. The ability of an algorithm to read emotion, generate emotion from text and accurately portray emotion enables a heightened level of gameplay.

"Whether it's GPT-3 or the processing and techniques of developments like deepfakes the good things that come from [these developments] are more immersive worlds," Selchau-Hansen said.

"For people to be able to interact with more immersive and complex characters, and not just have the ability to interact with them -- but create new responses based on interactions through facial expressions, language, dialogue and actions," Selchau-Hansen said.

Danny Tomsett, CEO of digital assistant platform creator UneeQ, said emotional connection is about creating a feeling between the player and the story or character, and AI allows for the closest visual representation of humans.

Visual representations of humans are not as good as meeting in real life, Tomsett said, but a model that can see your emotion -- and that you can read in turn -- can respond dynamically.

When looking toward the future of gameplay, Selchau-Hansen imagines a world where you have something akin to a physics engine during the game -- one that controls gravity, wind-resistance and thermal conductivity -- but for emotional interactions.

"You could have an emotional engine where your interactions with a [character] can make them sad, confused, scared, jealous -- and their dialogue would spring from those emotions," Selchau-Hansen said.

The gamification of AI has been a driver of the technology, with iterations of Deep Blue and AlphaGo teaching developers that perhaps the most important part of augmenting gameplay is the ability to find the sweet spot between competition and demolition. Gamers want to be challenged but still have a chance to win, because their competitors are making human-like decisions.

This idea of competition between humans and computers, a friendly tussle between players, is central to creating brand loyalty -- returning players need to be challenged with dynamic, human-like bots on the other side of the game.

Creating brand loyalty in gaming is also about eschewing flat, two-dimensional, text-based digital interfaces to unlock the power of emotion and story, Tomsett said.

Another crossover between AI and gameplay is the ability to personalize. Much like marketing campaigns and personalized promotions, the future of AI in the video game industry depends on monetizing the emotional connection between the game and the consumer. Algorithms collect data from the game -- what the player collects, what quests they follow, what skins they use -- and then suggest and tailor additional downloads that have the highest chance of winning over the player.
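
The article does not describe a specific algorithm; a minimal, hypothetical version of that personalization loop might score candidate downloads by how much their tags overlap the signals collected from a player's sessions:

    # Hypothetical sketch: rank downloadable content by overlap between its
    # tags and signals gathered from a player's sessions (items, quests, skins).
    player_signals = {"stealth", "archery", "forest", "dark_skin_pack"}

    catalog = {
        "Shadow Quiver Pack": {"stealth", "archery", "night"},
        "Dragon Arena Pass":  {"combat", "arena", "dragons"},
        "Forest Ranger Skin": {"forest", "archery", "cosmetic"},
    }

    def score(tags: set) -> float:
        # Jaccard similarity: shared tags relative to all tags involved.
        return len(tags & player_signals) / len(tags | player_signals)

    for name, tags in sorted(catalog.items(), key=lambda kv: score(kv[1]), reverse=True):
        print(f"{score(tags):.2f}  {name}")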

From gameplay to retail to IT personalization, AI is being used to create and strengthen the idea of product value. That value -- monetary, recreational or business-related -- is offered to the consumer to increase the likelihood of brand loyalty, Selchau-Hansen said.

While the future of AI in video games would naturally point to automation and generated text, the AI-generated video games now testing the fringes of current gaming technology also highlight the technology's limits.

Independent designers are toying with open-source technology to use natural language generation to create virtual games without a gaming studio. Developer Nick Walton's AI Dungeon storytelling game throws you into the development of the decision tree -- your choices change the outcome and help train the game for future players. This interactive virtual role-playing game is built on OpenAI's machine learning-based GPT-2 natural language generator, and Walton fine-tuned the network -- a model with more than 117 million parameters -- to output the game's unique story text.
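
The article does not reproduce Walton's training setup; as a rough sketch of the same mechanic, the stock GPT-2 checkpoint available through Hugging Face can drive a bare-bones text adventure in which the growing transcript is the only game state:

    # Rough sketch of an AI Dungeon-style turn using the stock "gpt2" checkpoint
    # from Hugging Face; AI Dungeon uses its own fine-tuned model, so this only
    # illustrates the mechanic, not the game's actual code.
    from transformers import pipeline, set_seed

    generator = pipeline("text-generation", model="gpt2")
    set_seed(42)

    story = "You stand at the mouth of a crypt, torch in hand."
    action = "light the torch and step inside"   # a player's free-text action
    prompt = f"{story}\n> You {action}.\n"

    out = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
    continuation = out[0]["generated_text"][len(prompt):]
    print(continuation.strip())
    # Appending each action and continuation to the transcript and re-prompting
    # is what produces the branching "decision tree" described above.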

But the game reflects many of the major issues of language generation. It tells a chaotic story, as the program cannot track what you already know or whether you have met a character before. Some of the language is nonsensical, and there is no human emotion or human decision making.

Michael Cook, a research fellow at the Royal Academy of Engineering, developed Angelina, an AI system trained to intelligently design video games.

Angelina is designed to make games based on simple theme inputs and is the first system to make 3D games within the game design engine Unity. Despite the nonsensical gameplay, somewhat comical instability and terrible UX, games by Angelina are an interesting foray into what it means to train an AI or machine learning system -- it's a peek into the mechanics of how to train computational creativity. When you input a word or phrase, Angelina accesses a word association database to create a framework for creation. A "secret" theme leads to word associations like "crypt," "dark," "hidden" and "dungeon," but it can also lead to a tangled web of characters, color and ineffective jump-scares.
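
Angelina's actual pipeline is Cook's research system; the snippet below is only an illustration of the theme-to-association step described above, with a hand-written association table standing in for the word-association database Angelina queries:

    # Illustrative only: expand a theme into associated words and rough design
    # hooks, mimicking the word-association step described for Angelina. The
    # tables are hand-written stand-ins for a real word-association database.
    ASSOCIATIONS = {
        "secret": ["crypt", "dark", "hidden", "dungeon"],
        "ocean": ["waves", "deep", "pressure", "shipwreck"],
    }

    DESIGN_HOOKS = {
        "crypt": "tileset: stone tombs", "dark": "lighting: low ambient",
        "hidden": "mechanic: concealed doors", "dungeon": "layout: branching corridors",
        "waves": "audio: surf loop", "deep": "palette: blue gradient",
        "pressure": "mechanic: oxygen timer", "shipwreck": "setpiece: broken hull",
    }

    def frame_game(theme: str) -> list:
        words = ASSOCIATIONS.get(theme, [])
        return [DESIGN_HOOKS.get(w, f"prop: {w}") for w in words]

    print(frame_game("secret"))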

It's clear that the future of AI in video games lies somewhere between generated text and finely crafted human emotion to wrangle consumers.

Here is the original post:

Future of AI in video games focuses on the human connection - TechTarget

