Joe Rogan and Steve Jobs Have a 20-Minute Chat in AI-Powered Podcast – HYPEBEAST

Artificial intelligence has allowed us to simulate all kinds of situations through computer systems. Two of its main applications are language processing and speech recognition, and now, through play.ht and podcast.ai, we can see how far the technology has come by experiencing a conversation with someone who is no longer with us.

In an entirely AI-generated podcast, podcast.ai has created a full interview between Joe Rogan and Steve Jobs. While the first bit of the podcast is clunky, with odd pauses and awkward laughing, it does settle into real conversation touching on faith, tech companies, and drugs, and at one point the AI-generated Jobs compares Adobe's services to a car whose four wheels have to be bought separately.

The crazy thing is that some parts begin to sound believable and actually keep you listening as you start to connect with what they are saying. This could be reinforced by Joe Rogan's prevalence in the current podcast sphere, and a general curiosity about what Steve Jobs would have said if the two had ever met. Have a listen below to experience the AI podcast for yourself.

In other tech news, an unopened first-generation Apple iPhone from 2007 sold at auction for $39,000 USD.

See the article here:

Joe Rogan and Steve Jobs Have a 20-Minute Chat in AI-Powered Podcast - HYPEBEAST

The next phase of AI is generative – CIO Dive

Enterprises have long sought AI for its ability to supercharge a workforce, picking up slack through automated tasks and offering a cost-effective alternative to humans for repetitive labor.

The next act in enterprise AI sees the technology becoming a standalone maker. The technology generates synthetic data to train its own models or identify groundbreaking products as solutions mature and adoption widens, as showcased in Gartner's Hype Cycle for Emerging Technologies 2021 report, published Monday.

Called "generative AI," the technology is set to reach the plateau of productivity in the next two to five years. Commercial implementations of generative AI are already at play in the enterprise and, as the technology advances through the hype cycle, non-viable use cases will fade, according to Brian Burke, research VP at Gartner.

Generative AI works by using algorithms to create a "realistic, novel version of whatever they've been trained on," Burke said. Algorithms can identify new materials with specific properties and technologies that generate synthetic data to augment research, among other use cases.

An early implementation for generative AI technology let companies identify marketing content with a higher success rate. Today, capabilities have evolved and AI can produce its own data and generate results from it in critical spaces such as the pharmaceutical industry.

During the pandemic, researchers used AI to augment data and help identify antiviral compounds and therapeutic research for treating COVID-19. The technology helped generate more data to support algorithms, given the novelty of the disease and HIPAA regulations.
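The underlying augmentation idea can be sketched in a few lines. This is a toy illustration only, not any vendor's or research team's pipeline: fit a simple statistical model to scarce real measurements, then sample synthetic points from it to enlarge the training set.

```python
import random
import statistics

# Toy illustration of synthetic data augmentation: fit a simple
# Gaussian model to a small set of real measurements, then sample
# new synthetic points from that model to enlarge the training set.
real_data = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7]

mu = statistics.mean(real_data)
sigma = statistics.stdev(real_data)

random.seed(0)
synthetic = [random.gauss(mu, sigma) for _ in range(100)]

# The augmented set is what the downstream model would train on.
augmented = real_data + synthetic
print(len(augmented))  # 106 points instead of 6
```

Real generative models (GANs, variational autoencoders, large language models) replace the Gaussian with something far more expressive, but the train-on-real, sample-synthetic loop is the same shape.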

Using AI to create can be a big differentiator for companies, said Rodrigo Liang, co-founder and CEO of SambaNova Systems. Competition can leave organizations no choice but to catch up with markets and adopt generative AI.

Despite the evolution of AI, most organizations continue to struggle with adoption.

Whether it's in-house AI or a vendor-made solution, technologies that fail to be adopted by the whole organization amount to wasted resources. AI maturity levels vary in the enterprise, and just 20% of organizations are at the highest levels of AI adoption and deployment, according to Cognizant.

Pressure from competitors and potential financial upside is making companies double down on AI financially, too.

The number of companies with AI budgets ranging from $500,000 to $5 million rose 55% year over year, according to Appen's State of AI and Machine Learning report published in June.

AI use will shift for the enterprise as it moves away from static models to more dynamic technologies.

In the past, AI models trained on a specific outcome could learn to perform a task but not necessarily get better over time, Burke said. "What we've seen evolve in terms of AI is that models are becoming more dynamic, and the data that supports those models becoming more dynamic."

Executives also struggle to account for the ethical dimensions of AI. Businesses are more likely to check an algorithm for unexpected outcomes than for its fairness or bias implications, according to the AI Adoption in the Enterprise report published by O'Reilly.

"Machine learning, data science, algorithmic approaches in general, and, yes, AI, have enormous potential to drive innovation," said Christian Beedgen, co-founder and CTO, Sumo Logic, in an email. "But like with all innovation, what really matters is how humans apply this potential."

Companies have turned to explainable AI as a way to contend with the decisions an algorithm makes, and the ethical implications of those decisions.

"As AI continues to seep into our everyday lives, it is up to humans to deeply consider the ethics behind every program they create and whether or not the ends justify the means," said Beedgen.

See the original post:

The next phase of AI is generative - CIO Dive

76% of tech leaders will increase hiring for AI, cognitive solutions, report says – TechRepublic

The growth of cognitive technologies such as artificial intelligence (AI) will lead more than 75% of tech leaders to increase hiring in IT to manage deployments, according to a new report from KPMG. In addition to hiring in IT, 64% of tech leaders said they'd increase hiring in middle management, 62% in customer service, 60% in sales, and 43% in senior management, the report said.

"The tech CEOs' commitment to hire across the board shows the strategic value they see in cognitive technologies, and they are building the organizational structure their company will require to execute their strategy," Tim Zanni, global and US chair of KPMG's Technology, Media and Telecommunications practice, said in a press release.

By the year 2021, spending on AI and cognitive technologies is expected to hit $46 billion, according to IDC data. These technologies are making waves in the form of mobile assistants and chat bots, but they are poised to further disrupt even more industries.

SEE: The Machine Learning and Artificial Intelligence Bundle (TechRepublic Academy)

According to 28% of respondents, AI will lead to better productivity and improved efficiency. Some 16% said it could lead to cost reductions, 14% said increased profitability, and 10% predicted faster innovation cycles. Better customer loyalty and faster time to market were also both predicted as outcomes of AI's growth.

Despite their impact on hiring, cognitive technologies were only the third most impactful tech trend noted in the report, cited by 10% of respondents. The Internet of Things (IoT) took first place with 20%, and robotics came in second with 11%.

A quarter of respondents felt that IoT would lead to improved business efficiencies and higher productivity. IoT would also lead to faster innovation cycles, 19% of respondents said, and 13% said it could bring cost reductions too.

"As we evolve as a networked society, IoT will transform the way we interact with technology. From an enterprise perspective this evolution will require a new framework to manage the opportunities and risk," Peter Mercieca, a management consulting leader at KPMG, said in the report.

Some 36% of respondents reported that robotics would lead to improved effectiveness, and the technology also ranked high in speeding innovation cycles. The report noted that robotics have been common in manufacturing for a long time, but are changing to become more collaborative.

These technologies are making a big splash in the enterprise, and their pace of disruption will likely quicken, the report said. It recommended that company leaders revisit their strategies for new tech, including investment and M&A plans.

Image: iStockphoto/agsandrew

More:

76% of tech leaders will increase hiring for AI, cognitive solutions, report says - TechRepublic

The Top 100 AI Startups Out There Now, and What They’re Working On – Singularity Hub

New drug therapies for a range of chronic diseases. Defenses against various cyber attacks. Technologies to make cities work smarter. Weather and wildfire forecasts that boost safety and reduce risk. And commercial efforts to monetize so-called deepfakes.

What do all these disparate efforts have in common? They're some of the solutions that the world's most promising artificial intelligence startups are pursuing.

Data research firm CB Insights released its much-anticipated fourth annual list of the top 100 AI startups earlier this month. The New York-based company has become one of the go-to sources for emerging technology trends, especially in the startup scene.

About 10 years ago, it developed its own algorithm to assess the health of private companies using publicly-available information and non-traditional signals (think social media sentiment, for example) thanks to more than $1 million in grants from the National Science Foundation.

It uses that algorithm-generated data from what it calls a company's Mosaic score, pulling together information on market trends, money, and momentum, along with other details ranging from patent activity to the latest news analysis, to identify the best of the best.

"Our final list of companies is a mix of startups at various stages of R&D and product commercialization," said Deepashri Varadharajan, a lead analyst at CB Insights, during a recent presentation on the most prominent trends among the 2020 AI 100 startups.

About 10 companies on the list are among the world's most valuable AI startups. For instance, there's San Francisco-based Faire, which has raised at least $266 million since it was founded just three years ago. The company offers a wholesale marketplace that uses machine learning to match local retailers with goods that are predicted to sell well in their specific location.

Another startup valued at more than $1 billion, referred to as a unicorn in venture capital speak, is Butterfly Network, a company on the East Coast that has figured out a way to turn a smartphone into an ultrasound machine. Backed by $350 million in private investments, Butterfly Network uses AI to power the platform's diagnostics. A more modestly funded San Francisco startup called Eko is doing something similar for stethoscopes.

In fact, there are more than a dozen AI healthcare startups on this year's AI 100 list, representing the most companies of any industry on the list. In total, investors poured about $4 billion into AI healthcare startups last year, according to CB Insights, out of a record $26.6 billion raised by all private AI companies in 2019. Since 2014, more than 4,300 AI startups in 80 countries have raised about $83 billion.

One of the most intensive areas remains drug discovery, where companies unleash algorithms to screen potential drug candidates at an unprecedented speed and breadth that was impossible just a few years ago. It has led to the discovery of a new antibiotic to fight superbugs. There's even a chance AI could help fight the coronavirus pandemic.

There are several AI drug discovery startups among the AI 100: San Francisco-based Atomwise claims its deep convolutional neural network, AtomNet, screens more than 100 million compounds each day. Cyclica is an AI drug discovery company in Toronto that just announced it would apply its platform to identify and develop novel cannabinoid-inspired drugs for neuropsychiatric conditions such as bipolar disorder and anxiety.

And then there's OWKIN out of New York City, a startup that uses a type of machine learning called federated learning. Backed by Google, the company's AI platform helps train algorithms without sharing the patient data required to provide the sort of valuable insights researchers need for designing new drugs or even selecting the right populations for clinical trials.

Privacy and data security are the focus of a number of AI cybersecurity startups, as hackers attempt to leverage artificial intelligence to launch sophisticated attacks while also trying to fool the AI-powered systems rapidly coming online.

"I think this is an interesting field because it's a bit of a cat and mouse game," noted Varadharajan. "As your cyber defenses get smarter, your cyber attacks get even smarter, and so it's a constant game of who's going to match the other in terms of tech capabilities."

Few AI cybersecurity startups match Silicon Valley-based SentinelOne in terms of private capital. The company has raised more than $400 million, with a valuation of $1.1 billion following a $200 million Series E earlier this year. The company's platform automates what's called endpoint security, referring to laptops, phones, and other devices at the end of a centralized network.

Fellow AI 100 cybersecurity companies include Blue Hexagon, which protects the edge of the network against malware, and Abnormal Security, which stops targeted email attacks, both out of San Francisco. Just down the coast in Los Angeles is Obsidian Security, a startup offering cybersecurity for cloud services.

Deepfake videos and other types of AI-manipulated media, where faces or voices are synthesized in order to fool viewers or listeners, have been a different type of ongoing cybersecurity risk. However, some firms are swapping malicious intent for benign marketing and entertainment purposes.

Now anyone can be a supermodel thanks to Superpersonal, a London-based AI startup that has figured out a way to seamlessly swap a user's face onto a fashionista modeling the latest threads on the catwalk. The most obvious use case is for shoppers to see how they will look in a particular outfit before taking the plunge on a plunging neckline.

Another British company called Synthesia helps users create videos where a talking head will deliver a customized speech or even talk in a different language. The startup's claim to fame was releasing a campaign video for the NGO Malaria Must Die showing soccer star David Beckham speak in nine different languages.

There's also a Seattle-based company, WellSaid Labs, which uses AI to produce voice-over narration, letting users choose from a library of digital voices with human pitch, emphasis, and intonation. Because every narrator sounds just a little bit smarter with a British accent.

Speaking of smarter: A handful of AI 100 startups are helping create the smart city of the future, where a digital web of sensors, devices, and cloud-based analytics ensures that nobody is ever stuck in traffic again or without an umbrella at the wrong time. At least that's the dream.

A couple of them are directly connected to Google subsidiary Sidewalk Labs, which focuses on tech solutions to improve urban design. A company called Replica was spun out just last year. It's sort of a SimCity for urban planning. The San Francisco startup uses location data from mobile phones to understand how people behave and travel throughout a typical day in the city. Those insights can then help city governments, for example, make better decisions about infrastructure development.

Denver-area startup AMP Robotics gets into the nitty gritty details of recycling by training robots on how to recycle trash, since humans have largely failed to do the job. The U.S. Environmental Protection Agency estimates that only about 30 percent of waste is recycled.

Some people might complain that weather forecasters dont even do that well when trying to predict the weather. An Israeli AI startup, ClimaCell, claims it can forecast rain block by block. While the company taps the usual satellite and ground-based sources to create weather models, it has developed algorithms to analyze how precipitation and other conditions affect signals in cellular networks. By analyzing changes in microwave signals between cellular towers, the platform can predict the type and intensity of the precipitation down to street level.
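The signal-analysis idea described above boils down to inverting a power law that relates rain rate to microwave attenuation. The sketch below is illustrative only: the coefficients are hypothetical placeholders, not ClimaCell's, and real values depend on link frequency and polarization (ITU-R P.838 tabulates them).

```python
# Illustrative inversion of the power law relating rain rate R (mm/h)
# to excess microwave attenuation A (dB/km) on a cellular link:
#     A = K * R**ALPHA
# K and ALPHA below are invented placeholders for illustration.
K, ALPHA = 0.12, 1.1

def rain_rate_from_attenuation(attenuation_db_per_km: float) -> float:
    """Estimate rain rate (mm/h) from observed excess attenuation."""
    if attenuation_db_per_km <= 0:
        return 0.0
    # Invert A = K * R**ALPHA  ->  R = (A / K)**(1 / ALPHA)
    return (attenuation_db_per_km / K) ** (1.0 / ALPHA)

# A link losing an extra 1.5 dB/km during a storm:
print(round(rain_rate_from_attenuation(1.5), 1))
```

With many tower-to-tower links per neighborhood, each acting as a rain gauge of this kind, a platform can map precipitation at street-level resolution.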

And those are just some of the highlights of what some of the world's most promising AI startups are doing.

"You have companies optimizing mining operations, warehouse logistics, insurance, workflows, and even working on bringing AI solutions to designing printed circuit boards," Varadharajan said. "So a lot of creative ways in which companies are applying AI to solve different issues in different industries."

Image Credit: Butterfly Network

See the original post:

The Top 100 AI Startups Out There Now, and What They're Working On - Singularity Hub

AI Storm Brewing – SemiEngineering

The acceleration of artificial intelligence will have big social and business implications.

AI is coming. Now what?

The answer isn't clear, because after decades of research and development, AI is finally starting to become a force to reckon with. The proof is in the M&A activity underway right now. Big companies are willing to pay huge sums to get out in front of this shift.

Here is a list of just some of the AI acquisitions announced or completed over the past few years:

Microsoft: Maluuba (natural language processing/reinforcement learning) and Netbreeze (social media monitoring).
Google: DeepMind Technologies (famous for beating the world Go champion), Moodstocks (image recognition), Clever Sense (social recommendations), and Api.ai (natural language processing).
Facebook: Face.com (facial recognition).
Intel: Itseez (machine vision), Nervana Systems (machine learning), and Movidius (machine vision).
Apple: Turi and Tuplejump (both machine learning).
Twitter: Magic Pony, Whetlab, and Madbits (all machine learning).
Salesforce: MetaMind (natural language processing) and PredictionIO (machine learning).
GE: Bit Stew (analytics) and Wise.io (machine learning).

The list goes on and on. AI has turned into an arms race among big companies, which are pouring billions of dollars into this field after a lull that lasted nearly a quarter of a century. The last big explosion in AI research was in the 1980s and early 1990s, when most companies concluded they did not have the technology resources (compute power, memory, and throughput) to develop effective AI solutions.

IBM was the big holdout, quietly developing Watson as a for-lease compute platform and showcasing it on Jeopardy (it won) and at the University of North Carolina's UNC Lineberger cancer treatment center, where Watson proved its mettle alongside a team of trained oncologists. Others are racing to catch up.

Put in perspective, there are several trends that are emerging. First, while AI is not going to take over the world like HAL in the movie classic 2001: A Space Odyssey, it will be a disruptive force that can eliminate high-paying as well as low-paying jobs. The more specialized and higher-paid, the greater the ROI. And as eSilicon Chairman Seth Neiman points out in an interview with Semiconductor Engineering, this can happen with breathtaking speed.

Second, as companies begin understanding how AI can be used, it will become obvious there is no single AI machine or architecture. When the IoT term was first introduced (Kevin Ashton, co-founder of the Auto-ID Center believes he first coined the term in a 1999 presentation, although it was Cisco that really made the term a household name) it was considered a single entity. It is now viewed as a general term that encompasses many different approaches and vertical market segments, each with its own set of architectures that may or may not interact with other market segments. AI will follow the same evolutionary path, splintering into architectures that are tailored for multiple markets.

And third, while the tech industry is still trying to wrap its arms around what this will mean, it's clear that AI is here to stay this time. The investments by both companies and governments in this field will keep this part of the market well-funded for years to come.

However, what's not clear yet is how this round of technology will mesh with society. In the past, most technology that was developed was viewed as helpful for a broad range of people. Rather than replacing people, it freed them from mundane tasks to do more creative work or to specialize further. Unlike previous technology booms, AI has the potential to displace people at all levels: truck drivers, business consultants, lawyers, accountants, and medical specialists with a dozen years of schooling.

Rather than sitting back and waiting for standards, it's imperative that tech groups at every level get out in front of this shift and help develop policies that will guide future development. In the tech industry there is always a level of hype surrounding architectural changes, but this is hardly business as usual. Done right, AI can be a big opportunity for years to come, driving continued advances in both semiconductor technology and software. Done wrong, it can have a devastating impact on jobs, and on how people use and view technology, for years to come.

Related Stories:
What Does AI Really Mean? eSilicon's chairman looks at technology advances, their limitations, and the social implications of artificial intelligence, and how it will change our world.
Happy 25th Birthday, HAL! AI has come a long way since HAL became operational.
Neural Net Computing Explodes Deep-pocket companies begin customizing this approach for specific applications, and spend huge amounts of money to acquire startups.

Read the original:

AI Storm Brewing - SemiEngineering

Are Talking Speakers the First AI Bubble? – Investopedia


Echo speaker, powered by its voice-activated virtual assistant Alexa, all of the tech heavy hitters are getting into the market with Apple Inc.'s (AAPL) HomePod coming out this fall. While lots ...

Visit link:

Are Talking Speakers the First AI Bubble? - Investopedia

Conversational AI and the road ahead – TechCrunch

Katherine Bailey Crunch Network Contributor

Katherine Bailey is principal data scientist at Acquia.

More posts by this contributor:

In recent years, we've seen an increasing number of so-called intelligent digital assistants being introduced on various devices. At the recent CES, both Hyundai and Toyota announced new in-car assistants. Although the technology behind these applications keeps getting better, there's still a tendency for people to be disappointed by their capabilities: the expectation of intelligence is not being met.

Consider a classic test sentence: "The city councilmen refused the demonstrators a permit because they feared violence."

What does the word "they" refer to here, the councilmen or the demonstrators? What if instead of "feared" we wrote "advocated"? This changes what we understand by the word "they." Why? It is clear to us that councilmen are more likely to fear violence, whereas demonstrators are more likely to advocate it. This information, which is vital for disambiguating the pronoun "they," is not in the text itself, which makes these problems extremely difficult for AI programs.

The first ever Winograd Schema Challenge was held last July, and the winning algorithm achieved a score on the challenge that was a bit better than random.

There's a technique for representing the words of a language that's proving incredibly useful in many NLP tasks, such as sentiment analysis and machine translation. The representations are known as word embeddings: mathematical representations of words that are trained from millions of examples of word usage in order to capture meaning. This is done by capturing relationships between words. To use a classic example, a good set of representations would capture the relationship "king is to man as queen is to woman" by ensuring that a particular mathematical relationship holds between the respective vectors (specifically, king − man + woman ≈ queen).

Such vectorized representations are at the heart of Google's new translation system, although they are representations of entire sentences, not just words. The new system reduces translation errors by 55 to 85 percent on several major language pairs and can perform zero-shot translation: translation between language pairs for which no training data exists.

Given all this, it may seem surprising to hear Oren Etzioni, a leading AI researcher with a particular focus on NLP, quip: "When AI can't determine what 'it' refers to in a sentence, it's hard to believe that it will take over the world."

So, AI can perform adequate translations between language pairs it was never trained on, but it can't determine what "it" refers to? How can this be?

When hearing about how vectorized representations of words and sentences work, it can be tempting to think they really are capturing meaning in the sense that there is some understanding happening. But this would be a mistake. The representations are derived from examples of language use. Our use of language is driven by meaning. Therefore, the derived representations naturally reflect that meaning. But the AI systems learning such representations have no direct access to actual meaning.

For the purposes of many NLP tasks, lack of access to actual meaning is not a serious problem.

Not understanding what "it" refers to in a sentence is not going to have an enormous effect on translation accuracy. It might mean "il" is used instead of "elle" when translating into French, but that's probably not a big deal.

However, problems arise when trying to create a conversational AI:

Screenshot from the sample bot you can create with IBM's conversation service by following this tutorial.

Understanding the referents of pronouns is a pretty important skill for holding conversations. As stated above, the training data used to train AIs that perform NLP tasks does not include the necessary information for disambiguating these words. That information comes from knowledge about the world. Whether it's necessary to actually act as an embodied entity in the world, or simply to have vast amounts of common-sense knowledge programmed in, to glean the necessary information is still an open question. Perhaps it's something in between.

Terry Winograd's early natural language understanding program SHRDLU restricted itself to statements about a world made up of blocks. Image by Ksloniewski (own work), CC BY-SA 4.0, via Wikimedia Commons.

But there are ways of enhancing such conversational AI experiences even without solving natural language understanding (which may take decades, or longer). The image above, showing a bot not understanding "now turn them back on" when the immediately prior request was "turn off the windshield wipers," demonstrates how disappointing it is when a totally unambiguous pronoun cannot be understood. That is definitely solvable with today's technology.
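A minimal sketch of that fix, assuming nothing about any vendor's stack: keep a dialogue state that remembers the last entity acted on and substitute it for pronouns. The command grammar and entity extraction here are deliberately naive and invented for illustration.

```python
# Minimal dialogue state for resolving "them"/"it" in device commands.
# The stop-word list and command grammar are toy assumptions.
class DialogueState:
    def __init__(self):
        self.last_entity = None  # most recently mentioned device

    def interpret(self, utterance: str) -> str:
        words = utterance.lower().split()
        if "them" in words or "it" in words:
            # Pronoun: fall back to the remembered entity.
            if self.last_entity is None:
                return "clarify: which device?"
            entity = self.last_entity
        else:
            # Naive entity extraction: drop command words, keep the rest.
            entity = " ".join(w for w in words
                              if w not in {"turn", "on", "off", "now",
                                           "back", "the"})
            self.last_entity = entity
        action = "on" if "on" in words else "off"
        return f"turn {action}: {entity}"

state = DialogueState()
print(state.interpret("turn off the windshield wipers"))
print(state.interpret("now turn them back on"))
```

Even this crude memory handles the windshield-wiper exchange above; production assistants do the same thing with far richer slot-tracking and coreference models.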

Read more here:

Conversational AI and the road ahead - TechCrunch

Google is helping fund AI news writers in the UK and Ireland – The Verge

Google is giving the Press Association news agency a grant of £706,000 ($806,000) to start writing stories with the help of artificial intelligence. The money is coming out of the tech giant's Digital News Initiative fund, which supports digital journalism in Europe. The PA supplies news stories to media outlets all over the UK and Ireland, and will be working with a startup named Urbs Media to produce 30,000 local stories a month with the help of AI.

The editor-in-chief of the Press Association, Peter Clifton, explained to The Guardian that the AI articles will be the product of collaboration with human journalists. Writers will create detailed story templates for topics like crime, health, and unemployment, and Urbs Media's Radar tool (short for Reporters And Data And Robots) will fill in the blanks and help localize each article. This sort of workflow has been used by media outlets for years; the Los Angeles Times has used AI to write news stories about earthquakes since 2014.
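The template-and-data workflow can be sketched in a few lines. The field names and figures below are invented for illustration; Radar's actual templates and data sources are not public.

```python
# A journalist writes the template once; software fills in
# locality-specific figures pulled from a public dataset.
TEMPLATE = ("Unemployment in {place} {direction} to {rate}% in {month}, "
            "{comparison} the national average.")

def localize(row: dict) -> str:
    """Render one local story from one row of data."""
    direction = "rose" if row["rate"] > row["prev_rate"] else "fell"
    comparison = "above" if row["rate"] > row["national"] else "below"
    return TEMPLATE.format(place=row["place"], direction=direction,
                           rate=row["rate"], month=row["month"],
                           comparison=comparison)

print(localize({"place": "Leeds", "rate": 4.2, "prev_rate": 4.5,
                "national": 3.9, "month": "June"}))
```

Run over thousands of rows, one template yields thousands of localized stories, which is how a small team can plausibly reach 30,000 articles a month.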

"Skilled human journalists will still be vital in the process," said Clifton, "but Radar allows us to harness artificial intelligence to scale up to a volume of local stories that would be impossible to provide manually."

The money from Google will also be used to make tools for scraping information from public databases in the UK, like those generated by local councils and the National Health Service. The Radar software will also auto-generate graphics for stories, as well as add relevant videos and pictures. The software will start being used from the beginning of next year.

Some reporters in the UK, though, are skeptical about the new scheme. Tim Dawson, president of the National Union of Journalists, told The Guardian: "The real problem in the media is too little bona fide reporting. I don't believe that computer whizzbangery is going to replace that. What I'm worried about in my capacity as president of the NUJ is something that ends up with third-rate stories which look as if they are something exciting, but are computer-generated so [news organizations] can get rid of even more reporters."

Visit link:

Google is helping fund AI news writers in the UK and Ireland - The Verge

The Dark Side of Big Techs Funding for AI Research – WIRED

Last week, prominent Google artificial intelligence researcher Timnit Gebru said she was fired by the company after managers asked her to retract or withdraw her name from a research paper, and she objected. Google maintains that she resigned, and Alphabet CEO Sundar Pichai said in a company memo on Wednesday that he would investigate what happened.

The episode is a pointed reminder of tech companies' influence and power over their field. AI underpins lucrative products like Google's search engine and Amazon's virtual assistant Alexa. Big companies pump out influential research papers, fund academic conferences, compete to hire top researchers, and own the data centers required for large-scale AI experiments. A recent study found that the majority of tenure-track faculty at four prominent universities that disclose funding sources had received backing from Big Tech.

Ben Recht, an associate professor at the University of California, Berkeley, who has spent time at Google as visiting faculty, says his fellow researchers sometimes forget that companies' interest doesn't stem only from a love of science. "Corporate research is amazing, and there have been amazing things that came out of Bell Labs and PARC and Google," he says. "But it's weird to pretend that academic research and corporate research are the same."

Ali Alkhatib, a research fellow at the University of San Francisco's Center for Applied Data Ethics, says the questions raised by Google's treatment of Gebru risk undermining all of the company's research. "It feels precarious to cite because there may be things behind the scenes, which they weren't able to talk about, that we learn about later," he says.

At any moment a company can spike your work or shape it so it functions more as PR than as knowledge production in the public interest.

Meredith Whittaker, faculty director, AI Now Institute

Alkhatib, who previously worked in Microsoft's research division, says he understands that corporate research comes with constraints. But he would like to see Google make visible changes to win back trust from researchers inside and outside the company, perhaps by insulating its research group from other parts of Google.

The paper that led to Gebru's exit from Google highlighted ethical questions raised by AI technology that works with language. Google's head of research, Jeff Dean, said in a statement last week that it "didn't meet our bar for publication."

Gebru has said managers may have seen the work as threatening to Google's business interests, or as an excuse to remove her for criticizing the lack of diversity in the company's AI group. Other Google researchers have said publicly that Google appears to have used its internal research review process to punish her. More than 2,300 Google employees, including many AI researchers, have signed an open letter demanding the company establish clear guidelines on how research will be handled.

Meredith Whittaker, faculty director at New York University's AI Now Institute, says what happened to Gebru is a reminder that, although companies like Google encourage researchers to consider themselves independent scholars, corporations prioritize the bottom line above academic norms. "It's easy to forget, but at any moment a company can spike your work or shape it so it functions more as PR than as knowledge production in the public interest," she says.

Whittaker worked at Google for 13 years but left in 2019, saying the company had retaliated against her for organizing a walkout over sexual harassment and had undermined her work raising ethical concerns about AI. She helped organize employee protests against an AI contract with the Pentagon that the company ultimately abandoned, although it has taken up other defense contracts.

Machine learning was an obscure dimension of academia until around 2012, when Google and other tech companies became intensely interested in breakthroughs that made computers much better at recognizing speech and images.

The search and ads company, quickly followed by rivals such as Facebook, hired and acquired leading academics, and urged them to keep publishing papers in between work on company systems. Even traditionally tight-lipped Apple pledged to become more open with its research, in a bid to lure AI talent. Papers with corporate authors and attendees with corporate badges flooded the conferences that are the field's main publication venues.

Read the original here:

The Dark Side of Big Techs Funding for AI Research - WIRED

Podcast: Doctors have to think about sex; AI text generators spread ‘fake news’? Coffee can indeed make you poop – Genetic Literacy Project

An ER physician says doctors have to consider biological sex to properly care for their patients. Coffee can send some people to the bathroom, but probably not for the reason we've been led to believe. AI-powered text generators can write realistic news stories, fueling concerns that the technology will encourage the spread of misinformation online.

Join geneticist Kevin Folta and GLP editor Cameron English on this episode of Science Facts and Fallacies as they break down these latest news stories:

"When physicians don't properly consider biological sex, patients are prescribed incorrect treatments and suffer entirely preventable consequences," says Alyson McGregor, an associate professor of emergency medicine at the Warren Alpert Medical School of Brown University.

The problem runs through our health care system, affecting millions of patients, and stems from the fact that doctors often prescribe multiple medications to female patients without recognizing female sex as an independent risk factor for serious drug interactions, McGregor notes. This occurs because women are more likely to have multiple physicians prescribing medications, each possibly unaware of all the relevant drugs unless the patient reports them.

Is there a way to correct the situation and prevent needless suffering?

Scientists and educators spend a considerable amount of time combating the spread of misinformation online, and their jobs may get much harder in the coming years as text generators powered by artificial intelligence become more widely used. These applications can perform word association, answer questions and, perhaps most importantly, comprehend related concepts.

The latest iteration of the technology developed by OpenAI was able to write 200-500 word sample news articles that were difficult to distinguish from news reports written by humans. There are some inherent risks in the technology, but AI-powered text generators are also poised to do a lot of good.
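
OpenAI's generators are large neural networks, but the underlying idea of producing text from learned word associations can be illustrated with a far simpler, and far less fluent, statistical model: a word-level Markov chain. Everything below is a toy sketch for illustration, not OpenAI's method:

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each `order`-word prefix to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, order=2, length=30, seed=42):
    """Walk the chain: start from a random prefix, repeatedly sample a next word."""
    rng = random.Random(seed)
    state = rng.choice(list(model))
    out = list(state)
    for _ in range(length):
        followers = model.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the flood warning was issued after heavy rain and "
          "the flood warning was lifted after the rain stopped")
model = build_model(corpus)
print(generate(model))
```

A model this small only remixes its tiny corpus; the concern raised above is that modern neural generators do the same kind of statistical continuation so fluently that the output passes for human-written reporting.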

It's a common joke you've probably heard in your favorite movie or TV show: that first morning cup of coffee makes you poop. While it may not be as universal as implied by pop culture, this reaction to coffee is real. Caffeine might be one of the culprits. However, multiple (albeit small) studies show that coffee stimulates several physiological responses that can send you to the bathroom in short order.

Subscribe to the Science Facts and Fallacies Podcast on iTunes and Spotify.

Kevin M. Folta is a professor in the Horticultural Sciences Department at the University of Florida. Follow Professor Folta on Twitter @kevinfolta

Cameron J. English is the GLP's managing editor. Follow him on Twitter @camjenglish

Follow this link:

Podcast: Doctors have to think about sex; AI text generators spread 'fake news'? Coffee can indeed make you poop - Genetic Literacy Project

Artificially Intelligent? The Pros and Cons of Using AI Content on Your Law Firm's Website – JD Supra

Artificial intelligence (AI) is powerfuland the use of it for content generation is on the rise. In fact, some experts estimate that as much as 90 percent of online content may be generated by AI algorithms by 2026.

Many of the popular AI content generators produce well-written, informative content. But is it the right choice for your firm? Before you decide, let's consider the pros and cons of using this unique sort of copy with your digital marketing.

This article explains how AI content generators work, the pros and cons of AI-generated content, and a few tips for utilizing AI content in your digital marketing workflow.

Consumer-facing artificial intelligence tools are pretty straightforward, as far as the consumer is concerned. You provide some inputs, and the machine provides some outputs.

Here's how it works with content writing. You generally provide the AI generator with a topic and keywords. You can usually select the format you'd like the output to take, such as a blog post or teaser copy. Then, it's as simple as clicking "GO."

The content generator will scrape the web and draft copy for your needs. Some tools can take existing content and rewrite it, which can make content marketing a lot easier.

Not all AI content generators cost money, but you'll need to pay something to access the better tools, or to produce a lot of content.

If you're excited about the possibilities, great! There are some significant benefits to AI content generators.

Here are a few pros of AI content tools:

To sum up, AI content tools can quickly produce natural-sounding copy at a fraction of the cost of paying a real copywriter.

There are several important drawbacks to consider with AI-generated content. Speed and cost aren't everything when it comes to content generation.

Here are several cons that come with using AI content tools:

AI tools can be hit-or-miss when it comes to empathy and accuracy. Law firms should be very careful when publishing this type of content. There are also serious SEO concerns with using AI content.

Overall, it's clear that AI-generated content can provide value. The question is how best to incorporate AI content into your digital marketing efforts.

Here are a few best practices if you choose to use AI-generated content.

All AI-generated content should be reviewed by a real human being prior to publication. We recommend hiring a legal professional to review and edit AI copy. A copywriter can help smooth the rough edges, too. Because the content is already written, the hourly rate you'll pay these professionals should be minimal.

Don't use AI-generated content on your website. This type of tool should be a last resort. If you do use machine-generated copy on your website, make sure to block it from being crawled to avoid search engine penalties. Your website developer can advise on the best way to do this.

Do not hire an agency that brags about AI content as a core strategy. SEO and web development companies should be very aware of the risks that come with using AI content. If they suggest AI-generated content, ask them how they plan to protect your firm against search engine penalties, and don't work with them if they don't have a good answer.

Our current position is that AI-generated content can be helpful for short blurbs, such as newsletters to clients. All AI content should only be deployed with human oversight.

We recommend against using AI-generated content for website copy. If it must be used, it's important to work with a developer or agency that understands how to communicate with search engines so you aren't penalized for using AI tools.

Read the original post:

Artificially Intelligent? The Pros and Cons of Using AI Content on Your Law Firm's Website - JD Supra

Tinder wants AI to set you up on a date – BBC News – BBC News



View original post here:

Tinder wants AI to set you up on a date - BBC News - BBC News

The Android Of Self-Driving Cars Built A 100,000X Cheaper Way To Train AI For Multiple Trillion-Dollar Markets – Forbes

Level 5 self-driving means autonomous cars can drive themselves anywhere, at any time, in any conditions.

How do you beat Tesla, Google, Uber and the entire multi-trillion dollar automotive industry with massive brands like Toyota, General Motors, and Volkswagen to a full self-driving car? Just maybe, by finding a way to train your AI systems that is 100,000 times cheaper.

It's called Deep Teaching.

Perhaps not surprisingly, it works by taking human effort out of the equation.

And Helm.ai says it's the key to unlocking autonomous driving. Including cars driving themselves on roads they've never seen ... using just one camera.

"Our Deep Teaching technology trains without human annotation or simulation," Helm.ai CEO Vladislav Voroninski told me recently on the TechFirst podcast. "And it's on a similar level of effectiveness as supervised learning, which allows us to actually achieve higher levels of accuracy as well as generalization ... than the traditional methods."

Artificial intelligence runs on data the way an army marches on its stomach. Most self-driving car projects use annotated data, Voroninski says.

That means thousands upon thousands of images and videos that a human has viewed and labeled, perhaps identifying things like "lane" or "human" or "truck." Labeling images costs at least dollars per image, which means the cost of annotation becomes the bottleneck.

"The cost of annotation is about a hundred thousand X more than the cost of simply processing an image through a GPU," Voroninski says.
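
A quick back-of-envelope calculation shows why that ratio is crippling. The per-image GPU cost and the dataset size below are assumed figures for illustration; only the 100,000x multiplier comes from Voroninski's quote.

```python
# Back-of-envelope cost comparison for a hypothetical training set.
gpu_cost_per_image = 0.00001       # assumed: dollars to push one image through a GPU
annotation_multiplier = 100_000    # claimed ratio of annotation cost to GPU cost
images = 100_000_000               # hypothetical training-set size

unlabeled_cost = images * gpu_cost_per_image
labeled_cost = unlabeled_cost * annotation_multiplier

print(f"GPU-only processing: ${unlabeled_cost:,.0f}")
print(f"Human-annotated:     ${labeled_cost:,.0f}")
```

Under these assumed numbers, the same hundred million images cost on the order of a thousand dollars to process but a hundred million dollars to annotate, which is exactly the bottleneck the article describes.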

And that means that even with budgets of tens of billions of dollars, you're going to be challenged to drive enough training data through your AI to make it smart enough to approach level five autonomy: full capability to drive anywhere at any time in any conditions.

The other problem with level five?

You pretty much have to invent general artificial intelligence to make it happen.

"If you mean level five like literally going anywhere in a sense of being able to go off-roading in a jungle or driving on the moon ... then I think that an AI system that can do that would be on par with a human in many ways," Voroninski told me. "And potentially could be AI complete, meaning that it could be as hard as solving general intelligence."

Fortunately, a high-functioning level four self-driving system is pretty much all we need: the ability to drive most places at most times in most conditions.

That will unlock our ability to get driven: to reclaim thousands of hours spent in cars for leisure and work. That will also unlock fractional car ownership and much more cost-effective ride-sharing, plus a host of other applications.

And multiple other trillion dollar markets, including autonomous robots, delivery robots, and more.

So how does deep teaching work?

Deep teaching uses compressive sensing and sophisticated priors to scale limited information into deep insights. It's essentially a shortcut to a form of intelligence. Similar technologies helped us massively drop the cost of mapping the human genome, discover the structure of DNA, and have been used to speed up MRI (magnetic resonance imaging) by a factor of ten.

"Science is full of these kinds of reconstruction problems where you observe information, indirect information about some object of interest, and you want to recover the structure of that object from that indirect information," Voroninski says. "Compressive sensing is an area of research which solves these reconstruction problems with a lot less data than people previously thought possible, by incorporating certain structural assumptions about the object of interest into the reconstruction process."
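
The article doesn't disclose Helm.ai's actual algorithms, but the flavor of compressive sensing, recovering a signal from far fewer measurements than unknowns by assuming it is sparse, can be sketched with orthogonal matching pursuit. All sizes and the algorithm choice here are illustrative assumptions:

```python
import numpy as np

# Toy compressive-sensing recovery via orthogonal matching pursuit (OMP).
rng = np.random.default_rng(0)
n, m, k = 50, 200, 4                           # measurements, unknowns, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)   # random sensing matrix
x = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
x[support] = 1.0 + rng.random(k)               # k nonzero entries, well away from zero
y = A @ x                                      # only n << m indirect measurements

# OMP: greedily pick the column most correlated with the residual,
# then re-fit the selected columns by least squares.
residual, idx = y.copy(), []
for _ in range(k):
    idx.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    residual = y - A[:, idx] @ coef

x_hat = np.zeros(m)
x_hat[idx] = coef
print("exact support recovered:", sorted(idx) == sorted(support.tolist()))
```

The "structural assumption" here is sparsity (only k of the m unknowns are nonzero), which is what lets 50 measurements pin down 200 unknowns.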

Those structural assumptions include priors: a priori assumptions that a system can take for granted about the nature of reality.

One example: object permanence.

A car doesn't just stop existing when it passes behind a truck, but a self-driving AI system without knowledge of this particular prior, one that human babies learn in their infancy, wouldn't necessarily know that. Supplying these priors speeds up training, and that makes autonomous driving systems smarter.
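
As a toy sketch (not Helm.ai's implementation), a hand-coded object-permanence prior in a tracker might coast an occluded object forward along its last known velocity instead of immediately dropping it:

```python
class PermanenceTracker:
    """Keeps a tracked object 'alive' for a few frames after its detection
    disappears behind an occluder -- a hand-coded object-permanence prior
    rather than something a network must learn from data."""

    def __init__(self, max_coast=5):
        self.max_coast = max_coast   # frames to assume the object still exists
        self.pos = None              # last (x, y) position, or None
        self.vel = (0.0, 0.0)        # last observed per-frame displacement
        self.coasting = 0

    def update(self, detection):
        """detection is an (x, y) tuple, or None when the detector sees nothing."""
        if detection is not None:
            if self.pos is not None:
                self.vel = (detection[0] - self.pos[0], detection[1] - self.pos[1])
            self.pos, self.coasting = detection, 0
        elif self.pos is not None and self.coasting < self.max_coast:
            # No detection: assume the object persists and extrapolate.
            self.pos = (self.pos[0] + self.vel[0], self.pos[1] + self.vel[1])
            self.coasting += 1
        else:
            self.pos = None          # give up after max_coast blank frames
        return self.pos
```

Feeding it two sightings of a car and then several blank frames (the truck occlusion), the tracker keeps predicting a plausible position for a few frames before finally dropping the track.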

"There are about 20 similar concepts that our brains use to infer the state of the world according to our eyes," Voroninski says. Supplying enough of these repeatedly useful concepts is critical to deep teaching.

Tesla is about five years ahead of its competition, according to automotive industry consultant Katrin Zimmermann.

That's enabled Helm.ai's system to drive Page Mill Road, near Skyline Boulevard in the Bay Area, with just one camera and one GPU. It's a curvy, steep mountain road that the system wasn't trained on (it received no data or images from that route, Voroninski says) but was able to navigate with ease and at reasonable speed.

And frankly, that's mostly what we need.

We don't need a system that can off-road or work in the worst blizzard-and-ice conditions. For effective and useful self-driving, we need a system that can handle 99% of roads and conditions, which probably covers a much higher percentage of our overall driving, especially when commuting.

In that sense, making a system that's safer than humans is "not insanely difficult," Voroninski says. After all, AI doesn't drink and drive.

But the autonomous bar is actually higher than that.

"Simply achieving a system that has safety levels on par with a human is actually fairly tractable, in part because human failure modes are somewhat preventable, you know, things like inattention or aggressive driving, etc.," Voroninski told me. "But in truth even achieving that level of safety is not sufficient to launch a scalable fleet. Really what you need is something that's much safer than a human."

After all, lawyers exist.

And liability for robotic autonomous systems is going to be an issue.

Waymo is Google's self-driving platform.

"We currently still lack the legal and regulatory frameworks to deploy L5 technologies at scale both nationally and internationally," says Katrin Zimmermann, a managing director at automotive consulting group TLGG Consulting. "Technology might enable you to drive in theory, but policy will allow you to drive in practice."

When solved, however, there are multiple trillion-dollar industries to address. Helm.ai is building technology for self-driving cars, naturally, but the technology is not only for personal vehicles or self-driving taxis. Its also for shipping. Delivery robots for last mile service. Service vehicles like street cleaners. Industrial machines that can navigate autonomously.

Solving safe and reliable autonomy unlocks a Pandora's box of capability, and none too soon. We need autonomous systems for environmental reclamation on a global scale, safer manufacturing at lower cost, and a hundred other applications.

Pandora's box, of course, is a mixed blessing. Unlocking autonomy puts hundreds of millions of jobs at risk. Engineering a solution for that will require politicians as well as scientists.

For now, Helm.ai is focused on self-driving and focused on shipping its technology to any car brand that wants it.

"What we're looking to do is really to solve the critical AI piece of the puzzle for self-driving cars and license the resulting software to auto manufacturers and fleets," Voroninski says. "So you can sort of think about what we're doing as kind of an Android model for self-driving cars."

Read the full transcript of our conversation.

Read the rest here:

The Android Of Self-Driving Cars Built A 100,000X Cheaper Way To Train AI For Multiple Trillion-Dollar Markets - Forbes

Graphcore’s AI chips now backed by Atomico, DeepMind’s Hassabis – TechCrunch

Is AI chipmaker Graphcore out to eat Nvidia's lunch? Co-founder and CEO Nigel Toon laughs at that interview opener, perhaps because he sold his previous company to the chipmaker back in 2011.

"I'm sure Nvidia will be successful as well," he ventures. "They're already being very successful in this market ... And being a viable competitor and standing alongside them, I think that would be a worthy aim for ourselves."

Toon also flags what he couches as an "interesting absence" in the competitive landscape vis-a-vis other major players that you'd expect to be there, e.g. Intel. (Though clearly Intel is spending to plug the gap.)

A recent report by analyst firm Gartner suggests AI technologies will be in almost every software product by 2020. The race for more powerful hardware engines to underpin the machine-learning software tsunami is, very clearly, on.

"We started on this journey rather earlier than many other companies," says Toon. "We're probably two years ahead, so we've definitely got an opportunity to be one of the first people out with a solution that is really designed for this application. And because we're ahead we've been able to get the excitement and interest from some of these key innovators who are giving us the right feedback."

Bristol, UK-based Graphcore has just closed a $30 million Series B round, led by Atomico, fast-following a $32M Series A in October 2016. It's building dedicated processing hardware plus a software framework for machine learning developers to accelerate building their own AI applications, with the stated aim of becoming "the leader in the market for machine intelligence processors."

In a supporting statement, Atomico partner Siraj Khaliq, who is joining the Graphcore board, talks up its potential as being to "accelerate the pace of innovation itself." "Graphcore's first IPU delivers one to two orders of magnitude more performance over the latest industry offerings, making it possible to develop new models with far less time waiting around for algorithms to finish running," he adds.

Toon says the company saw a lot of investor interest after uncloaking at the time of its Series A last October, hence it decided to do an earlier than planned Series B. "That would allow us to scale the company more quickly, support more customers, and just grow more quickly," he tells TechCrunch. "And it still gives us the option to raise more money next year to then really accelerate that ramp after we've got our product out."

The new funding brings on board some new high-profile angel investors, including DeepMind co-founder Demis Hassabis and Uber chief scientist Zoubin Ghahramani. So you can hazard a pretty educated guess as to which tech giants Graphcore might be working closely with during the development phase of its AI processing system (albeit Toon is quick to emphasize that angels such as Hassabis are investing in a personal capacity).

"We can't really make any statements about what Google might be doing," he adds. "We haven't announced any customers yet but we're obviously working with a number of leading players here. And we've got the support from these individuals, from which you can infer there's quite a lot of interest in what we're doing."

Other angels joining the Series B include OpenAI's Greg Brockman, Ilya Sutskever, Pieter Abbeel and Scott Gray. While existing Graphcore investors Amadeus Capital Partners, Robert Bosch Venture Capital, C4 Ventures, Dell Technologies Capital, Draper Esprit, Foundation Capital, Pitango and Samsung Catalyst Fund also participated in the round.

Commenting in a statement, Uber's Ghahramani argues that current processing hardware is holding back the development of alternative machine learning approaches that he suggests could contribute to radical leaps forward in machine intelligence.

"Deep neural networks have allowed us to make massive progress over the last few years, but there are also many other machine learning approaches," he says. "A new type of hardware that can support and combine alternative techniques, together with deep neural networks, will have a massive impact."

Graphcore has raised around $60M to date, with Toon saying its now 60-strong team has been working in earnest on the business for a full three years, though the company's origins stretch back as far as 2013.

Co-founders Nigel Toon (CEO, left) and Simon Knowles (CTO, right)

In 2011 the co-founders sold their previous company, Icera, which did baseband processing for 2G, 3G and 4G cellular technology for mobile comms, to Nvidia. "After selling that company we started thinking about this problem and this opportunity. We started talking to some of the leading innovators in the space and started to put a team together around about 2013," he explains.

Graphcore is building what it calls an IPU, aka an "intelligence processing unit," offering dedicated processing hardware designed for machine learning tasks, vs the serendipity of repurposed GPUs which have been helping to drive the AI boom thus far. Or indeed the vast clusters of CPUs otherwise needed (but not well suited) for such intensive processing.

It's also building graph-framework software for interfacing with the hardware, called Poplar, designed to mesh with different machine learning frameworks to enable developers to easily tap into a system that it claims will increase the performance of both machine learning training and inference by 10x to 100x vs the fastest systems today.

Toon says it's hoping to get the IPU in the hands of early access customers by the end of the year. "That will be in a system form," he adds.

"Although at the heart of what we're doing is we're building a processor, we're building our own chip (leading edge process, 16 nanometer) we're actually going to deliver that as a system solution, so we'll deliver PCI express cards and we'll actually put that into a chassis so that you can put clusters of these IPUs all working together to make it easy for people to use."

"Through next year we'll be rolling out to a broader number of customers. And hoping to get our technology into some of the larger cloud environments as well so it's available to a broad number of developers."

Discussing the difference between the design of its IPU vs GPUs that are also being used to power machine learning, he sums it up thus: "GPUs are kind of rigid, locked together, everything doing the same thing all at the same time, whereas we have thousands of processors all doing separate things, all working together across the machine learning task."

"The challenge that [processing via IPUs] throws up is to actually get those processors to work together, to be able to share the information that they need to share between them, to schedule the exchange of information between the processors and also to create a software environment that's easy for people to program. That's really where the complexity lies and that's really what we have set out to solve."

"I think we've got some fairly elegant solutions to those problems," he adds. "And that's really what's causing the interest around what we're doing."

He says Graphcore's team is aiming for a completely seamless interface between its hardware, via its graph-framework, and widely used high-level machine learning frameworks, including TensorFlow, Caffe2, MXNet and PyTorch.

"You use the same environments, you write exactly the same model, and you feed it through what we call Poplar [a C++ framework]," he notes. "In most cases that will be completely seamless."

Although he confirms that developers working more outside the current AI mainstream, say by trying to create new neural network structures, or working with other machine learning techniques such as decision trees or Markov fields, may need to make some manual modifications to make use of its IPUs.

"In those cases there might be some primitives or some library elements that they need to modify," he notes. "The libraries we provide are all open so they can just modify something, change it for their own purposes."

The apparently insatiable demand for machine learning within the tech industry is being driven at least in part by a major shift in the type of data that needs to be understood, from text to pictures and video, says Toon. Which means there are increasing numbers of companies that really need machine learning. "It's the only way they can get their head around and understand this sort of unstructured data that's sitting on their website," he argues.

Beyond that, he points to various emerging technologies and complex scientific challenges it's hoped could also benefit from accelerated development of AI, from autonomous cars to drug discovery with better medical outcomes.

"A lot of cancer drugs are very invasive and have terrible side effects, so there's all kinds of areas where this technology can have a real impact," he suggests. "People look at this and think it's going to take 20 years [for AI-powered technologies to work] but if you've got the right hardware available [development could be sped up]."

"Look at how quickly Google Translate has got better using machine learning ... and that same acceleration I think can apply to some of these very interesting and important areas as well."

In a supporting statement, DeepMind's Hassabis goes so far as to suggest that dedicated AI processing hardware might offer a leg up to the sci-fi holy grail goal of developing artificial general intelligence (vs the more narrow AIs that comprise the current cutting edge).

"Building systems capable of general artificial intelligence means developing algorithms that can learn from raw data and generalize this learning across a wide range of tasks. This requires a lot of processing power, and the innovative architecture underpinning Graphcore's processors holds a huge amount of promise," he adds.

Read more:

Graphcore's AI chips now backed by Atomico, DeepMind's Hassabis - TechCrunch

AI paired with data from drones rapidly forecasts flood damage : The Asahi Shimbun – Asahi Shimbun

A startup developed a system to quickly predict how flooding could affect surrounding areas, pairing artificial intelligence (AI) with drone technology.

The brainchild of Arithmer Inc., an entrepreneurial spinoff from the University of Tokyo, shows the possible flow of floodwaters from rivers and streams on a 3-D map, using measurement data from drones.

As the simulation can be completed within a few hours, compared with several months to several years under conventional methods, the invention is drawing attention from local governments nationwide.

In June, the coastal town of Hirono in Fukushima Prefecture became the first municipality to sign an agreement with Arithmer to introduce the technology. It is looking to utilize the system not only for forecasting damage from floods and tsunami but also for issuing disaster victim certificates faster.

According to Arithmer, which utilizes mathematical theories to develop AI programs, inquiries are pouring in from around the country.

Yoshihiro Ota, 48, president of Arithmer, who is also a modern mathematician, said the AI-based technology will prove helpful for both municipalities and businesses.

"Flooding estimates can be made by combining all available elements such as rainfall and where river embankments collapse, so our system will allow evacuation centers and factories to be set up at much safer locations," he said.

In torrential rainfall these days, inundation damage has been reported around small and midsize rivers and other locations that were not identified by local governments as dangerous.

That is, in part, because geographical data collected for estimates through aerial laser measuring and by other means typically do not cover an entire region. Another problem is that it takes much time to process a vast amount of data on rainfall and water flows, rendering it difficult to locate all areas that would likely be inundated.

The technology developed by Arithmer and its partners can create computerized reproductions of streets and rivers scanned by drones at a precision level of 1 cm by 1 cm.

It can also automatically complete the otherwise time-consuming flooding prediction promptly because the AI system learns characteristics of each area based on the estimated rainfall, water levels in rivers, the locations of dams and other factors.

While 100 or so scenarios can be simulated, it is possible to identify the worst possible case from among them.

See more here:

AI paired with data from drones rapidly forecasts flood damage : The Asahi Shimbun - Asahi Shimbun

Allen-backed AI2 incubator aims to connect AI startups with world-class talent – TechCrunch

You can't swing a cat these days without hitting some incubator or accelerator, or a startup touting its artificial intelligence chops, but for some reason, there are few if any incubators focused just on the AI sector. Seattle's Allen Institute for AI is doing just that, with the promise of connecting small classes of startups with the organization's formidable brains (and 250 grand).

AI2, as the Paul Allen-backed nonprofit is more commonly called, has already spun off two companies: XNOR.ai, which has made major advances in enabling AI tasks to run on edge devices, is operating independently and licensing its tech to eager customers. And Kitt.ai, a (profitable!) natural language processing platform, was bought by Baidu just last month.

"We're two for two, and not in a small way," said Jacob Colker, who has led several Seattle and Bay Area startups and incubators, and is currently the entrepreneur-in-residence charged with putting AI2's program on the map. Until now the incubation program has kept a low profile.

Startups will get the expected mentorship and guidance on how to, you know, actually run a company, but the draw, Colker emphasized, is the people. A good AI-based startup might get good advice and fancy office space from just about anyone, but only AI2, he pointed out, is a major concentration of three core competencies in machine learning, natural language processing, and computer vision.

YOLO in action, from the paper presented at CVPR.

XNOR.ai, still partly run out of the AI2 office, is evidence of that. The company's latest computer vision system, YOLO, performs the rather incredible feat of both detecting and classifying hundreds of object types on the same network, locally and in real time. YOLO scored runner-up for Best Paper at this year's CVPR, and that's not the first time its authors have been honored. I'd spend more time on the system but it's not what this article is about.

There are dozens more PhDs and published researchers; AI2 has plucked (or politely borrowed) high-profile academics from all over, but especially the University of Washington, a longstanding presence at the frontiers of tech. AI2 CEO Oren Etzioni is himself a veteran researcher and is clearly proud of the team he's built.

"Obviously AI is hot right now," he told me, "but we're not jumping on the bandwagon here."

The incubator will have just a handful of companies at a time, he and Colker explained, and the potential investment of up to $250K is more than most such organizations are willing to part with. And as a nonprofit, there are fewer worries about equity terms and ROI.

But the applications of supervised learning are innumerable, and machine learning has become a standard developer tool so ambitious and unique applications of AI are encouraged.

"We're not looking for a doohickey," Etzioni said. "We want to make big bets and big companies."

AI2 is hoping to get just 2-5 companies for its first batch. That makes it a lot easier for me to keep eyes on them, that's for sure. Interested startups can apply at the AI2 site.

Read more here:

Allen-backed AI2 incubator aims to connect AI startups with world-class talent - TechCrunch

Task Force on Artificial Intelligence – hearing to discuss use of AI in contact tracing – Lexology

On July 8, 2020, the House Financial Services Committee's Task Force on Artificial Intelligence held a hearing entitled "Exposure Notification and Contact Tracing: How AI Helps Localities Reopen Safely and Researchers Find a Cure."

In his opening remarks, Congressman Bill Foster (D-IL), chairman of the task force, stated that the hearing would discuss the essential tradeoffs that the coronavirus disease 2019 (COVID-19) pandemic was forcing on the public between life, liberty, privacy and the pursuit of happiness. Chairman Foster noted that what he called invasive artificial intelligence (AI) surveillance may save lives, but would come at a tremendous cost to personal liberty. He said that contact tracing apps that use back-end AI, which combines raw data collected from voluntarily participating COVID-19-positive patients, may adequately address privacy concerns while still capturing similar health and economic benefits as more intrusive monitoring.

Congressman Barry Loudermilk (R-GA) discussed how digital contact tracing could be more effective than manual contact tracing, but noted that it must have strong participation from people, a 40-60 percent adoption rate overall, to be effective. He said that citizens would need to trust that their privacy would not be violated. To help establish this trust, he suggested, people would need to be able to easily determine what data would be collected, who would have access to the data and how the data would be used.

Four panelists testified at this hearing. Below is a summary of each panelist's testimony, followed by an overview of some of the post-testimony questions that committee members raised:

Brian McClendon, the CEO and co-founder of the CVKey Project, discussed how privacy, disclosure and opt-in data collection impact the ability to identify and isolate those infected with COVID-19. AI and machine learning require large amounts of data. He stated that while the most valuable data to combat COVID-19 can be found in the contact-tracing interviews of infected and exposed people, difficulties exist in capturing this information. For example, attempted phone calls to reach exposed individuals may go unanswered because people often do not pick up calls from unknown numbers. Mobile apps, he said, offer a way to conduct contact tracing with greater accuracy and coverage. Mr. McClendon discussed two ways that such apps could work: (1) using GPS location or (2) via low-energy Bluetooth. For the latter, Mr. McClendon explained a method developed by two large technology companies: when a user of a digital contact tracing app tests positive for COVID-19, he or she then chooses to opt in to upload non-personally identifiable information to a state-run cloud server, which would then determine whether potential exposures have occurred and provide in-app notifications to such users.
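The Bluetooth approach Mr. McClendon describes can be sketched in miniature: phones broadcast rotating random identifiers, COVID-positive users opt in to upload theirs, and each phone checks for matches locally, so no identity ever leaves the device. The sketch below is a simplified, hypothetical model of that flow (the real protocol derives rotating identifiers from daily keys and adds time windows and attenuation data), not the actual implementation:

```python
import secrets

def generate_rolling_ids(n):
    """Generate n random rolling identifiers, as a phone might broadcast over
    low-energy Bluetooth. The IDs carry no personally identifiable information."""
    return [secrets.token_hex(16) for _ in range(n)]

def find_exposures(observed_ids, uploaded_positive_ids):
    """On-device matching: intersect the IDs this phone has overheard with the
    random IDs that positive users opted in to upload to the state-run server.
    The notification decision is made entirely on the phone."""
    return set(observed_ids) & set(uploaded_positive_ids)
```

For example, if two of the identifiers a phone has overheard appear in the uploaded set, `find_exposures` returns those two matches and the app can show an in-app notification, without the server ever learning who was exposed.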

Krutika Kuppalli, M.D., an infectious diseases physician, discussed how using contact tracing can help impede the spread of infectious diseases. She noted that it is important to remember ethical considerations involving public health information, data protection and data privacy when using these technologies.

Andre M. Perry, a fellow at the Brookings Institution, began his presentation by discussing how COVID-19 has disproportionately affected Black and Latino populations, reflecting historical inequalities and structural racism. Mr. Perry identified particular concerns regarding AI and contact tracing as they pertain to structural racism and bias. These tools, he stated, are not neutral and can either exacerbate or mitigate structural racism. To address such bias, he suggested, contact tracing should include people who have generally been excluded from systems that have provided better health and economic outcomes. Further, the use of AI tools in the healthcare arena presents the same risk as in other fields: the AI is only as good as the programmers who design it. Bias in programming can lead to flaws in technology and amplify biases in the real world. Mr. Perry stated that greater recruitment and investment with Black-owned tech firms, rigorous reviews and testing for bias and more engagement with local communities is required.

Ramesh Raskar, a professor at MIT and the founder of the PathCheck Foundation, emphasized three elements during his presentation: (1) how to augment manual contact tracing with apps; (2) how to make sure apps are privacy-preserving, inclusive, trustworthy, and built using open-source methods and nonprofits; and (3) the creation of a National Pandemic Response Service. Regarding inclusivity, Mr. Raskar noted that Congress should actively require that solutions be accessible broadly and generally; contact tracing cannot be effective only for segments of the population that have access to the latest technology.

Post-testimony questions

Chairman Foster asked about limits of privacy-preserving techniques by providing an example of a person who had been isolated for a week, then interacted with only one other person, and then later received a notification of exposure: such a person likely will know the identity of the infected person. Mr. Raskar replied that data protection has different layers: confidentiality, anonymity, and then privacy. In public health scenarios, Mr. Raskar stated that today, we only care about confidentiality and not anonymity or privacy (eventually, he commented, you will have to meet a doctor).

If we were to implement a federal contact tracing program, Representative Loudermilk asked, how would we ensure citizens that they can know what data will be used and collected, and who has access? Mr. McClendon responded that under the approach developed by the two large technology companies, data is random and stored on a personal phone until the user opts in to upload random numbers to the server. The notification determination is made on the phone and the state provides the messages. The state will not know who the exposed person is until that person opts in by calling the manual contact tracing team.

Representative Maxine Waters (D-CA) asked what developers of a mobile contact tracing technology should consider to ensure that minority communities are not further disadvantaged. Mr. Perry reiterated that AI technologies have not been tested, created, or vetted by persons of color, which has led to various biases.

Congressman Sean Casten (D-IL) asked whether AI used in contact tracing is solely backward-looking or could predict future hotspots. Mr. McClendon replied that to predict the future, you need to know the past. Manual contact tracing interviews, where an infected or exposed person describes where he or she has been, would provide significant data to include in a machine-learning algorithm, enabling tracers to predict where a hotspot might occur in the future. However, privacy issues and technological incompatibility (e.g., county and state tools that are not compatible with each other) mean that a lot of data is currently siloed and even inaccessible, impeding the ability for AI to look forward.

Read more:

Task Force on Artificial Intelligence - hearing to discuss use of AI in contact tracing - Lexology

The nominees for the VentureBeat AI Innovation Awards at Transform 2020 – VentureBeat

Last Chance: Register for Transform, VB's AI event of the year, hosted online July 15-17.

At our AI-focused Transform 2020 event, taking place July 15-17 entirely online, VentureBeat will recognize and award emergent, compelling, and influential work through our second annual VB AI Innovation Awards. Drawn from our daily editorial coverage and the expertise of our nominating committee members, these awards give us a chance to shine a light on the people and companies making an impact in AI.

Here are the nominees in each of the five categories: NLP/NLU Innovation, Business Application Innovation, Computer Vision Innovation, AI for Good, and Startup Spotlight.

Dr. Dilek Hakkani-Tur

A senior principal scientist at Amazon Research and faculty member at the University of California, Santa Cruz, Dr. Hakkani-Tur currently works on solving natural dialogue for Amazon's Alexa AI. She has researched and worked on natural language processing, conversational AI, and more for over two decades, including stints at Google and Microsoft. She holds dozens of patents and has written or co-authored more than 200 papers in the area of natural language and speech processing. Recent work includes improving task-oriented dialogue systems, increasing the usefulness of open-domain dialogue responses, and repurposing existing data sets for dialogue state tracking for natural language generation (NLG).

BenevolentAI

BenevolentAI's mission is to use AI and machine learning to improve drug discovery and development. The amount of available data is overwhelming, and despite a steady stream of new research, too many pharmaceutical experiments fail today. BenevolentAI helps by accelerating the indexing and retrieval of medical papers and clinical trial reports about new treatments for diseases that don't have cures. Fact-based decision-making is essential everywhere, but for the pharmaceutical industry, the facts just need to be harvested in a synthesized, relevant, and efficient way.

StereoSet

Research continues to uncover bias in AI models. StereoSet is a data set designed to measure discriminatory behaviors like racism and sexism in language models. Researchers Moin Nadeem, Anna Bethke, and Siva Reddy built StereoSet and have made it available to anyone who makes language models. The team maintains a leaderboard to show how models like BERT and GPT-2 measure up.
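StereoSet's headline metric can be illustrated with a toy reduction: for each stereotype/anti-stereotype sentence pair, check which sentence the language model scores higher. The sketch below assumes you already have per-sentence log-likelihoods from some model; the real benchmark additionally tracks a language-modeling score and combines the two, so treat this as illustrative only:

```python
def stereotype_score(scored_pairs):
    """Fraction of pairs where the model scores the stereotypical sentence
    higher than its anti-stereotypical counterpart. An unbiased model would
    land near 0.5; values near 1.0 indicate a strong stereotype preference."""
    preferred = sum(1 for stereo_ll, anti_ll in scored_pairs if stereo_ll > anti_ll)
    return preferred / len(scored_pairs)

# Hypothetical log-likelihoods for three (stereotypical, anti-stereotypical)
# sentence pairs from some language model:
pairs = [(-12.1, -14.3), (-9.8, -9.5), (-20.0, -22.7)]
```

Here `stereotype_score(pairs)` is 2/3, meaning the hypothetical model preferred the stereotypical sentence in two of the three pairs.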

Hugging Face

Hugging Face seeks to advance and democratize natural language processing (NLP). The company wants to contribute to the development of technology in this domain by growing the open source community, conducting research, and creating NLP libraries like Transformers and Tokenizers. Hugging Face offers free online tools anyone can use to leverage models such as BERT, XLNet, and GPT-2. The company says more than 1,000 companies use its tools in production, including Apple and Microsoft's Bing group.

Jumbotail

Jumbotail's technology updates traditional mom-and-pop stores in India, often known as kirana stores, by connecting them with recognized brands and other high-quality product producers to help transform them into modern convenience stores. Jumbotail does so without raising the cost to customers by collecting and mining millions of data points in real time every day. Thanks to its AI backend, Jumbotail became India's leading online wholesale food and grocery marketplace, with a full stack that includes integrated supply chain and logistics, as well as an in-house financial tech platform for payments and credit. The insights and tech developed around this new business model empower producers and customers, and Jumbotail is poised to expand to other continents.

Codota

Codota is developing a platform powered by machine learning that suggests and autocompletes Python, C, HTML, Java, Scala, Kotlin, and JavaScript code. By automating routine programming tasks that would normally require a team of skilled developers, the company is helping reduce the estimated $312 billion organizations spend on debugging each year. Codota's cloud-based and on-premises solutions, which are used by developers at Google, Alibaba, Amazon, Airbnb, Atlassian, and Netflix, complete lines of code based on millions of programs and individual context locally, without sending any sensitive data to remote servers.

Rasa

Rasa is an open source conversational AI company whose tools enable startups to build their own (close to) state-of-the-art natural language processing systems. These tools, some of which have been downloaded over 3 million times, bring AI assistants to life by providing the technical scaffolding necessary for robust conversations. Rasa invests in research to create conversational AI, furnishing developers at companies like Adobe, Deutsche Telekom, Lemonade, Airbus, Toyota, T-Mobile, BMW, and Orange with solutions to understand messages, determine intent, and capture key contextual information.

Dr. Richard Socher

Dr. Richard Socher is probably best known for founding MetaMind, which Salesforce acquired in 2016, and for his contribution to the landmark ImageNet database. But in his most recent role as chief scientist and EVP at Salesforce (he just left to start a new company), Socher was responsible for bringing forth AI applications, from initial research to deployment.

Platform.ai

To help domain experts without AI expertise deploy AI products and services, Platform.ai offers computer vision without coding. It's an end-to-end rapid development solution that uses proprietary and patent-pending AI and HCI algorithms to visualize data sets and speed up labeling and training by 50-100 times. The goal is to empower companies to build good AI. Platform.ai can count big-name brands like GE, Claro, and Mattel as customers. The company's founders include chief scientist Jeremy Howard, who is also the founding researcher of deep learning education organization Fast.ai and a professor at the University of San Francisco.

Abeba Birhane and Dr. Vinay Prabhu

In their powerful work, "Large image datasets: A pyrrhic win for computer vision?", researchers Abeba Birhane, Ph.D. candidate at University College Dublin, and Dr. Vinay Prabhu, principal machine learning scientist at UnifyID, examined the problematic opacity, data collection ethics, labeling and classification, and consequences of large image data sets. These data sets, including ImageNet and MIT's 80 Million Tiny Images, have been cited hundreds of times in research. Birhane and Prabhu's work is under peer review, but it has already resulted in MIT voluntarily and formally withdrawing the Tiny Images data set on the grounds that it contains derogatory terms as categories, as well as offensive images, and that the nature of images in the data set makes remedying it unfeasible.

Dr. Dhruv Batra

An assistant professor in the School of Interactive Computing at Georgia Tech and a research scientist at Facebook AI Research, Dr. Dhruv Batra focuses primarily on machine learning and computer vision. His long-term research goal is to create AI agents that can perceive their environments, carry natural-sounding dialogue, navigate and interact with their environment, and consider the long-term consequences of their actions. He's also cofounder of Caliper, a platform designed to help companies better evaluate the data science skills of potential machine learning, AI, and data science hires. And he helped create Eval.ai, an open source platform for evaluating and comparing machine learning (ML) and artificial intelligence (AI) algorithms at scale.

Ripcord

Ripcord offers a portfolio of physical robots that can digitize paper records, even removing staples. Employing computer vision, lifting and positioning arms, and high-quality RGB cameras that capture details at 600 dots per inch, the company's robots are able to scan at 10 times the speed of traditional processes and handle virtually any format. Courtesy of partnerships with logistics firms, Ripcord transports files from customers such as Coca-Cola, BP, and Chevron to its facilities, where it scans them and either stores them to meet compliance requirements or shreds and recycles them. The company's Canopy platform uploads documents to the cloud nearly instantly and makes them available as searchable PDFs.

Machine Learning Emissions Calculator

Authors Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres built an online calculator so anyone can understand the carbon emissions their research generates. Machine learning research demands high compute resources, and even as the field achieves key technological breakthroughs, the authors of the calculator believe transparency about the environmental impact of those achievements should be generalized and included in any paper, blog post, or publication about a given work. They also provide a simple template for standardized, easy reporting.
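The calculator's core arithmetic reduces to energy consumed times the carbon intensity of the grid powering the hardware. A minimal sketch with illustrative default values follows; the actual tool looks up region-specific grid intensities and hardware profiles, so these numbers are assumptions, not its real constants:

```python
def estimate_emissions_kg(gpu_hours, gpu_watts, pue=1.58, carbon_intensity=0.475):
    """Estimate CO2-equivalent emissions (kg) for a compute job.

    gpu_hours        -- total accelerator time in hours
    gpu_watts        -- average power draw per accelerator in watts
    pue              -- data-center power usage effectiveness (cooling/overhead factor)
    carbon_intensity -- kg CO2eq emitted per kWh by the local grid (illustrative default)
    """
    energy_kwh = (gpu_watts / 1000.0) * gpu_hours * pue
    return energy_kwh * carbon_intensity
```

Under these assumed defaults, 100 GPU-hours at an average draw of 300 W comes to roughly 22.5 kg of CO2eq, the kind of figure the authors argue should be reported alongside results.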

Niramai

Niramai developed noninvasive, radiation-free early-stage breast cancer detection for women of all age groups using thermal imaging technologies and AI-based analytics software. The company works with various government and nonprofit entities to enable low-cost health check-ups in rural areas in India. Prevention and early detection are key to improving the outcomes of cancers, but health centers are not always equipped with expensive screening machines. Because thermal imaging is safe, cost-effective, and easy to deploy, it can improve early screening in low-tech facilities around the world.

Dr. Pascale Fung

Dr. Pascale Fung is director of the Centre for AI Research (CAiRE) at the Hong Kong University of Science and Technology (HKUST). Among other accolades and honors, Fung represents the university at Partnership on AI and is an IEEE fellow because of her contributions to human-machine interactions. Through her work with CAiRE, she has helped create an end-to-end empathetic chatbot and a natural language processing Q&A system that enables researchers and medical professionals to quickly access information from the COVID-19 Open Research Dataset (CORD-19).

Dr. Timnit Gebru

Dr. Timnit Gebru continues to be one of the strongest voices battling racism, misogyny, and other biases in AI, not just in the actual technology but within the wider community of AI researchers and practitioners. She's the co-lead of Ethical AI at Google and cofounded Black in AI, a group dedicated to sharing ideas, fostering collaborations, and discussing initiatives to increase the presence of Black individuals in the field of AI. Her work includes Gender Shades, the landmark research exposing the racial bias in facial recognition systems, and Datasheets for Datasets, which aims to create a standardized process for adding documentation to data sets to increase transparency and accountability.

Relimetrics

Relimetrics develops full-stack computer vision and machine learning software for QA and process control in Industry 4.0 applications. Unlike many other competitors in the field of visual inspection, Relimetrics proposes an end-to-end flow that can be adopted by large groups, as well as smaller manufacturers. Industry 4.0 is associated with a plethora of technological stacks, but few are able to scale to large and small manufacturers across multiple industries yet remain simple enough for domain experts to deploy them, which is where Relimetrics comes in.

Dr. Daniela Braga, DefinedCrowd

DefinedCrowd creates high-quality training data for enterprises' AI and machine learning projects, including voice recognition, natural language processing, and computer vision workflows. The company crowdsources data labeling and more from hundreds of thousands of paid contributors and passes the massive curation on to its enterprise customers, which include several Fortune 500 companies. The startup's cofounder and CEO, Dr. Daniela Braga, has credentials in speech technology and crowdsourcing dating back nearly two decades, including nearly seven years at Microsoft that included work on Cortana. She has led DefinedCrowd through several rounds of funding, most recently a large $50.5 million round in May 2020.

Flatfile

Flatfile wants to replace manual data janitoring for enterprises with its AI-powered data onboarding technology. Flatfile is content agnostic, so a company in essentially any industry can take advantage of its Portal and Concierge platforms, which are able to run on-premises or in the cloud. Flatfile has completed two funding rounds, one of which wrapped up in June 2020. As of September 2019, the company had attracted 30 customers with essentially no paid advertising. Less than a year later, it had 400 companies on its waitlist, ranging from startups up to publicly traded companies.

DoNotPay

DoNotPay, founded by British-born entrepreneur Josh Browder, offers over 100 bots to help consumers cancel memberships and subscriptions, fight corporations, file for benefits, sue robocallers, and more. While much of the company's automation engine is rules-based, it leverages third-party machine learning services to parse terms of service (ToS) agreements for problematic clauses, such as forced arbitration. To address challenges stemming from the pandemic, DoNotPay recently launched a bot that helps U.S.-based users file for unemployment. In the future, the startup plans to bring to market a Chrome extension that will work proactively for users in the background.
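For flavor, clause flagging of this kind can be approximated with simple pattern matching, though DoNotPay reportedly relies on third-party machine learning services rather than anything this naive. The clause names and patterns below are entirely hypothetical:

```python
import re

# Toy patterns for clause types a consumer might want flagged. These are
# illustrative assumptions, not DoNotPay's actual rules or categories.
CLAUSE_PATTERNS = {
    "forced arbitration": r"\bbinding arbitration\b|\bwaive\b.*\bjury trial\b",
    "auto-renewal": r"\bautomatically renew",
}

def flag_clauses(tos_text):
    """Return the names of problematic clause types found in a ToS document."""
    text = tos_text.lower()
    return [name for name, pattern in CLAUSE_PATTERNS.items() if re.search(pattern, text)]
```

A real system would need to handle paraphrase and legal boilerplate variation, which is precisely why a learned parser beats keyword rules here.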

See original here:

The nominees for the VentureBeat AI Innovation Awards at Transform 2020 - VentureBeat

Top tech trends for 2021: Gartner predicts hyperautomation, AI and more will dominate business technology – TechRepublic

Operational resiliency is key as the COVID-19 pandemic continues to change how companies will do business next year.

There are nine top strategic technology trends that businesses should plan for in 2021 as the pandemic continues, according to Gartner's analysts. Their findings were presented on Monday at the virtual Gartner IT Symposium/Xpo Americas conference, which runs through Thursday.

Organizational plasticity is key to these trends. "When we talk about the strategic technology trends, we actually have them grouped into three different themes, which is people centricity, location independence, and resilient delivery," said Brian Burke, research vice president at Gartner. "What we're talking about with the trends is how do you leverage technology to gain the organizational plasticity that you need to form and reform into whatever's going to be required as we emerge from this pandemic?"

SEE: COVID-19 workplace policy (TechRepublic Premium)

Here are the top nine trends, in no particular order. And they will have an impact for more than the next year. Companies can look at these for insight through 2025, per Gartner.

"We don't prioritize these. So we don't say that one is more important than the other," Burke explained. "Different organizations in different industries will prioritize the impact of the trends on them as being higher or lower, but when we look really across industries and across geographies and across these trends, we think that these are the most impactful trends that organizations generally are going to face over the next five years."

The Internet of Behaviors (IoB) is an emerging trend. The term "Internet of Behaviors" was first coined in Gartner's tech predictions for 2020. This is how organizations, whether government or private sector, are leveraging technology to monitor behavioral events and manage the data to upgrade or downgrade the experience to influence those behaviors. This is what Gartner calls the "digital dust" of people's daily lives. It includes facial recognition, location tracking, and big data.

Burke said, "In practical terms, it's real things like health insurance companies that are monitoring your fitness bands and your food intake, and the number of times you go to the gym, and those things to adjust your premiums."

Gartner predicts that by the end of 2025, more than half of the world's population will be subject to at least one IoB program. Burke said: "That might be a little bit of an understatement because when you think about the social credit system in China, you're already up to double digit percentages of people that are being monitored just with one implementation. There's all kinds of these things that are popping up here and there and everywhere."

The cybersecurity mesh technology trend enables people to securely access any digital asset, no matter where the asset or the person is located. Burke said: "The cybersecurity mesh is really how we've really reached a tipping point or inflection point with security, and that's causing us to really decouple policy enforcement from policy decision-making. Those were coupled in the past. What that allows us to do is it allows us to put the security perimeter around the individual as opposed to around the organization."

He added: "The way that security professionals have traditionally thought about security is that inside of the organization is secure. Then we make sure that everything outside of your organization is secured through that security mechanism inside the organization, inside the firewall, so to speak."

With more digital assets outside of the firewall, particularly with cloud and more remote employees, the security perimeter needs to be around an individual and enforcement is handled through a cloud access security broker, so that policy enforcement is done at the asset itself, Burke explained.

Gartner predicts that by 2025, the cybersecurity mesh will support more than half of digital access control requests.

Another trend is total experience (TX). Last year, Gartner introduced multiexperience and this is a step beyond that. Multiexperience is multiple modes of access using different technologies, and TX ties together customer experience, employee experience, and user experience with the multiexperience environment, Burke said.

Organizations need a TX strategy as interactions become more mobile, virtual, and distributed, particularly as a result of the COVID-19 pandemic.

"The challenge is that in most organizations, those different disciplines are siloed. So what we're saying, the basis of that prediction, is that if you can bring together customer experience, employee experience, multiexperience, and user experience, the combinatorial effect, combinatorial innovation as a combination of strategies, is harder to replicate than a single strategy, according to Michael Porter. And we believe that, too. So you can bring those things together. That's where you'll gain the competitive advantage that will be realized through those experience metrics," Burke said.

Gartner predicts that organizations providing a TX will outperform competitors across key satisfaction metrics over the next three years.

This trend, intelligent composable business, is about leveraging, from an application perspective, packaged business capabilities, which can be thought of as chunks of functionality accessible through APIs, Burke said.

"They can be developed by vendors or provided by vendors or developed in-house. That kind of framework allows you to cobble together those packaged business capabilities, and then access data through a data fabric, to provide the configuration and rapid reconfiguration of business services that can be highly granular, even personalized."

"The intelligent composable business is about bringing together things like better decision making, better access to data that changes the way that we do things, which is required for flexible applications, and which we can deliver when we have this composable approach to application delivery," Burke said.

Hyperautomation is another key strategic trend for 2021. It was a top strategic trend last year as well, and it has been evolving.

"We've seen tremendous demand for automating repetitive manual processes and tasks; so robotic process automation was the star technology that companies were focused on to do that. That has been happening for a couple of years, but what we're seeing now is that it's moved from task based automation, to process based automation, so automating a number of tasks in a process, to functional automation across multiple processes and even moving towards automation at the business ecosystem level. So really, the breadth of automation has expanded as we go forward with hyperautomation," Burke explained.

Another strategic trend, anywhere operations, refers to an IT operating model that supports customers everywhere and enables employees everywhere and manages the deployment of business services across distributed infrastructure.

Burke said that anywhere operations were always there, but the pandemic made them urgent.

"There always had been a movement towards location independence and providing services at the point where they're required. But back at least in America and Europe in March, suddenly all of these people working from home really raised the awareness of it, which was we have an immediate need to be able to support remote employees and most organizations were able to resolve that really quickly. But then we also are dealing with our customers, and our customers are remote and our products need to become deliverable remotely as well."

With employees working from home, and salespeople working from home, talking to purchasing agents and buyers working from home, it ramped up the problem and the need to deliver services to people wherever they are and wherever they are required, Burke said.

Gartner predicts that by the end of 2023, 40% of organizations will have applied anywhere operations to deliver optimized and blended virtual and physical customer and employee experiences.

This trend, AI engineering, involves bringing engineering discipline to an organization's AI efforts, because only 53% of projects make it from artificial intelligence (AI) prototypes to production, according to Gartner research.

"AI engineering is about providing the sort of engineering discipline, a robust structure that will emphasize having AI projects that are delivered in a consistent way to ensure that they can scale, move into production, all of those kinds of things. So it's really bringing the engineering discipline to AI for end user organizations. So when you talk about large vendors, yes, they've been delivering successfully for the past quite a few years, but end user organizations are needing to move out of the experimental stage with AI and move into a robust delivery model and that's really what AI engineering's about," Burke said.

Distributed cloud is another technology trend and it involves the distribution of public cloud services to different physical locations while the operation, governance, and evolution of the services are the responsibility of the public cloud provider.

Gartner predicts that by 2025, most cloud service platforms will provide at least some distributed cloud services that begin at the point of need.

Privacy is more important than ever as global data protection legislation matures, and Gartner predicts that by 2025, half of large organizations will implement privacy-enhancing computation for processing data in untrusted environments and multiparty data analytics use cases. Privacy-enhancing computation protects data in use while maintaining secrecy or privacy.

Burke said the actual number of companies that will use privacy-enhancing computation is tough to assess. "That's a difficult one to gauge because what we've seen over the years of course, is that a lot of organizations have not focused as much attention as they probably require on privacy. But we think that what's happening now is that privacy legislation globally is really starting to take hold. So when privacy legislation is introduced, it takes a while for enforcement to catch up to legislation."

He added: "Privacy is going to be an issue for organizations going forward. The importance is going to increase, but also the opportunities are going to be increased to be able to use trusted third parties for analytics and share data across parties without exposing the private details in that data and that kind of thing."

"One of the things that's really an underlying premise of all of our research, including the top technology trends, is that we're not going to come out of the pandemic and go back to what we were," Burke said. "We're going to come out of the pandemic, but we're going to move forward on a different trajectory. So really, trying to anticipate what that trajectory is going to be for your organization helps to guide you on how you're going to emerge from the pandemic on that different trajectory. So these trends are focused on organizational agility because that's what's going to be successful as we step into a new future phase, hopefully sometime soon."


AI engineering is one of the key strategic trends Gartner predicts for 2021.


Read more:

Top tech trends for 2021: Gartner predicts hyperautomation, AI and more will dominate business technology - TechRepublic

Loyal Markets on the FX Market and AI Technology – GlobeNewswire

BELIZE CITY, Belize, Sept. 04, 2020 (GLOBE NEWSWIRE) -- With forex trading growing in popularity alongside the artificial intelligence revolution, companies like Loyal Markets are playing their part in helping the industry realise the full potential of artificial intelligence in trading.

Both the forex and technology industries are changing and accelerating at an unprecedented rate. As regulation shifts to keep up with the growth, brokers are competing to unveil the latest technological advancements. As such, most have now expanded their offerings to include on-the-go trading through mobile apps. The challenge in the competitive field of forex trading, therefore, is to create a solution that stands out from the pack: one that simultaneously adheres to regulatory changes while also meeting the needs of a new trading generation.

Loyal Markets has been using artificial intelligence to create a proprietary system that combines the machine learning of AI with the discretion of humans to analyse trading insights and to find trading patterns and trends with high odds of success.

Some of the most valuable information for retail investors in forex trading is currency patterns and trends. Investors with Loyal Markets can now select from various AI trading solutions on the trading platform to assist in their trading decisions.

"With the Intraday Pattern Feed and Trend Prediction Engine, using artificial intelligence to trade forex currency is now significantly simpler," said Will Colmore. "Retail traders and independent investment advisors can use the same technology as Wall Street firms to find patterns early."

This technology can also report the percentage of past occurrences in which a trade signal was confirmed as successful. Pre-calculated through backtesting, this information enables Loyal Markets' Fund Management team to make informed decisions about a pattern using artificial intelligence's predictions.
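The press release does not describe how these backtested percentages are produced. As a rough illustration only, a signal's historical "confirmation rate" is typically just the fraction of its past occurrences that ended profitably; the function and example data below are hypothetical and do not come from Loyal Markets:

```python
# Hypothetical sketch: backtested confirmation rate for a trade signal.
# All names and figures here are illustrative, not from Loyal Markets.

def signal_confirmation_rate(signals, outcomes):
    """Return the fraction of historical signal occurrences that were wins.

    signals  -- one entry per historical occurrence of the signal
    outcomes -- booleans, True where the resulting trade was profitable
    """
    if len(signals) != len(outcomes):
        raise ValueError("each signal occurrence needs exactly one outcome")
    if not signals:
        return 0.0
    wins = sum(1 for won in outcomes if won)
    return wins / len(signals)

# Example: a pattern that fired 8 times historically and won 6 of them
rate = signal_confirmation_rate(
    ["double-bottom"] * 8,
    [True, True, False, True, True, False, True, True],
)
print(f"{rate:.0%}")  # 75%
```

In a real backtest the outcomes would be derived by replaying the signal's entry and exit rules over historical price data, rather than supplied by hand as here.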

About Loyal Markets

Loyal Markets is one of the world's leading brokerage firms. The company's mission is to expand internationally and become a global financial powerhouse. Uniting a workforce of specialized investment professionals globally, Loyal Markets also boasts comprehensive administrative support, state-of-the-art artificial intelligence and excellent risk control protocols.

Media Contact
Company: Loyal Markets
Contact Person: Will Colmore
Email: contact@loyalmarkets.com
Website: https://www.loyalmarkets.com
Telephone: +501 4892 5899
Address: 1782 Coney Dr, Belize City, Belize

See more here:

Loyal Markets on the FX Market and AI Technology - GlobeNewswire