AI is struggling to adjust to 2020 – TechCrunch

Andrea Gagliano, Contributor

2020 has made every industry reimagine how to move forward in light of COVID-19, civil rights movements, an election year and countless other big news moments. On a human level, we've had to adjust to a new way of living. We've started to accept these changes and figure out how to live our lives under these new pandemic rules. While humans settle in, AI is struggling to keep up.

The issue with AI training in 2020 is that, all of a sudden, we've changed our social and cultural norms. The truths that we have taught these algorithms are often no longer actually true. With visual AI specifically, we're asking it to immediately interpret the new way we live with updated context that it doesn't have yet.

Algorithms are still adjusting to new visual cues and trying to understand how to accurately identify them. As visual AI catches up, we also need to place renewed importance on routine updates in the AI training process so inaccurate training datasets and preexisting open-source models can be corrected.

Computer vision models are struggling to appropriately tag depictions of the new scenes or situations we find ourselves in during the COVID-19 era. Categories have shifted. For example, say there's an image of a father working at home while his son is playing. AI is still categorizing it as leisure or relaxation. It is not identifying this as work or office, despite the fact that working with your kids next to you is the very common reality for many families during this time.

Image Credits: Westend61/Getty Images

On a more technical level, we physically have different pixel depictions of our world. At Getty Images, we've been training AI to "see." This means algorithms can identify images and categorize them based on the pixel makeup of that image and decide what it includes. Rapidly changing how we go about our daily lives means that we're also shifting what a category or tag (such as "cleaning") entails.

Think of it this way: cleaning may now include wiping down surfaces that already visually appear clean. Algorithms have previously been taught that to depict cleaning, there needs to be a mess. Now, cleaning looks very different. Our systems have to be retrained to account for these redefined category parameters.

This applies on a smaller scale as well. Someone could be grabbing a doorknob with a small wipe or cleaning their steering wheel while sitting in their car. What was once a trivial detail now holds importance as people try to stay safe. We need to catch these small nuances so images are tagged appropriately. Then AI can start to understand our world in 2020 and produce accurate outputs.
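As a hedged illustration of what that retraining loop could look like (the tag names and pipeline here are hypothetical, not Getty's actual system), a first step is simply flagging every image whose existing tag belongs to a category whose definition has shifted, so it can be re-annotated and fed back into training:

```python
# Categories whose real-world meaning changed in 2020 (illustrative list).
REDEFINED = {"cleaning", "work", "leisure"}

def needs_reannotation(records, redefined=REDEFINED):
    """records: iterable of (image_id, predicted_tag) pairs.
    Returns the image ids whose tags fall in a redefined category."""
    return [img for img, tag in records if tag in redefined]

preds = [("img1", "leisure"), ("img2", "sports"), ("img3", "cleaning")]
print(needs_reannotation(preds))  # ['img1', 'img3']
```

A real pipeline would route these flagged images to human annotators before retraining, but the triage step itself is this simple.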

Image Credits: Chee Gin Tan/Getty Images

Another issue for AI right now is that machine learning algorithms are still trying to understand how to identify and categorize faces with masks. Faces are being detected as solely the top half of the face, or as two faces: one with the mask and a second of only the eyes. This creates inconsistencies and inhibits accurate usage of face detection models.

One path forward is to retrain algorithms to perform better when given solely the top portion of the face (above the mask). The mask problem is similar to classic face detection challenges such as someone wearing sunglasses or detecting the face of someone in profile. Now masks are commonplace as well.
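One minimal sketch of that first path (my own illustration, not a production face pipeline): given a detected face bounding box, keep only the unmasked upper region before handing it to a downstream recognition model.

```python
def upper_face_region(box, visible_fraction=0.5):
    """Given a face bounding box (x, y, w, h) with y growing downward,
    keep only the top `visible_fraction` of the box (eyes/forehead)."""
    x, y, w, h = box
    return (x, y, w, max(1, int(h * visible_fraction)))

full_face = (120, 80, 64, 96)
print(upper_face_region(full_face))  # (120, 80, 64, 48)
```

Retraining the recognition model on crops like these, alongside sunglasses and profile views, is what makes masked faces just another occlusion case rather than a failure mode.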

Image Credits: Rodger Shija/EyeEm/Getty Images

What this shows us is that computer vision models still have a long way to go before truly being able to "see" in our ever-evolving social landscape. The way to counter this is to build robust datasets. Then, we can train computer vision models to account for the myriad ways a face may be obstructed or covered.

At this point, we're expanding the parameters of what the algorithm sees as a face, be it a person wearing a mask at a grocery store, a nurse wearing a mask as part of their day-to-day job or a person covering their face for religious reasons.

As we create the content needed to build these robust datasets, we should be aware of the potential for increased unintentional bias. While some bias will always exist within AI, we now see imbalanced datasets depicting our new normal. For example, we are seeing more images of white people wearing masks than of other ethnicities.

This may be the result of strict stay-at-home orders, where photographers have limited access to communities other than their own and are unable to diversify their subjects. It may be due to the ethnicity of the photographers choosing to shoot this subject matter. Or it may be due to the level of impact COVID-19 has had on different regions. Regardless of the reason, this imbalance will lead to algorithms detecting a white person wearing a mask more accurately than a person of any other race or ethnicity.

Data scientists and those who build products with models have an increased responsibility to check the accuracy of models in light of shifts in social norms. Routine checks and updates to training data and models are key to ensuring quality and robustness of models now more than ever. If outputs are inaccurate, data scientists can quickly identify them and course correct.

It's also worth mentioning that our current way of living is here to stay for the foreseeable future. Because of this, we must be cautious about the open-source datasets we're leveraging for training purposes. Datasets that can be altered should be. Open-source models that cannot be altered need to have a disclaimer so it's clear which projects might be negatively impacted by the outdated training data.

Identifying the new context we're asking the system to understand is the first step toward moving visual AI forward. Then we need more content: more depictions of the world around us and the diverse perspectives of it. As we amass this new content, we must take stock of new potential biases and of ways to retrain existing open-source datasets. We all have to monitor for inconsistencies and inaccuracies. Persistence and dedication to retraining computer vision models is how we'll bring AI into 2020.


US Restricts Export of AI Related to Geospatial Imagery – Tom’s Hardware

The U.S. Bureau of Industry and Security announced yesterday that it would restrict the export of artificial intelligence-related technologies beginning January 6. That might seem like bad news for the American tech industry, but it's actually not as bad as it could've been, because right now the restrictions only apply to geospatial imagery.

Those restrictions won't prohibit U.S. tech companies from exporting AI products related to geospatial imagery outright. The rules allow for the export of such tech to Canada, for example, and companies can apply for licenses to export their wares to other countries. There's just no guarantee those licenses will be granted.

James Lewis, from the Center for Strategic and International Studies think tank, told Reuters that the Bureau of Industry and Security essentially wants "to keep American companies from helping the Chinese make better AI products that can help their military." He said the U.S. fears the possibility of AI-controlled targeting systems.

The restrictions essentially just give the U.S. government more control over certain technologies that could give other countries a military advantage. While some companies might chafe under those restrictions--especially if their shareholders aren't pleased--it's not uncommon for governments to enforce these kinds of rules.

Things could have been much worse. AI has become a central part of many services, and it's possible to use AI on nearly any kind of hardware if you're patient enough, so broader restrictions could've made problems for much of the industry. Instead, the U.S. government introduced a narrow rule that applies to specific tech.

But that might not always be the case. Reuters reported in December that the U.S. was considering other rules that would also limit the export of technologies related to quantum computing, gate-all-around field-effect transistors, 3D printing and chemical weapons. (Which, again, isn't that surprising.)


Q&A: High-performance flash key to unlocking data-intense AI workloads – SiliconANGLE News

Artificial-intelligence workloads are intrinsically data-intensive. They need massive amounts of data to train and produce valuable insights. So storage becomes a key consideration for running these AI applications. First, they need to run where the data resides, which varies in distributed environments. And then those informational gems that result from the AI algorithms need to be stored somewhere.

Flash storage has a multi-dimensional performance profile that can take any size of file or workload and run through it without creating storage-related bottlenecks. Two IT leaders in high-performance computing, Nvidia Corp. and Pure Storage Inc., saw these demands in their customers. In response, they joined forces and created AIRI, an AI-ready infrastructure that can help unlock data intelligence.

"You know, a lot of it comes from our customers," said Charlie Boyle (pictured right), vice president and general manager of DGX Systems at Nvidia. "That's how we first started with Pure. It's our joint customer saying we need this stuff to work really fast. They're making a massive investment with us in computing. And so if you're going to run those systems at 100%, you need storage that can feed them. If the customer has data, we want it to be as simple as possible for them to run AI."

Boyle and Brian Schwarz (pictured left), vice president of product management at Pure Storage, spoke with Dave Vellante (@dvellante) and Lisa Martin (@LisaMartinTV), co-hosts of theCUBE, SiliconANGLE Media's mobile livestreaming studio, during the Pure//Accelerate event in Austin, Texas. They discussed the Nvidia and Pure Storage partnership, the adoption of AIRI, and AI advancements in the industry (see the full interview with transcript here). (* Disclosure below.)

[Editor's note: The following answers have been condensed for clarity.]

Martin: Give us a little bit of an overview of where Pure and Nvidia are.

Schwarz: It really was born out of work with mutual customers. We brought out the FlashBlade product. Obviously, Nvidia was in the market with DGXs for AI, and we really started to see overlap in a bunch of initial [AI] deployments. So that's really kind of where the partnership was born. And, obviously, the AI data hub is the piece that we really talked about at this year's Accelerate.

Martin: Tell us a little bit about the adoption [of AIRI] and what customers are able to do with this AI-ready infrastructure?

Boyle: [Early customers] had been using storage for years, and AI was kind of new to them, and they needed that recipe. So the early customer experiences turned into AIRI, the solution. And the whole point of it is to simplify AI.

AI sounds kind of scary to a lot of folks, and the data scientists really just need to be productive. They don't care about infrastructure, but IT has to support this. So IT was very familiar with Pure Storage. They used them for years for high-performance data, and as they brought in the Nvidia compute to work with that, having a solution that we both supported was super important to the IT practitioners.

Vellante: How do you see the landscape? Are you seeing pretty aggressive adoption or is it still early?

Boyle: So, every customer is at a different point. There's definitely a lot of people that are still early, but we've seen a lot of production use cases. So depending on the industry, it really depends on where people are in the maturity curve. But really our message out to the enterprise is start now; whether you've got one data scientist or a community of data scientists, there's no reason to wait on AI.

Vellante: So, what are the key considerations for getting started?

Schwarz: So I think understanding the business value creation problem is a really important step. And many people go through an early stage of experimentation, a prototyping stage, before they go into a mass-production use case. It's a very classic IT adoption curve.

If you look forward over the next 15 to 20 years, there's a massive amount of AI coming, and it is a new form of computing: GPU-driven computing. And the whole point about AIRI is getting the ingredients right to have this new set of infrastructure: storage, network, compute, and the software stack.

Martin: For other customers in different industries, how do you help them even understand the AI pipeline?

Boyle: A lot of it is understanding your data, and that's where Pure and the [AI] data hub come in. And then formulate a question like, what could I do if I knew this thing? Because that's what AI and deep learning are all about. It's coming up with insights that aren't natural when you just stare at the data. How can the system understand what you want? And then, what are the things that you didn't expect to find that AI is showing you about your data? AI can unlock things that you may not have pondered yourself.

And one of the biggest aha moments that I've seen in customers in the past year or so is just how quickly, by using GPU computing, they can actually look at their data, do something useful with it, and then move on to the next thing. So, that rapid experimentation is what AI is all about.

Vellante: You can't help but run into [machine intelligence]; it's going to be part of your everyday life. Your thoughts?

Boyle: We all use AI every day; you just don't know it. It's the voice recognition system getting your answer right the first time, all in less than a second. Before, you'd talk to an IVR system, wait, then go to an operator; now people are getting a much better user experience out of AI-backed systems.

Vellante: The AI leaders are applying machine intelligence to that data. How has this modern storage that we heard about this morning affected customers' abilities to really put data at their core?

Schwarz: I think one of the real opportunities, particularly with flash, is to consolidate data into a smaller number of larger islands of data, because that's where you can really drive the insights. And historically, in a disk-driven world, you would never try to consolidate your data, because there were too many bad performance implications of trying to do that. The difference with flash is there's so much performance at the core of it, at the foundation of it.

Martin: I want to ask you about distributed environments. You know, customers have so much choice for everything these days: on-prem, hosted, SaaS, public cloud. What are some of the trends that you're seeing?

Schwarz: The first thing I always tell people is, where's your data gravity? Moving very large sets of data is actually still a hard challenge today. So running your AI where your data is being generated is a good first principle. The second thing is about giving people flexibility. So trying to use a consistent set of infrastructure and software and tooling that allows people to migrate and change over time is an important strategy.

Vellante: So, ideally, on-prem versus cloud implementations shouldn't be different, but are they today?

Boyle: So, at the lowest level, there are always technical differences, but at the layer that customers are using, we run one software stack no matter where you're running. It's the same Nvidia software stack. And it's really [about] running AI where your data is.

Vellante: Now that you've been in the market for a while, what are Pure's competitive differentiators?

Schwarz: Why do we think [flash] is a great fit for an AI use case? One is the flexibility of the performance; we call it multi-dimensional performance. Small files, large files, metadata-intensive workloads, FlashBlade can do them all. It's a ground-up design, it's super flexible on performance, but also, more importantly, I would argue simplicity is a real hallmark of who we are.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE's and theCUBE's coverage of the Pure//Accelerate event. (* Disclosure: TheCUBE is a paid media partner for the Pure//Accelerate event. Neither Pure Storage Inc., the sponsor for theCUBE's event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)



Investors bet big on AI for health diagnostics – VentureBeat

We're seeing a new wave of venture investments in healthtech companies, especially those with strong artificial intelligence and machine learning components. Led by some of the world's largest biopharma companies and tech-focused venture capitalists, these investments are backing efforts to speed drug discovery, improve tests and treatments, and further medical research. For now, most of the investment is focused in the diagnostics/tools (Dx/Tools) sector. A Silicon Valley Bank analysis last month found that 44 venture-backed deals raised $2.2 billion between 2015 and the first half of 2017 for Dx/Tools companies that use AI/ML as part of their underlying technology.

The investors are increasingly diverse.

For our analysis, SVB segmented Dx/Tools into three subsectors: Dx Tests (yes/no test results), Dx/Tools Analytics (actionable data analytics to help direct treatment), and R&D Tools (research equipment and services for biopharma and academia). These deals include multi-$100 million financings for three companies: GRAIL, Guardant Health, and Human Longevity.

Tech-focused and healthcare investors view investments in this new subsector through different lenses.

Tech investors tend to see their AI/ML investments in Dx/Tools as a vehicle for tackling big data in the healthcare arena. When that complex problem is solved, they expect the market will be huge, as will the exit opportunities. Thus, tech investors are making early-stage bets. For example, they are banding together in AI/ML platform companies like Atomwise, Cofactor Genomics, Color Genomics, Ginkgo Bioworks, and Neurotrack.

Healthcare investors typically consider regulatory pathway, reimbursement, revenue ramp, and the acquirer landscape as they evaluate investments. While these investors see much promise in AI/ML technologies, so far they have largely remained on the sidelines. AI/ML represents a new paradigm in healthcare company formation, and these early-stage companies are just beginning to address approval and commercialization, and thus are often considered too early for healthcare investors.

Looking ahead, collaboration among tech and healthcare investors seems natural: It would create an enhanced team to take advantage of technology expertise and experience in healthcare market approval and adoption. To date, there have been limited collaborations, such as Guardant Health.

Valuation remains one of the sticking points. Anecdotally, there are numerous examples of healthcare investors being outbid by tech investors. But as early-stage companies mature, we expect to see more activity by traditional healthcare venture investors.

At this stage, there are several key questions that have yet to be answered.

There will be some big wins in this space, but the next financing rounds will serve as a key indicator of investor confidence. We'll likely see an investor mix led by new tech investors and biopharma corporate venture arms. And we also expect large tech companies to invest as they continue to expand their healthcare footprint. Again, how big a role healthcare venture investors will play is uncertain.

On the acquisition side, big biopharma will continue to target AI/ML companies. And large tech companies looking to make further inroads into healthcare (such as Google, Amazon, Apple, Microsoft, and Dell) will not likely pass up opportunities to take a stake in this emerging healthcare sector.

As machine learning and artificial intelligence are rapidly commercialized for healthcare applications, we expect healthcare investing to shift paradigms, leading to new waves of investors and opportunities for promising companies.

Jonathan Norris is Managing Director at Silicon Valley Bank.


MIT develops AI that identifies similarities between unrelated artworks, spanning centuries, artists, and mediums – Art Critique

Art imitating art? That isn't quite the case here, but a new algorithm developed by the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in conjunction with Microsoft searches for art that looks like other artworks. Called MosAIc, the algorithm uses artificial intelligence to search for works, from various cultures and time periods, that are similar in nature or concept to a specified artwork.

Led by Mark Hamilton, a CSAIL PhD student, MosAIc was inspired by "Rembrandt and Velázquez," an exhibition centred on Rembrandt and Diego Velázquez, among others, shown at Amsterdam's Rijksmuseum that wrapped up in January of this year. The exhibition showcased the similarities between works by the 17th-century masters, one Dutch and one Spanish, although there is no evidence that the artists would have been aware of one another. One such comparison included in the exhibition was of Francisco de Zurbarán's The Martyrdom of Saint Serapion and Jan Asselijn's The Threatened Swan. The pair of paintings piqued Hamilton's interest, and soon MosAIc, which brings together unlikely yet uncanny artworks, was born.

AI has become more ingrained in the art world in recent years, as artworks created by AI have started to take off, even if for some they're simply fascinating oddities. MosAIc, by contrast, works as a Conditional Image Retrieval (CIR) application that needs just one image to get started, which sets it apart from other existing projects with similar goals, like X Degrees of Separation by Google.

MosAIc was trained using the open-access collections of New York City's Metropolitan Museum of Art and the Rijksmuseum. Using a tree-like data structure called a conditional KNN tree, MosAIc searches its stores for artworks that are related to the original image. The related works are categorised into two branches, either media or culture, which then allows users to adjust the parameters in order to find objects from a specific culture or medium of art to narrow down, or prune, their search. MosAIc spans centuries of works, creating a network of art from various cultures and from artists who were working in a plethora of mediums. Additionally, Hamilton and his team worked to allow MosAIc to link artworks that are similar in meaning and theme, not only in colour and style.
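The "prune by metadata, then find neighbours" idea can be sketched in a few lines. This is a naive linear scan standing in for MosAIc's actual conditional KNN tree, and the feature vectors and metadata below are made up for illustration:

```python
import math

def conditional_knn(query_vec, items, condition, k=1):
    """items: list of (id, metadata_dict, feature_vector).
    Filter candidates by the metadata condition, then return the
    ids of the k nearest feature vectors to the query."""
    pool = [(item_id, vec) for item_id, meta, vec in items if condition(meta)]
    pool.sort(key=lambda entry: math.dist(entry[1], query_vec))
    return [item_id for item_id, _ in pool[:k]]

collection = [
    ("swan",   {"culture": "Dutch",   "medium": "oil"},     [0.9, 0.1]),
    ("martyr", {"culture": "Spanish", "medium": "oil"},     [0.8, 0.2]),
    ("vase",   {"culture": "Chinese", "medium": "ceramic"}, [0.1, 0.9]),
]
# Nearest Dutch work to the query embedding:
print(conditional_knn([0.85, 0.15], collection, lambda m: m["culture"] == "Dutch"))
# ['swan']
```

The tree structure described in the paper exists to avoid this linear scan: it lets the condition prune whole subtrees of candidates instead of testing every item.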

"Every time I use the algorithm," Hamilton told ArtNet News, "I find surprises."

Hamilton, along with Stephanie Fu, William T. Freeman, and Mindren Lu, all of MIT, have published a paper on their research into CIR systems such as MosAIc, discussing their aims and abilities. The team doesn't expect MosAIc to replace curators, recognising the invaluable expertise they and art historians bring in discovering deeper meaning and context amongst artists and their works; however, they hope MosAIc will become a resource for experts planning exhibitions and perhaps even those in other areas.

"Going forward, we hope this work inspires others to think about how tools from information retrieval can help other fields like the arts, humanities, social science, and medicine," Hamilton said in MIT News. "These fields are rich with information that has never been processed with these techniques and can be a source for great inspiration for both computer scientists and domain experts. This work can be expanded in terms of new datasets, new types of queries, and new ways to understand the connections between works."

MosAIc also surprised the team in its ability to pick out the blind spots of GANs, or generative adversarial networks, which are used in making so-called deepfakes. While deepfakes can be an informational and engaging tool, think of the way AI brought the Mona Lisa or Salvador Dalí himself to life, they have also been linked to the creation of fake news and false propaganda. In being able to flag such GANs, MosAIc could be beneficial in identifying AI-generated fakes that have become more prevalent in the last year or so and will most likely only become more a part of day-to-day life in the future.


How Europe's AI ecosystem could catch up with China and the U.S. – VentureBeat

From Alibaba and Baidu to Google, Facebook, and Microsoft, China and the United States produced virtually every one of the top consumer AI companies in the world today. That leaves Europe trailing behind the U.S. and China, even though Europe still has the largest community of cited AI researchers.

Startup founders, analysts, and organizations seeking to bring ecosystems together for collective action pondered how the European AI ecosystem can catch up with China and the United States at TechBBQ, a gathering of hundreds of Nordic tech startups held recently in Copenhagen.

Presenters argued that Europe has to turn things around not just for the good of the European economy, but also to provide the world with an alternative to the corporate-driven approach of the U.S. and the state-driven approach of China.

"If you look today at some of the spending, which is devoted to artificial intelligence and frontier technologies, we're pretty much squeezed between the U.S. and now China, and China is leading," said Jacques Bughin, a senior advisor at the McKinsey Global Institute.

Bughin and others at McKinsey in February coauthored the "Notes from the AI frontier" report, which evaluates the European AI ecosystem and identifies areas where Europe can begin making strides.

Europe edges out the U.S. in total number of software developers (5.7 million to 4.4 million), and venture capital spending in Europe continues to rise to historically high levels. Even so, the U.S. and China beat Europe in venture capital spending, startup growth, and R&D spending. The U.S. also outpaces Europe in AI, big data, and quantum computing patents.

A Center for Data Innovation study released last month also concluded that the U.S. is in the lead, followed by China, with Europe lagging behind.

Multiple surveys of business executives have found that businesses around the world are struggling to scale the use of AI, but European firms trail major U.S. companies in this metric too, with the exception of smart robotics companies.

This trend could be in part due to lower levels of data digitization, Bughin said.

About 3-4% of businesses surveyed by McKinsey were found to be using AI at scale. The majority of those are digital native companies, he said, but 38% of major companies in the U.S. are digital natives compared to 24% in Europe.

"In Europe, you have two problems: You've got a startup problem, but you also have an incumbency problem, where most of the companies [are] actually lagging in terms of knowledge of technologies and division of these technologies compared to the U.S.," Bughin said.

Then there's McKinsey's AI Readiness Index, which combines eight factors, like human skills, investment capacity, number of AI startups per capita, and infrastructure, thought to influence a country's ability to build and support an AI industry and implement the technology in existing industries. In this area, the top-ranking countries are the U.S. and select European countries, such as Ireland, Sweden, Finland, and the U.K.

China excels in categories like ICT connectedness, investment capacity, and AI startups, but the countrys lower preparedness in categories like digital readiness bumps it down to a rank of 7th, between Estonia and Holland.

Countries in southern and eastern Europe generally rated lower in each of the eight AI enabler categories than those in western or northern Europe.

Countries with vibrant, innovative AI startups likely to scale and go international typically have local venture capitalist funding, as well as state investments to build a strong infrastructure that supports businesses and allows the formation of a market.

For those lagging behind, turning things around is essential, Bughin said, because AI will be a major driver of GDP growth in the decades ahead.

"If laggard European countries were to close the current readiness gap with the United States, Europe's GDP growth could accelerate by another 0.5 point[s] a year, or an extra €900 billion by 2030," the report reads.
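As a rough sanity check on that figure (the ~€16 trillion baseline below is my assumption, roughly Europe's GDP around 2019; the report's exact model isn't given here), compounding an extra 0.5 points of annual growth through 2030 does land close to €900 billion of additional output:

```python
# Back-of-envelope check, not the report's methodology.
baseline_gdp = 16e12   # euros, assumed baseline (~2019 European GDP)
extra_growth = 0.005   # +0.5 percentage points per year
years = 11             # 2019 -> 2030

# Extra output in 2030 attributable to the higher growth rate alone.
extra_output = baseline_gdp * ((1 + extra_growth) ** years - 1)
print(round(extra_output / 1e9))  # ~902 (billion euros), in line with ~900B
```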

Bughin has a number of ideas for how Europe can transform into a leader in AI. To grow the AI ecosystem in Europe, he suggests, investment will have to go beyond gaining a technical understanding of how machine intelligence works.

"AI is more than technology. As I say, it's about scalability. You need social, emotional skills, you need technical skills, you need digital skills. It's a major transformation, and it's all about ecosystem," he said. Earlier this year, OpenAI CTO Greg Brockman also posited the idea that developing emotional fortitude can be a necessary prerequisite for tackling the technical details of AI.

Bughin also recommends that startups recognize there's a bigger picture than their own company. "It's really about not only you as an entrepreneur, but an ecosystem of entrepreneurship," Bughin said. "It matters not only because as a small startup you want to make money, but to make money you need a market."

Finally, Bughin recommends that governments and businesses invest in the growth of an AI ecosystem, but says that funding of the eight major areas laid out in the AI Readiness Index needs to be ongoing, not a fleeting investment for a few years.

"If you want the revenue of the market, you need to stand there for quite a while," he said. "It's not the game of three years. It's a game of 10 to 15 years."

Another route to differentiate Europe from the U.S. and China is a more privacy-driven approach built on the back of human rights-respecting regulation like GDPR. But when asked about the idea, Bughin said, "This is a narrative, not necessarily a business model."

Bughin believes there are B2B2C opportunities in sectors like biotechnology, health care, and agriculture that can spill over into the rest of the economy. In that model, opportunities may be larger than in consumer-driven business models, and privacy won't carry the same importance in B2B2C as it does in the B2C space.

At TechBBQ, Digital Hub Denmark spoke onstage about opportunities and challenges Europe faces due to AI. With a prominent spot directly across from the main stage, the organization, created to promote entrepreneurship, also hosted an AI design sprint workshop and a discussion among about a half dozen AI startups, like 2021.ai and Neural AI, on topics like how to create a Danish AI cluster.

Digital Hub Denmark CEO Camilla Rygaard-Hjalsted thinks Europe will never catch up with the AI investment flowing to businesses in the United States and China, but that Europe can still become a global leader.

"I strongly believe that we can become frontrunners within an ethical application of AI in our societies," she said. "In the short run, the stronger European regulation compared to China and the U.S. in this field might decrease our ability to scale revenue; however, in the long run, this focus on AI for the people can serve as our competitive advantage, and we become [a] role model for the rest of [the] world, one can only hope."

Above: A timeline of major events in AI history dating back to the 1950s created by artist Hjotefar.

Image Credit: Digital Hub Denmark

Like Bughin, she believes AI will be an important driver of GDP in Europe and that a talent shortage will be a major issue in the decade ahead. To support the continued growth of a European AI ecosystem, she supports the acceleration of digital frontrunner companies and ensuring that startups gain access to public data.

One example of extraordinary access to public data growing a business comes from Corti, a Danish company that recorded 112 conversations with emergency operators in order to create a deep learning algorithm that can detect cardiac arrest events via phone calls.

Rygaard-Hjalsted also believes Denmark's aggressive climate change goal to reduce greenhouse gas emissions by 70% by 2030, compared to 1990 levels, could attract talent.

"Today's scarce resource is really talent. As the CEO of Digital Hub Denmark, I believe that the combination of AI for the people and the relentless effort to solve the rising climate issues will make us attractive to international AI talent looking for purpose, and thus provide the international investments needed to scale climate solutions," she said.

Anna Metsranta is a business designer at Solita, a B2B company that helps other businesses get on the path to becoming AI companies by digitizing their operations, helping them become data-driven, and developing AI models.

One of the biggest challenges she spelled out during a panel conversation about the European AI ecosystem is how hype and a lack of basic understanding keeps business leaders from taking decisive action.

"The problem with the inflated expectations caused by the hype is that when senior management expects miracles, and they expect that they can just pour all of the data into this magical black box called AI, and fantastic insight will come out of it, they don't see the potential of the realistic use cases, which might be quite modest," she said. "And they should be modest to get started with the technology to start growing your maturity and your understanding. That [expectation] leads to lack of funding, [and then] we can't get companies to fund these initiatives."

In other words, hype inflates expectations, while low levels of understanding lead to a lack of vision among business executives.

"If you don't understand the technology, then you firstly don't understand its possibilities. And this leads to a lack of vision; you can't think, 'What could I do with this technology? How could it help my business transform?' That's one problem. The other problem is that you don't see its limitations. Then you buy into this ridiculous hype, these sensationalist news headlines that typically state AI can do anything, or it's a threat to humanity that will take all of our jobs and then it will kill us all off," she said.

Some executives try to buy their way out of learning these things by hiring a lot of data scientists. Data-driven companies need data scientists, but hiring alone doesn't work because business leaders still have to make decisions about where the company is headed, Metsranta said.

"AI will become ubiquitous in business the same way AI is becoming ubiquitous in smartphones," she said. So in order to avoid the negative impact of inaccurate expectations and ensure funding for AI projects, she prescribes more education for business executives and killing the myth of the Terminator scenario in AI.

In response to Metsranta's call for more informed opinions on AI, Christian Hannibal, director of digital policy at Dansk Industri, suggested more programs like an AI public education initiative launched in Finland last year. In June 2018, the University of Helsinki and Finnish tech firm Reaktor launched the Elements of AI course to demystify the technology, with the goal of educating 1% of the Finnish population.

More than 200,000 people have completed the free course thus far, according to the Elements of AI website.

"I would very much like to see this initiative rolled out on a European scale, because if there's something Europe can do that the U.S. and China haven't done, [it] is to democratize the knowledge of AI so that we go beyond the hype and give a lot more people insights about what the technology can do in their trucking companies and sawmills and hospitals and whatnot," he said.

AI conversations onstage at TechBBQ revolved around a sense of urgency that Europe needs to make strides now to be considered alongside the United States and China. Some of the ways Europe can get there, like the need for R&D spending or funding for startups, are the same as anywhere else in the world. But speakers at TechBBQ working with both large corporations and startups seem to believe Europe can also lean on its unique assets like aggressive climate change initiatives and privacy regulation.

If Europe can leverage its distinct advantages, even if it can't catch up in total venture capital spending, it could successfully create a vision of what the world can be with AI that's different from the Chinese model, which generally bends toward the state, and the U.S. model, which generally bends toward corporations.

Continued here:

How Europe's AI ecosystem could catch up with China and the U.S. - VentureBeat

Will AI cross the proverbial chasm? Algorithmia resolves the practical pitfalls of machine learning – ZDNet

"A lot of people in academia are not very good at software engineering," says Kenny Daniel, co-founder and chief technology officer of cloud computing startup Algorithmia. "I always had more of the software engineering bent."

That, in a nutshell, is some of what makes six-year-old, Seattle-based Algorithmia uniquely focused in a world over-run with machine learning offerings.

Amazon, Microsoft, Google, IBM, Salesforce, and other large companies have for some time been offering cut-and-paste machine learning in their cloud services. Why would you want to stray to a small, young company?

No reason, unless that startup had a particular knack for hands-on support of machine learning.

That's the premise of Daniel's firm, founded with Diego Oppenheimer, a graduate of Carnegie Mellon and a veteran of Microsoft. The two became best friends in undergrad at CMU, and when Oppenheimer went to industry, Daniel went to pursue a PhD in machine learning at USC. While researching ML, Daniel realized he wanted to build things more than he wanted to just theorize.

"I had the idea for Algorithmia in grad school," Daniel recalled in an interview with ZDNet. "I saw the struggle of getting the work out into the real world; my colleagues and I were developing state-of-the-art [machine learning] models, but not really getting them adopted in the real world the way we wanted."

He dropped out of USC and hooked up with Oppenheimer to found the company. Oppenheimer had seen from the industry side that even for large companies such as Microsoft, there was a struggle to get enough talent to get things deployed and in production.

The duo initially set out to create an App Store for machine learning, a marketplace in which people could buy and sell ML models, or programs. They got seed funding from venture firm Madrona Ventures, and took up residence in Seattle's Pike Place. "There's a tremendous amount of ML talent out here, and the rents are not as crazy" as Silicon Valley, he explained.

"If companies are not getting the pay-off, if there's a lack of progress, we could be looking at another hype cycle," says Kenny Daniel, CTO and co-founder of machine learning operations service provider Algorithmia.

Their intent was to match up consumers of machine learning, companies that wanted the models, with developers. But Daniel noticed something was breaking down. The majority of customers using the service were consuming machine learning from their own teams. There was little transaction volume because companies were just trying to get stuff to work.

"We said, okay, there's something else going on here: people don't have a great way of turning their models into scalable, production-ready APIs that are highly available and resilient," he recalled having realized.

"A lot of these companies would have data scientists building models in Jupyter on their laptop, and not really having a good way to hook them up to a million iOS apps that are trying to recognize images, or a back-end data pipeline that's trying to process terabytes of data a day."

There was, in other words, "a gap there in software engineering." And so the business shifted from a focus on a marketplace to a focus on providing the infrastructure to make customers' machine learning models scale up.
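That "gap in software engineering" is easy to picture as code. The sketch below is purely illustrative (Algorithmia's actual serving stack is not public): it wraps a stand-in model in the thin request/response layer that a notebook on a data scientist's laptop typically lacks, and which a serving platform then scales, secures, and monitors.

```python
import json

# Hypothetical stand-in for a trained model: any callable mapping
# a feature vector to a prediction.
def toy_model(features):
    return {"score": sum(features) / len(features)}

def predict_handler(request_body: bytes) -> bytes:
    """Turn a raw JSON request into a JSON prediction response.

    This is the thin serving layer Daniel describes as missing: the
    glue between a model in Jupyter and a production API.
    """
    payload = json.loads(request_body)
    result = toy_model(payload["features"])
    return json.dumps(result).encode("utf-8")
```

A real deployment would mount such a handler behind an HTTP server with autoscaling, authentication, and monitoring, which is exactly the plumbing a platform like Algorithmia aims to supply.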

The company had to solve a lot of the multi-tenant challenges that were fundamental limitations, long before those techniques became mainstream with the big cloud platforms.

Also: How do we know AI is ready to be in the wild? Maybe a critic is needed

"We were running functions before AWS Lambda," says Daniel, referring to Amazon's server-less offering.

Problems such as, "How do you manage GPUs, because GPUs were not built for this kind of thing, they were built to make games run fast, not for multi-tenant users to run jobs on them."

Daniel and Oppenheimer started meeting with big financial and insurance firms, to discuss solving their deployment problems. Training a machine learning model might be fine on AWS. But when it came time to make predictions with the trained model, to put it into production for a high volume of requests, companies were running into issues.

The companies wanted their own instances of their machine learning models in virtual private clouds, on AWS or Azure, with the ability to have dedicated customer support, metrics, management and monitoring.

That led to the creation of an Algorithmia Enterprise service in 2016. That was made possible by fresh capital, an infusion of $10.5 million from Gradient Ventures, Google's AI investment operation, followed by a $25 million round last summer. In total, Algorithmia has received $37.9 million in funding.

Today, the company has seven-figure deals with large institutions, most of it for running private deployments. You could get something like what Algorithmia offers by using Amazon's SageMaker, for example. But SageMaker is all about using only Amazon's resources. The appeal with Algorithmia is that the deployments will run in multiple cloud facilities, wherever a customer needs machine learning to live.

"A number of these institutions need to have parity across wherever their data is," said Daniel. "You may have data on premise, or maybe you did acquisitions, and things are across multiple clouds; being able to have parity across those is one of the reasons people choose Algorithmia."

Amazon and other cloud giants each tout their offerings as end-to-end services, said Daniel. But that runs counter to reality, which is that there is a soup composed of many technologies that need to be brought together to make ML work.

"In the history of software, there hasn't been a clear end-to-end, be-all winner," Daniel observed. "That's why GitHub, and GitLab, and Bitbucket and all these continue to exist, and there are different CI [continuous integration] systems, and Jenkins, and different deployment systems and different container systems."

"It takes a fair amount of expertise to wire all these things together."

There is some independent support for what Daniel claims. Gartner analyst Arun Chandrasekaran puts Algorithmia in a basket that he calls "ModelOps." The application "life cycle" of artificial intelligence programs, Chandrasekaran told ZDNet, is different from that of traditional applications, "due to the sheer complexity and dynamism of the environment."

"Most organizations underestimate how long it will take to move AI and ML projects into production."

Also: Recipe for selling software in a pandemic: Be essential, add some machine learning, and focus, focus, focus

Chandrasekaran predicts the market for ModelOps will expand as more and more companies try to deploy AI and run up against the practical hurdles.

While there is the risk that cloud operators will subsume some of what Algorithmia offers, said Chandrasekaran, the need to deploy outside a single cloud supports the role of independent ModelOps vendors such as Algorithmia.

"AI deployments tend to be hybrid, both from the perspective of spanning multiple environments (on-premises, cloud) as well as the different AI techniques that customers may use," he told ZDNet.

Aside from cloud vendors, Algorithmia competitors include Datarobot, H20.ai, RapidMiner, Hydrosphere, Modelop and Seldon.

Some companies may go 100% AWS, conceded Daniel. And some customers may be fine with generic abilities of cloud vendors. For example, Amazon has made a lot of progress with text translation technology as a service, he noted.

But industry-specific, or vertical market, machine learning is something of a different story. One customer of Algorithmia, a large financial firm, needed to deploy an application for fraud detection. "It sounds crazy, but we had to figure out all this stuff of, how do we know this data over here is used to train this model? It's important because it's an issue of their [the client's] liability."

The immediate priority for Algorithmia is a new product version called Teams that lets companies organize an invite-only, hosted gathering of those working on a particular model. It can stretch across multiple "federated" instances of a model, said Daniel. The pricing is by compute usage, so it's a pay-as-you-go option, versus the annual billing of the Enterprise version.

Also: AI startup Abacus goes live with commercial deep learning service, takes $13M Series A financing

To Daniel, the gulf that he observed in academia between pure research and software engineering is the thing that has always shot down AI in the past. The so-called "AI winter" periods over the decades were in large part a result of the practical obstacles, he believes.

"Those were periods when there was hype for AI and ML, and companies invested a lot of money," he said. "If companies are not getting the pay-off, if there's a lack of progress, we could be looking at another hype cycle."

By contrast, if more companies can be successful in deployment, it may lead to a flourishing of the kind of marketplace that he and Oppenheimer originally envisioned.

"It's like the Unix philosophy, these small things combining, that's the way that I see it," he said. "Ultimately, this will just enable all sorts of things, completely new scenarios, and that's incredibly valuable, things that we can make available in a free market of machine learning."


Fintech workforce to expand 19% by 2030 thanks to AI, Cambridge University predicts – Finextra

In a recent report, the Cambridge Centre for Alternative Finance (CCAF) and the World Economic Forum (WEF) found that rather than observing AI as a single instrument for blanket application across the industry, AI can be viewed as a toolkit that is being used to tinker and build services in an abundance of ways to achieve a variety of objectives.

Using data collected in a global survey during 2019, the report analysed a sample of 151 fintechs and incumbents across 33 countries to paint a rich picture of how AI technology is being developed and deployed within the financial services sector.

While 77% of respondents noted that they expect AI to become an essential business driver across the financial services industry in the near term, the report found that the way incumbents and fintechs are leveraging AI technologies differ in a number of ways.

A higher share of fintechs tend to be creating AI-based products and services, employing autonomous decision-making systems, and relying on cloud-based systems.

Incumbents, by contrast, appear to focus on harnessing AI to improve existing products. This might explain why AI appears to have a higher positive impact on fintechs' profitability.

30% of the fintechs surveyed indicated a substantial increase in profit as a result of AI, while only 7% of incumbents indicated such profitability.

As incumbents tend to leverage AI capabilities to foster process innovation within existing products and systems, fintechs are setting a wider trend of selling AI-enabled products as a service.

This approach presents a distinct new value proposition for firms (largely fintechs at this stage), to achieve two-fold economies of scale.

The firms can leverage both the prong of training AI and the prong of servicing new business areas to offer superior services with unique selling points. The report refers to this as an "AI Flywheel," where business innovation can become a self-reinforcing cycle.

Another key difference is that while incumbents expect AI technologies to replace almost 9% of jobs within their organisation by 2030, fintechs forecast that AI will expand their workforce by 19%.

Reductions are expected to be most numerous within investment management, with an anticipated net decrease of 24% over the next 10 years. The report predicts that, in line with these figures, 37,700 new fintech roles would be created within the pool of firms in the surveyed sample.

The report also highlights a topic that holds particular currency at present: the quality of and access to data, and the talent required to interpret that data. "Regardless of how innovative an AI technology is, its ability to deliver real economic value is contingent upon the data it consumes," the report says.

This concern is of huge importance for sustainable finance, as firms look increasingly toward AI technologies to drive investment returns in line with ESG policy.

The report says that responses illustrate that AI-enabled impact assessment and sustainable investing appear to possess the highest correlation with high AI-induced returns; however, real-world adoption may still be thwarted by data-related issues and a lack of algorithmic explainability.

Given the central role AI is increasingly playing within the financial services industry, the FCA and the Bank of England recently established the AI Public Private Forum (AIPPF) to explore the technical and public policy issues surrounding the adoption of AI and machine learning across the banking system.

Finextra Research and ResponsibleRisk will be focusing on sustainable finance in investment and asset management at the second SustainableFinance.Live Co-Creation Workshop in March 2020.

Register your interest for the event, where you will be able to discuss the demand for sustainability, the challenges that lie ahead for sustainable investment, and how firms across financial services and technology can achieve the UN's Sustainable Development Goals by 2030.


Adobe tests an AI recommendation tool for headlines and images – TechCrunch

Team members at Adobe have built a new way to use artificial intelligence to automatically personalize a blog for different visitors.

This tool was built as part of the Adobe Sneaks program, where employees can create demos to show off new ideas, which are then showcased (virtually, this year) at the Adobe Summit. While the Sneaks start out as demos, Adobe Experience Cloud Senior Director Steve Hammond told me that 60% of Sneaks make it into a live product.

Hyman Chung, a senior product manager for Adobe Experience Cloud, said that this Sneak was designed for content creators and content marketers who are probably seeing more traffic during the coronavirus pandemic (Adobe says that in April, its own blog saw a 30% month-over-month increase), and who may be looking for ways to increase reader engagement while doing less work.

So in the demo, the Experience Cloud can go beyond simple A/B testing and personalization, leveraging the company's AI technology Adobe Sensei to suggest different headlines, images (which can come from a publisher's media library or Adobe Stock) and preview blurbs for different audiences.

Image Credits: Adobe

For example, Chung showed me a mocked-up blog for a tourism company, where a single post about traveling to Australia could be presented differently to thrill-seekers, frugal travelers, partygoers and others. Human writers and editors can still edit the previews for each audience segment, and they can also consult a Snippet Quality Score to see the details behind Sensei's recommendation.

Hammond said the demo illustrates Adobe's general approach to AI, which is more about applying automation to specific use cases rather than trying to build a broad platform. He also noted that the AI isn't changing the content itself, just the way the content is promoted on the main site.

"This is leveraging the creativity you've got and matching it with content," he said. "You can streamline and adapt the content to different audiences without changing the content itself."

From a privacy perspective, Hammond noted that these audience personas are usually based on information that visitors have opted to share with a brand or website.


The secret to training AI might be knowing how to say "good job" – Quartz

It's tough to appreciate how efficient at learning humans really are. From just a few experiences, we can figure out complex tasks like learning to walk or becoming pros at the office coffee machine (roughly of equal importance).

But we haven't been able to give machines that same gift. Reinforcement learning, a promising sector of AI research where algorithms test different ways to accomplish a task until they can reliably get it right, is one method used to get machines to learn by doing.

The field's biggest problem: What's the best way to tell AI it has done something right?

This week, research trying to answer that question was published by major outfits in Silicon Valley: a joint venture between Alphabet's DeepMind and Elon Musk-funded OpenAI, as well as separate work from Microsoft-owned Maluuba.

The two papers represent different perspectives on how machines of the future might learn. OpenAI and DeepMind's work suggests that humans may be the best shepherds for fledgling AI, guiding the way it learns to ensure its safety. Maluuba takes a new look at an idea AI researchers have hammered at for years, trying to find a way for its algorithm to better understand its failures and successes without human intervention.

The DeepMind and OpenAI research, posted June 13, has humans watch two videos of a 3D object trying to do a front flip. The human chooses the video where the algorithm made the better attempt, but there's a secret! The algorithm has already tried to predict which attempt was better, so the human not only shows a better way to do the task, but gives a nod to how humans perceive the better attempt.

Much reinforcement learning research from DeepMind and OpenAI in the past has focused on video games, where there's a clear goal: get more points. This new research has an objective goal (do a front flip), but the human judgment can be subjective. OpenAI researchers say this idea could improve AI safety, because future algorithms would be able to align themselves with what humans think are correct and safe behaviors.
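The comparison setup above lends itself to a simple formulation. As a hedged sketch only (the actual DeepMind/OpenAI reward predictor is a neural network scoring whole video clips; the scalar scores here are a stand-in), a Bradley-Terry model turns two trajectory scores into a predicted preference, and the human's actual choice supplies the training signal:

```python
import math

def preference_probability(score_a, score_b):
    """Bradley-Terry model: probability the human prefers clip A,
    given the learned scalar scores for clips A and B."""
    return math.exp(score_a) / (math.exp(score_a) + math.exp(score_b))

def preference_loss(score_a, score_b, human_chose_a):
    """Cross-entropy loss against the human's actual choice; lower
    when the model's scores agree with the human."""
    p_a = preference_probability(score_a, score_b)
    return -math.log(p_a if human_chose_a else 1.0 - p_a)
```

Training then adjusts the scoring function to lower this loss over many human judgments, so the learned reward comes to reflect what humans perceive as the better attempt.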

Microsoft's Maluuba takes a different approach to reinforcement learning, and used it to beat the game Ms. Pac-Man, according to research published June 14. The team quadrupled the previous high score on the game (by human or machine), achieving the maximum number of points possible.

When the agent (Ms. Pac-Man) starts to learn, it moves randomly; it knows nothing about the game board. As it discovers new rewards (the little pellets and fruit Ms. Pac-Man eats), it begins placing little algorithms in those spots, which continuously learn how best to avoid ghosts and get more points based on Ms. Pac-Man's interactions, according to the Maluuba research paper.

As the 163 potential algorithms are mapped, they continually send the agent the movement they think would generate the highest reward, and the agent averages the inputs and moves Ms. Pac-Man. Each time the agent dies, all the algorithms process what generated rewards. These helper algorithms were carefully crafted by humans to understand how to learn, however.
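The averaging step described above can be sketched in a few lines. This is an illustration of the aggregation idea only, not Maluuba's implementation; the move set and helper scores are invented:

```python
MOVES = ["up", "down", "left", "right"]

def choose_move(helper_scores):
    """Average each helper's per-move reward estimates and pick the
    move with the highest mean.

    helper_scores: list of dicts mapping move -> estimated reward,
    one dict per helper algorithm.
    """
    averaged = {
        move: sum(h[move] for h in helper_scores) / len(helper_scores)
        for move in MOVES
    }
    return max(averaged, key=averaged.get)
```

Averaging many narrow estimators is what lets each helper stay simple: no single one has to model the whole board, only the reward near its own spot.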

"Instead of having one algorithm learn one complex problem, the AI distributes learning over many smaller algorithms, each tackling simpler problems," Maluuba says in a video. This research could be applied to other highly complex problems, like financial trading, according to the company.

But it's worth noting that since more than 100 algorithms are being used to tell Ms. Pac-Man where to move and win the game, this technique is likely to be extremely computationally intensive, so it's probably not ready for the Microsoft production line any time soon.


ICO launches guidance on AI and data protection – ComputerWeekly.com

The Information Commissioners Office (ICO) has published an 80-page guidance document for companies and other organisations about using artificial intelligence (AI) in line with data protection principles.

The guidance is the culmination of two years' research and consultation by Reuben Binns, an associate professor in the Department of Computer Science at the University of Oxford, and the ICO's AI team.

The guidance covers what the ICO thinks is best practice for data protection-compliant AI, as well as how it interprets data protection law as it applies to AI systems that process personal data. The guidance is not a statutory code. It contains advice on how to interpret relevant law as it applies to AI, and recommendations on good practice for organisational and technical measures to mitigate the risks to individuals that AI may cause or exacerbate.

It seeks to provide a framework for auditing AI, focusing on best practices for data protection compliance whether you design your own AI system, or implement one from a third party.

It embodies, it says, "auditing tools and procedures that we will use in audits and investigations"; "detailed guidance on AI and data protection"; and "a toolkit designed to provide further practical support to organisations auditing the compliance of their own AI systems".

It is also an interactive document which invites further communication with the ICO.

This guidance is said to be aimed at two audiences: those with a compliance focus, such as data protection officers (DPOs), general counsel, risk managers, senior management, and the ICO's own auditors; and technology specialists, including machine learning experts, data scientists, software developers and engineers, and cyber security and IT risk managers.

It points out two security risks that can be exacerbated by AI: the loss or misuse of the large amounts of personal data often required to train AI systems, and the software vulnerabilities that can be introduced by new AI-related code and infrastructure.

For, as the guidance document points out, the standard practices for developing and deploying AI involve, by necessity, processing large amounts of data. There is therefore an inherent risk that this fails to comply with the data minimisation principle.

This, according to the GDPR [the EU General Data Protection Regulation] as glossed by former Computer Weekly journalist Warwick Ashford, requires organisations not to hold data for any longer than absolutely necessary, and not to change the use of the data from the purpose for which it was originally collected, while at the same time they must delete any data at the request of the data subject.

While the guidance document notes that data protection and AI ethics overlap, it does not seek to provide "generic ethical or design principles for your use of AI".

What is AI, in the eyes of the ICO? "We use the umbrella term 'AI' because it has become a standard industry term for a range of technologies. One prominent area of AI is machine learning, which is the use of computational techniques to create (often complex) statistical models using (typically) large quantities of data. Those models can be used to make classifications or predictions about new data points. While not all AI involves ML, most of the recent interest in AI is driven by ML in some way, whether in image recognition, speech-to-text, or classifying credit risk."

This guidance therefore focuses on the data protection challenges that ML-based AI may present, while acknowledging that other kinds of AI may give rise to other data protection challenges.

Of particular interest to the ICO is the concept of explainability in AI. The guidance goes on: "In collaboration with the Alan Turing Institute we have produced guidance on how organisations can best explain their use of AI to individuals." This resulted in the "Explaining decisions made with AI" guidance, which was published in May 2020.

The guidance contains commentary about the distinction between a controller and a processor. It says organisations that determine the purposes and means of processing will be controllers "regardless of how they are described in any contract about processing services".

This could be potentially relevant to the controversy surrounding the involvement of US data analytics company Palantir in the NHS Data Store project, where it has been repeatedly stressed that the provider is merely a processor and not a controller; the controller in that contractual relationship is the NHS.

The guidance also discusses such matters as bias in data sets leading to AIs making biased decisions, and offers this advice, among other pointers: "In cases of imbalanced training data, it may be possible to balance it out by adding or removing data about under/overrepresented subsets of the population (eg adding more data points on loan applications from women)."

"In cases where the training data reflects past discrimination, you could either modify the data, change the learning process, or modify the model after training."
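The first of those options, modifying the data, can be as simple as resampling. The sketch below illustrates one way to act on the ICO's pointer by duplicating rows from an underrepresented group until all groups are the same size; the field name is hypothetical:

```python
import random

def oversample(rows, group_key):
    """Duplicate rows from underrepresented groups (chosen at random)
    until every group is as large as the biggest one."""
    by_group = {}
    for row in rows:
        by_group.setdefault(row[group_key], []).append(row)
    target = max(len(group) for group in by_group.values())
    balanced = []
    for group in by_group.values():
        balanced.extend(group)
        # Pad smaller groups with randomly repeated members.
        balanced.extend(random.choices(group, k=target - len(group)))
    return balanced
```

Oversampling by duplication is only one option; undersampling the majority group, or reweighting examples during training, are common alternatives with different trade-offs.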

Simon McDougall, deputy commissioner of regulatory innovation and technology at the ICO, said of the guidance: "Understanding how to assess compliance with data protection principles can be challenging in the context of AI. From the exacerbated, and sometimes novel, security risks that come from the use of AI systems, to the potential for discrimination and bias in the data, it is hard for technology specialists and compliance experts to navigate their way to compliant and workable AI systems."

"The guidance contains recommendations on best practice and technical measures that organisations can use to mitigate those risks caused or exacerbated by the use of this technology. It is reflective of current AI practices and is practically applicable."


Bank of England gets closer to blockchain, AI – FinanceFeeds (blog)

The latest PoCs covered: analysis of large-scale supervisory data sets; executing high-value payments across currencies and borders; identifying and applying cross-cutting legal themes from regulatory enforcement actions; and measuring performance on the Bank's internal projects portfolio.

The Bank of England has moved closer to the latest achievements in the financial technology arena, as shown by the results of the third round of Proofs of Concept (PoCs) completed by its FinTech Accelerator. In an announcement on Monday, the Bank said the latest PoCs covered four key areas of its work: analysis of large-scale supervisory data sets; executing high-value payments across currencies and borders; identifying and applying cross-cutting legal themes from regulatory enforcement actions; and measuring performance on the Bank's internal projects portfolio.

An important step was taken towards the use of artificial intelligence (AI) solutions, as the Bank has collaborated with Mindbridge Ai, a machine learning and AI firm, to explore the analytical value of using AI tools to detect anomalies in supervisory data sets. Using a sample set of anonymised reporting data, it was found that Mindbridge's user interface is intuitive, allowing the user to explore a time series of each variable while comparing the results to industry averages. This PoC allowed the Bank's internal team of data scientists to compare and contrast their own findings and the underlying algorithms being used, providing a complementary layer to the Bank's work.
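The Bank has not published Mindbridge's algorithms. As a hedged illustration of the general idea (flagging reporting values that deviate sharply from the rest of a series), a robust median-based check might look like this; the data and threshold are entirely made up:

```python
from statistics import median

def flag_anomalies(series, threshold=5.0):
    """Return indices of values whose deviation from the series median
    exceeds `threshold` times the median absolute deviation (MAD)."""
    med = median(series)
    mad = median(abs(x - med) for x in series)
    return [i for i, x in enumerate(series) if abs(x - med) > threshold * mad]

# A toy reporting series with one implausible spike at index 5
reported = [10.1, 10.3, 9.9, 10.0, 10.2, 25.0, 10.1]
print(flag_anomalies(reported))  # [5]
```

The MAD is used rather than the standard deviation because a single large outlier inflates the standard deviation enough to hide itself; the median-based measure stays stable.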

An interesting development was seen in the Bank's collaboration with Ripple in the area of distributed ledger technology (DLT). In this PoC, the Bank and Ripple examined how DLT could be used to model the synchronised movement of two different currencies across two different ledgers. This formed part of the Bank's wider research into the future of high-value payments.

The Bank has already concluded that DLT is not sufficiently mature to support the core RTGS system, but the exercise with Ripple has reinforced the Bank's intention to ensure its new RTGS system is compatible with DLT usage in the private sector. The Bank has also identified areas where it would like to conduct further exploratory work.

The Bank has also worked with Enforcd and this PoC has demonstrated how technology could potentially facilitate compliance and the development of best practice in some key areas of regulation.

A proof of concept with Experimentus, using their ORB tool, analyzed historic Bank of England projects and visualised how they had performed against a range of standard key performance indicators (KPIs). This PoC showed the Bank whether its existing test data were sufficient to carry out effective KPI reporting, and where further data collection might be necessary.

Speaking of blockchain and AI in the UK, these fintech areas also prevailed in the second phase of the Financial Conduct Authority's regulatory sandbox, which allows firms to test innovative products and services in a live environment while ensuring that consumers are appropriately protected. BlockEx, for example, plans to test a bond origination, private placement and lifecycle management platform based on distributed ledger technology, whereas nViso will test an online platform providing advisors and clients with behavioural assessment profiles generated by artificial intelligence and facial recognition. ZipZap is developing a cross-border money remittance platform that picks the most efficient means for a payment to reach its destination, including via digital currencies.

Go here to see the original:

Bank of England gets closer to blockchain, AI - FinanceFeeds (blog)

The Man Behind Marcel, Publicis Groupe’s New AI Platform, Expected the Skeptics – AdAge.com (blog)

Chip Register. Credit: Publicis.Sapient

Two names were uttered more than others at Cannes last week. One is Arthur Sadoun, the new Publicis Groupe CEO who unexpectedly announced that his agency holding company will skip Cannes en masse next year. The other is Marcel, an AI system whose development Publicis will fund with the savings.

But there's another player in the drama, Chip Register, co-CEO of Publicis.Sapient and the architect of Marcel.

Register is unfazed by the reaction to the announcement, which has included trolling by rival agencies on Twitter and sneering that Marcel is nothing more than an amped-up Alexa or a publicity stunt executed by a newbie CEO trying to improve the bottom line.

"I expect and expected the skeptics," said Register, who works out of Arlington, Va., but lives in New Orleans. "That is always the case whenever you've got the idea and nerve to step out like this. My only comment to them is, 'See you at VivaTech.'"

VivaTech is Publicis' annual technology conference in Paris and where the company plans to debut Marcel next year.

Here's how Register describes what Marcel, named after Publicis founder Marcel Bleustein-Blanchet, will be able to ferret out. "In a group of 80,000 people that have 200 capabilities across 130 countries, where is the best talent to work on a project once you've received an RFP or a brief?" he said. "Where is the absolute best talent in the group to work on that and how can we assemble that team and allow that team to work and collaborate virtually to bring the best ideas and values we can to a client at a moment's notice?"

Marcel is "a transformation to go from a group to a platform," he said. "A bunch of organizations to a flat leveling of capability that can be compiled in new creative ways that can solve new and creative problems for our clients. What Marcel does is create the mechanism for that to happen."

Register said Marcel will not be an enemy of creativity, but will facilitate it.

"There's been all sorts of speculators and commentators out there saying there's a trade-off between creativity and technology," Register said. "That is an absurd notion."

"The use of technology enables great creative work. It enables the connectivity of people," he added. "It enables teams to work and it enables ideas to generate, be shared globally and virtually, through the use of better insight in culture and the journey of human beings."

Publicis turned to Register due in large part to his role with Sapient before it became part of the holding company in a $3.7 billion acquisition in 2015. His expertise is building tech using Sapient's Global Distributive Delivery system, to which Publicis attributes Sapient's 32% growth rate from 2004 through 2007. Publicis.Sapient has nearly 23,000 employees, more than half of whom run that system from India and will play a large part in the development of Marcel. That's one secret of Marcel's deployment.

"We work in a very virtual way," Register said of Global Distributive Delivery. "It takes a project and it divides the requirements into the places across the whole world where the greatest talent exists to solve those problems."

Sadoun himself was in India a few weeks ago touring tech facilities operated by Register's team. It was there, insiders claim, that Sadoun's idea to ditch Cannes for Marcel was born.

Register and Publicis vigorously refute that.

"I'm not sure I can pin down a moment or an event that led to the idea," Register said. "We've been talking about how to do this for a while."

Register said he and key leaders met after Sadoun made his announcement in Cannes, adding that they "ideated 15 to 20 core competencies for the platform."

Marcel is being built internally because no one can understand the unique customization required to get the most out of its talent base other than Publicis, Register said. The company will likely work with a third-party platform in some capacity, though, to aid in the rollout.

"We buy lots of software from lots of companies that could play a role in the ultimate architecture of the products," Register said. "That's a foregone conclusion."

"But there is no off-the-shelf solution that is going to explode the value of Publicis Groupe," he added. "Fortunately, we are able to do that ourselves because we have a huge technology based enterprise that exists on a wide, global scale. There's a difference between being able to do Einstein's math and being able to split an atom; one is the ability to understand a problem and the other is ability to execute. And that is where I think we have a great shot at leading the transformation of our company."

FIVE THINGS YOU'LL BE ABLE TO ASK MARCEL

1. "Marcel, who is the CMO of Tesla and is anyone in the network connected to him or her? Please also check LinkedIn relationships."

2. "Marcel, do we have any Mumbai-based full-time, temporary, or contract employees with 5 to 7 years Java angular development experience?"

3. "Marcel, can you show me examples of great creative work we have done for luxury apparel clients?"

4. "Marcel, who won awards for creativity from our LA office?"

5. "Marcel, can you help me find a creative director in Chicago with healthcare experience?"

~ ~ ~ CORRECTION: An earlier version of this article misidentified the Publicis.Sapient system that will help develop Marcel. It is Global Distributive Delivery, not Global Distribution Delivery. The article also said Publicis.Sapient has 12,000 employees; the correct figure is 23,000.

Continue reading here:

The Man Behind Marcel, Publicis Groupe's New AI Platform, Expected the Skeptics - AdAge.com (blog)

Workfit raises $5.5 million seed round to be your AI meeting … – TechCrunch

Conversational AI is pushing deeper into enterprise with Workfit, a new startup promising to make conference call follow-ups and mid-meeting CRM updates as easy as playing a song or checking the weather on Google Home or Amazon Echo. Battery Ventures, Greycroft Partners, Salesforce Ventures and a number of angels joined together to finance a $5.5 million seed investment in the startup.

Workfit's announcement is underscored by a general uptick in activity around conversational AI for the enterprise. Amazon's Alexa, perhaps the most enterprise-friendly of the popular conversational tools available today, boasts integrations with companies like Hipchat and Sisense for both team collaboration and data recall. Of course, the reality is that most meetings are not limited to a single conference room, and even fewer have an Echo listening in.

Workfit's assistant Eva listens in to business meetings and lends managers a helping hand by highlighting important action items. Aside from coordinating follow-ups, Eva will plug into your CRM du jour to allow for voice-driven updates. This means that you can update the status of a given sale and even pull or update data entries. Workfit integrates with major meeting hosting players like BlueJeans, WebEx and Zoom.

The Workfit platform

Other tools, like the recently launched Chorus.ai and Cogito, are respectively more squarely focused on using AI to boost sales and improve the effectiveness of customer support. Though Chorus.ai will join conference calls with roughly the same mechanism as Workfit, and both will highlight key action items, Workfit has no problem playing an active part in meetings.

In lieu of fading into the background, the team behind Workfit wants enterprise users to lend the assistant a hand by explicitly calling out follow-ups. The workflow might sound unpolished, but the company argues that summarizing key points during a meeting is a best practice, regardless of the presence of an AI, to ensure all human participants are on the same page. Eva will do as much in the background as she can, but has no problem being recognized, in the spirit of X.ai's personal scheduling assistant Amy.

"With consumer AI, if you play the wrong song, that's OK," asserts Workfit CEO Omar Tawakol. "But wrong info in a pipeline on Salesforce, that's not OK."

Workfit team left to right: Geish, David, Ahmad and Omar

This is the logic behind why the startup is focusing so heavily on Automatic Speech Recognition (ASR), even bringing on Ahmad Abdulkader, a former leader within Facebook's applied AI group. Though working on ASR in 2017 isn't necessarily sexy, there are gains to be made by building a system for a specific use case: in this situation, a lot of meeting language.

When asked about sales, the Workfit team told TechCrunch that they hadn't secured any sales yet and that they still considered it early days. Tawakol's previous experience driving data management platform BlueKai to a $400 million Oracle acquisition in 2014 undoubtedly helped streamline the fundraising process. Battery Ventures was an early investor in BlueKai.

From here the team wants individual project managers to lead the charge in driving adoption. Once Workfit sees that a group of users is developing at a given company, it will jump in to try to close an enterprise sale.

Read the original:

Workfit raises $5.5 million seed round to be your AI meeting ... - TechCrunch

Mark Cuban Agrees With Elon Musk, Says AI is "Changing Everything" – Inverse

Tesla and SpaceX CEO Elon Musk may sound like an alarmist when he talks about the threat that artificial intelligence (A.I.) poses to humans, but he's got an ally in billionaire Mark Cuban, owner of the Dallas Mavericks basketball team and a lead investor on Shark Tank. On Sunday at New York City's OZY FEST, Cuban used similarly drastic language about the advancements in machine learning that are coming, and even happening now; he said that A.I. is already "changing everything."

Cuban described our current moment as a transitional period in which the stage is being set for aggressive technological development that will happen more quickly than either the rise of the internet or of smartphones and will cause serious disruption.

"There's going to be a lot of unemployed people replaced with technology, and if we don't start dealing with that now, we're going to have some real problems," Cuban said. He continued, "A lot of jobs that were very repetitive are going to get replaced by neural networks."

Cuban doesn't think that all of the changes will be uniformly bad, though: He also noted that the transition is going to change how people approach problem-solving, and that "companies are going to have to adjust, to learn how to acquire data and use that data in ways they never have before."

His opinions are not unlike those of Elon Musk, who called A.I. "a fundamental risk to the existence of human civilization" at the National Governors Association meeting this month. Musk advocated for governmental regulation of A.I., saying that a forthcoming massive replacement of human employees is the biggest threat A.I. poses to global society.

Cuban thinks that these changes are already occurring. "Without question, machine learning, computer vision and neural networks are changing everything," Cuban said. "However much change you saw over the past ten years with the Apple iPhone, that's nothing."

Cuban has sounded the alarm in the past, too. In February, he said that any person who doesn't learn about A.I. "is going to be a dinosaur within 3 years," and again cautioned that major job losses will soon occur.

See the original post here:

Mark Cuban Agrees With Elon Musk, Says AI is "Changing Everything" - Inverse

CoVID-19 and the use of robots – AI Daily

Back to the OG video. Scientists at the University of Liverpool have unveiled a robotic colleague that has been working non-stop in their lab throughout lockdown. The £100,000 programmable researcher learns from its results to refine its experiments. "It can work autonomously, so I can run experiments from home," explained Benjamin Burger, a PhD student at the university and one of the robot's developers. Dr Burger jokingly added, "It doesn't get bored, doesn't get tired, works around the clock and doesn't need holidays." Such technology could make scientific discovery "a thousand-fold faster", scientists say. A new report by the Royal Society of Chemistry lays out a "post-COVID national research strategy", with robotics, AI and advanced computing among a set of technologies that "must be urgently embraced" to help socially distanced scientists continue their search for solutions to global challenges.

Future science historians will mark the start of the 21st century as a time when robots took their place beside human scientists. Programmers have turned computers from extraordinarily powerful but fundamentally dumb tools into tools with smarts. Artificially intelligent programs make sense of data so complex that it defies human analysis. They even come up with hypotheses, the testable questions that drive science, on their own.

For better or worse, robots are about to replace many humans in their jobs, analysts say; the coronavirus outbreak is just speeding up the process. "People usually say they need a human element to their interactions, but Covid-19 has changed that," says Martin Ford, a futurist who has written about the ways robots will be integrated into the economy in the coming decades. "[Covid-19] is going to change consumer preference and really open up new opportunities for automation." Companies large and small are expanding how they use robots to increase social distancing and reduce the number of staff that need to physically come to work. Robots are also being used to perform roles workers cannot do at home. Walmart is using robots to scrub its floors, and fast-food chains like McDonald's have been testing robots as cooks and servers, roles where the health concern is highest. After all this, it is evident that the majority of the jobs available to the general public are temporary, insecure, and badly paid. Moreover, with more robots in the workplace, there is a risk of an unjust, unfair and unacceptable distribution of income. Driven by health concerns alone, the use of robots has increased exponentially. This is the version of the future that haunts AI experts.

While automation is likely to foster overall economic prosperity, it comes at the price of increasing inequality. The COVID-19 pandemic is reinforcing both the trend towards automation and its effects. The main challenge here is to ensure that as many people as possible benefit from the positive economic and social effects of automation, to prevent a situation in which a substantial part of society is disconnected from the gains brought by technological progress. There are still many things that robots will never be able to do better than humans, and there are still more that they will not be able to do as cheaply. We are yet to discover the full range of these things, but we can already identify the key limitations of what robots and AI can do.

First, there appears to be a quality of human intelligence that, for all its wonders, AI cannot match: its ability to handle the uncertain, the fuzzy, and the logically ambiguous.

Second, due to the innate nature of human intelligence, people are extremely flexible in being able to perform umpteen possible tasks, including those that were not foreseen at first.

Third, humans are social creatures rather than isolated individuals. Humans want to deal with other humans. Robots will never be better than humans at being human, and so I conclude that there is no such risk in the near post-pandemic future.

References:

1. https://www.bbc.com/news/science-environment-53029854

2. https://www.bbc.com/news/technology-52340651

3. https://voxeu.org/article/covid-19-and-macroeconomic-effects-automation

4. Roger Bootle, The AI Economy: Work, Wealth and Welfare in the Robot Age, Nicholas Brealey Publishing, Sept. 2019

Thumbnail credit: shutterstock.com

Continue reading here:

CoVID-19 and the use of robots - AI Daily

Remdesivir’s controversial cost, early vaccine data, and AI at the end of life – STAT

What's a fair price for remdesivir? How do we know whether vaccines work? And does AI have a place in end-of-life care?

We discuss all that and more this week on The Readout LOUD, STAT's biotech podcast. First, we dig into the long-awaited price for Gilead Sciences' Covid-19 treatment and break down the disparate reactions from lawmakers, activists, and Wall Street analysts. Then, STAT's Matthew Herper joins us to discuss some of the first detailed data on a potential vaccine for the novel coronavirus. Finally, we talk about a new use for AI: nudging clinicians to broach delicate conversations with patients about their end-of-life goals and wishes.

For more on what we cover, here's the remdesivir news; here's more on the vaccine data; here's the story on AI; and here's the latest in STAT's coronavirus coverage.


We'll be back next Thursday evening, and every Thursday evening, so be sure to sign up on Apple Podcasts, Stitcher, Google Play, or wherever you get your podcasts.

And if you have any feedback for us (topics to cover, guests to invite, vocal tics to cease) you can email readoutloud@statnews.com.


Interested in sponsoring a future episode of The Readout LOUD? Email us at marketing@statnews.com.

Read more from the original source:

Remdesivir's controversial cost, early vaccine data, and AI at the end of life - STAT

‘Liu Xiaobo should be a free man’: Ai Weiwei joins calls to release dying dissident – The Guardian

Ai Weiwei accused western governments of failing to speak up for activists such as Liu for fear of damaging economic ties with Beijing. Photograph: Matej Divizna/Getty Images

The Chinese artist Ai Weiwei has added his voice to growing calls for China to release its most famous political prisoner, the critically ill Nobel Peace Prize winner Liu Xiaobo.

Speaking for the first time about the plight of his longtime friend, who was recently diagnosed with late-stage liver cancer while serving an 11-year jail term, Ai urged Beijing to immediately free Liu, who was jailed in 2009 for his role in a pro-democracy manifesto called Charter 08.

"I think the government should release him. This is a historic mistake," Ai told the Guardian from Berlin, where he now lives.

"The government should just release him and have a better record, because this is going to be remembered by the whole world, what they are doing."

"They [must] admit that this was a horrible mistake to sacrifice the best people in this nation, the best minds in this nation, and to put them in such a horrible situation. That is what they continue to do now and it is unacceptable."

Ai was speaking as Chinese president Xi Jinping came under intense pressure to let Liu, who doctors have said is close to death, leave the country for medical treatment.

Beijing has so far rebuffed calls from countries including the United States for the 61-year-old activist to be allowed to travel overseas, accusing them of meddling in its internal affairs.

Chinese doctors at the hospital where Liu has been receiving treatment since being granted medical parole several weeks ago claimed he was too unwell to be moved. However, that claim was contradicted on Sunday by two foreign specialists who were allowed to visit Liu in hospital, where he is reportedly under police guard.

"While a degree of risk always exists in the movement of any patient, both physicians believe Liu can be safely transported with appropriate medical evacuation care and support," the German and US doctors said in a statement, adding that their hospitals were ready to offer Liu the best care possible.

That announcement sparked an immediate outcry, as friends, supporters and activists demanded Liu's complete release.

Jared Genser, a US lawyer acting for Liu, said that if Xi refused to let the dying dissident seek potentially life-extending medical treatment abroad, he would be viewed as having deliberately cut short a man's life.

"My view is that if Xi doesn't do that then it will be viewed publicly as an extraordinarily callous and weak position for China to put itself in," Genser, who is known for his work with prisoners of conscience, including Aung San Suu Kyi, told the Guardian.

"China can show its strength to the world and its security in its own governance by not being afraid of one man who has dared repeatedly to stand up to the one-party system," he added.

"I hope and pray that we can succeed. We don't have a lot of time, quite clearly, so it is going to be a very difficult challenge. But I hope that President Xi will see that it's in China's interests to not be viewed as not only silencing a man but wilfully and intentionally shortening his life."

In a statement, a group of Chinese supporters said the doctors' statement conclusively rebutted the official propaganda that it is unsafe for Liu Xiaobo to be transported overseas.

"Liu Xiaobo's life is now in imminent danger ... Any delay and obstruction are no different from a drawn-out murder," it said.

Amnesty International also appealed to Xi for Liu's release. "There is only one person in China that has the authority to rule on the fate of Liu Xiaobo and that is Xi Jinping," said Nicholas Bequelin, Amnesty International's East Asia director.

Ai, who has known Liu since the 1980s, said his friend, who Beijing claims tried to overthrow the Communist party, was unjustly imprisoned and now deserved the right to decide where he wanted to spend his final days.

"He absolutely should be a free man, and a free man should have a choice to make all the choices by himself," he said.

"He should not have been sentenced. He should be completely out of jail, released without any conditions. He should be a free man, then he should make a free judgment about where to stay and where to get medical care, and who he wants to be associated with."

Ai also attacked what he described as the hypocrisy of western governments, which he accused of feigning concern for activists such as Liu while failing to speak up for them for fear of damaging economic ties with Beijing. "To me, it is disgusting. For any Chinese [who] looks at that, I mean, my God, just for the money," he said.

"There are so many people, lawyers, or human rights defenders or activists, in jail, and many of them in secret detention without trial for years, and they are all being mistreated," Ai said.

The artist accused western politicians of caring only about striking lucrative deals with Chinas authoritarian rulers.

"Each of those deals sacrifices someone like Xiaobo. So don't pretend, when Liu Xiaobo is dying, or Liu Xiaobo [is in] such difficult circumstances, don't pretend anybody is innocent."

Visit link:

'Liu Xiaobo should be a free man': Ai Weiwei joins calls to release dying dissident - The Guardian

Why AI fell short in slowing the spread of COVID-19 – Healthcare IT News

This spring, much of the healthcare industry hoped that artificial intelligence could be a key tool in stemming the spread of the COVID-19 pandemic across the world.

But the results weren't just underwhelming. In some cases, they were "anti-constructive," said Dr. Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School, during a FutureMed presentation on Thursday.

"We in healthcare were shooting for the moon, but we hadn't gotten out of our own backyard," said Kohane.


In the United States, there were several attempts to use aggregate data from electronic health records. Kohane used Epic as an example, pointing to its system to predict severity of disease based on admissions.

"It didn't perform very well at all," said Kohane.

According to Kohane, a lack of high-quality data contributed to the shortfall.

"Most of the data that was being shared for the first three months was literally just case counts and death counts," he said. "To the extent that there was sharing of clinical courses, it was from single institutions," rather than interstate efforts.

"We did not have a real collective intelligence," he said.

But hope isn't completely lost for AI's role in addressing the pandemic. Kohane noted that companies are using it to develop vaccines specifically, using large databases of protein interactions and docking simulations to figure out the best protein domain to block.

In December or sooner, he said, we'll see "the results of Phase 2 trials from purely machine-learned trials."

U.S. Food and Drug Administration Principal Deputy Commissioner Amy Abernethy said during the presentation that AI might be used to help sort through the available drugs and to help get data sets cleaned up "to better understand how drugs are performing."

Meanwhile, Eran Segal, a professor in the computer science department at the Weizmann Institute of Science, pointed to the use of AI in conjunction with surveys to help predict, based on reported symptoms, which individuals should be tested.
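The survey-based triage Segal describes boils down to scoring self-reported symptoms. As a toy sketch only (the symptom names, weights and threshold below are invented and bear no relation to the Weizmann team's published models):

```python
# Hypothetical symptom weights for illustration; a real model would be
# fit to survey and test-result data rather than hand-written.
WEIGHTS = {"fever": 2.0, "cough": 1.0, "loss_of_taste": 3.0, "fatigue": 0.5}

def should_test(symptoms, threshold=3.0):
    """Score self-reported symptoms and flag the respondent for testing
    when the total weight crosses the threshold."""
    return sum(WEIGHTS.get(s, 0.0) for s in symptoms) >= threshold

print(should_test(["fever", "loss_of_taste"]))  # True
print(should_test(["fatigue", "cough"]))        # False
```

The appeal of this approach during a test shortage is that it costs nothing per respondent: surveys rank who is most worth one of the scarce tests.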

Ultimately, said Dr. Karen DeSalvo, chief health officer at Google Health and former National Coordinator for Health IT, those building AI tools to confront the pandemic must not replicate existing biases in medicine, a possibility that is a continued concern among many developers.

"There's a really important challenge to look at fairness: to make sure that whatever we are building is not going to exacerbate inequities in health outcomes," said DeSalvo.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Healthcare IT News is a HIMSS Media publication.

See the article here:

Why AI fell short in slowing the spread of COVID-19 - Healthcare IT News

AI is still pretty dumb and like a 2-year-old – MedCity News


He described IBM Watson Health as "some of the best computer science on the planet" but noted that AI is heavily dependent on mammoth amounts of data. Here's how Ross captures the limitations of AI, adding that his view of the technology may result in ...

Go here to read the rest:

AI is still pretty dumb and like a 2-year-old - MedCity News