Artificial Intelligence Cold War on the horizon – POLITICO

While the U.S. has lacked a central organizing effort for its AI, it has an advantage in its flexible tech industry, said Nand Mulchandani, the acting director of the U.S. Department of Defense Joint Artificial Intelligence Center. Mulchandani is skeptical of China's efforts at civil-military fusion, saying that governments are rarely able to direct early-stage technology development.

Tensions over how to accelerate AI are driven by the prospect of a tech cold war between the U.S. and China, amid improving Chinese innovation and access to both capital and top foreign researchers. "They've learned by studying our playbook," said Elsa B. Kania of the Center for a New American Security.

"Many commentators in Washington and Beijing have accepted the fact that we are in a new type of Cold War," said Ulrik Vestergaard Knudsen, deputy secretary general of the Organization for Economic Cooperation and Development (OECD), which is leading efforts to develop global AI cooperation. But he argued that we should not abandon hope of joining forces globally. Leading democracies want to keep the door open: Ami Appelbaum, chairman of Israel's innovation authority, said "we have to work globally and we have to work jointly. I wish also the Chinese and the Russians would join us." Eric Schmidt said coalitions and cooperation would be needed, but to beat China rather than to include them. "China is simply too big," he said. "There are too many smart people for us to do this on our own."

The invasive nature and the scale of many AI technologies mean that companies could be hindered in growing civilian markets, and the public could be skeptical of national security efforts, in the absence of clear frameworks for protecting privacy and other rights at home and abroad.

A Global Partnership on AI (GPAI), started by leaders of the Group of Seven (G7) countries and now managed by the OECD, has grown to include 13 countries including India. The U.S. is coordinating an AI Partnership for Defense, also among 13 democracies, while the OECD published a set of AI Principles in 2019 supported by 43 governments.

Knudsen said that it is important for global cooperation on AI to move cautiously. Multilateralism and international cooperation are under strain, he said, making a global agreement on AI ethics difficult. "But if you start with soft law, if you start with principles and let civil society and academics join the discussion, it is actually possible to reach consensus," he said.

Data and cultural dividing lines

Major divisions exist over how to handle data generated by AI processes. "In Europe, we say that it's the individual that owns the data. In China, it's the state or the party. And then there's a divide in the rest of the world," said Knudsen. There is a right to privacy that accrues to everyone, according to Courtney Bowman, director of privacy and civil liberties engineering at data-mining and surveillance company Palantir Technologies. "But we have to recognize that privacy does have a cultural dimension. There are different flavors," he said.

Most experts agree there is scope to regulate how data is used in AI. Palantir's Bowman says that AI success isn't about unhindered access to the biggest datasets. "To build competent, capable AI it's not just a matter of pure data accumulation, of volume. It comes down to responsible practices that actually align very closely with good data science," he said.

"The countries that get the best data sets will develop the best AI: no doubt about it," said Nand Mulchandani. But he said that partnerships are the way to get that data. Global partnerships are so incredibly important, he argued, because they give access to global data, which in aggregate is better than even a huge dataset from within a single country such as China.

How can government boost AI?

Rep. Cathy McMorris Rodgers (R-Wash.), a leading Republican voice on technology issues, wants the U.S. government to create a foundation for trust in domestic AI via measures such as a national privacy standard. "We need to be putting some protections in place that are pro-consumer so that there will be trust in pro-American technology," she said.

U.S. Rep. Pramila Jayapal (D-Wash.) wants both government regulation and private sector standards while AI technologies, particularly facial recognition, are still young. "The thing about technology is, once it's out of the bottle, it's out of the bottle," she said. "You can't really bring back the rights of [Michigan resident Robert Williams, who was arrested based on a faulty ID by facial recognition software], or the rights of Uighurs in China, who are bearing the brunt of this discriminatory use of facial recognition technology." Some experts argue that while regulation is needed, it must be sector-specific, because AI is not a single concept but a family of technologies, each requiring a different regulatory approach.

Government has a role in making data widely available for the development of AI, so that smaller companies have a fair opportunity to research and innovate, said Charles Romine, Director of the Information Technology Laboratory (ITL) within the National Institute of Standards and Technology (NIST).

On the question of government AI funding, Elsa Kania said that it's not possible to make direct comparisons between U.S. and Chinese government investments. The U.S. has more venture capital, for example, while eye-popping investment figures from China's central government don't mean an awful lot if they aren't matched by investments in talent and education, she said. "We shouldn't be trying to match China dollar-for-dollar if we can be investing smarter."

AI may take your job – in 120 years – BBC News


In 45 years' time, though, half of jobs currently filled by humans will have been taken over by an artificial intelligence system, results indicate. The report, "When will AI exceed human performance?", says AI will reshape transport, health, science and ...

Adobe CEO: Microsoft partnership will automate sales, marketing with AI – CNBC

For example, Adobe's Experience Cloud, which helps brands manage customer interactions and advertising, processes 100 trillion transactions every year.

Narayen said the data gathered from those transactions will in turn feed into Adobe Sensei, which will do things like transform paper documents into editable digital files, create predictive models, and change expressions in photographs with a few clicks.

"It's a way to really bring creativity to the masses. And it's a way to enable everybody to be a creator," Narayen said. "We partner with great companies like Nvidia who are able to process this in real time, but it's all the magic that's created by our product folks."

All this ties in to what Narayen dubbed Adobe's two tailwinds that helped the software giant deliver better-than-expected earnings on Tuesday: individual creativity and a changing business landscape.

"People want to create and businesses want to transform, and we are mission-critical to both of them. We are driving tremendous innovation and executing," Narayen said.

And whether that execution is proven by 49 percent growth in Adobe's Premiere Pro video editing platform or an 86 percent jump in recurring revenues, Narayen said knowing what creators want is the key to Adobe's success.

"I think using the right lens and unleashing innovation on our product development, that's how we do it," the CEO said. "If you're a creative professional, we're just as mission-critical as a Bloomberg terminal might be for somebody in the financial community. And on the enterprise side, when small and medium businesses want to create an online digital presence, and they want to have commerce as part of their future, they use us to enable themselves to have this online presence."

When Cramer asked whether Narayen communicated these sentiments to President Donald Trump at Monday's technology council meeting at the White House, the CEO responded diplomatically.

"Design and aesthetics have never been more important, and I think as it relates to modernizing government, all businesses are transforming so that the customer experience is front and center. There's no reason why the government shouldn't do exactly the same," Narayen said.

The Adobe chief added that when it came to the meeting's central topics, modernizing the government and enhancing the skills of the U.S. workforce, he emphasized STEAM over STEM, the well-known acronym for the sciences, technology, engineering and mathematics, adding arts to the mix as an equally important skill set to master.

With regard to job creation, Narayen issued somewhat of a warning to the country's leaders, urging them to remain focused on the matter.

"If you're not careful, I think it impacts the competitiveness of our country vis--vis some of these other countries," the CEO said.

Greater Acceptance of AI Has Resulted in Lower Satisfaction Levels – The Financial Brand

The COVID-19 crisis has accelerated the use of digital technologies and has increased the application of artificial intelligence (AI) into all aspects of the consumer experience. As the pandemic continues to impact the way consumers interact with financial institutions and with each other, the demand for contactless or non-touch interfaces, such as chatbots, increases. This has forced organizations to find new ways to integrate advanced intelligence into the entire customer journey.

According to an Economist Intelligence Unit survey from March and April of 2020, 77% of bank executives believed that the ability to extract value from AI will sort the winners from the losers in banking. AI platforms were the second highest priority area of technology investment, behind only cybersecurity, according to the survey. The importance of AI adoption is only likely to increase in the post-pandemic era.

Unfortunately, the increased focus on the potential and use of AI has not been reflected in higher levels of satisfaction. Instead, satisfaction levels with AI have actually decreased since 2018.

According to a Capgemini study conducted in April and May of this year, more than half of consumers (54%) have daily AI-enabled interactions with organizations, including chatbots, digital assistants, facial recognition or biometric scanners. This was a significant increase over 2018 (21%). Even after lockdowns are lifted, consumers say they will still be looking to make increased use of touchless interfaces, including voice interfaces, facial recognition, or apps.

From a sector perspective, automotive (64%) and public sector (62%) were strong performers, followed by banking and insurance (51%). According to the research, close to half (45%) of consumers prefer voice interfaces when engaging with organizations followed by 30% who prefer chat interfaces and 15% who prefer AI systems built in websites/apps. But the choice of AI interactions varies during different stages of the customer journey.

The Capgemini research found that 41% of consumers prefer AI-only interactions for researching and browsing, up from 25% in 2018. As a consumer moves forward in their journey, AI is preferred less, with more humanized experiences gaining favor. Part of the reason for the drop in AI preference is caused by the drop in trust in AI later in the journey.

Without trust, the acceptance of artificial intelligence by consumers will lag. The good news is that trust in AI interactions is increasing overall. In fact, according to the Capgemini research, more than two-thirds of consumers (67%) trust personalized recommendations. In addition, the share of consumers who do not trust machines with the security and privacy of their personal data has dropped to 36%, down from 49% in 2018.

Part of the improvement in trust can be attributed to enhanced regulations, such as GDPR. In addition, trust has been positively impacted by an improvement in fairness and transparency by organizations. For instance, in 2018, only 13% of organizations informed consumers about the presence of AI, compared to 66% in 2020.

In many research studies conducted around AI and improved customer experiences, including the research done by the Digital Banking Report in 2019 and also in 2018, consumers indicated that they wanted AI to display human-like capabilities, including a human-like voice, personality or understanding. If interactions were more human-like, consumers stated they would be more likely to use these AI applications and have greater trust in the company.

While 64% of consumers believed that their AI interactions are more human-like (compared to 48% in 2018), the bar for satisfaction has gone up as well, indicating that consumers are increasing their expectations from AI engagements.

According to Capgemini, four actions are required to improve the humanization of AI experiences.

Despite higher levels of trust and humanizing capabilities, Capgemini found that customer satisfaction with AI interactions has actually decreased in the past two years. According to their research, 57% of consumers were satisfied with AI interactions in 2020, compared to 69% in 2018.

Most of this shortfall can be explained by an increase in consumer expectations as they become acutely aware of the potential of AI across all industry sectors. Some consumers mentioned the lack of a wow factor that was expected. In several instances, consumers did not feel there was a tangible benefit from AI.

While banking and insurance performed better than the average of all sectors, the two financial services industries, looked at together, also fell from higher levels in 2018. Only 36% of consumers believed that AI reduced effort, with the same percentage believing that AI provided faster resolution of support issues. And, while banking and insurance did better than any other industry in the areas of privacy and security, other benefits were lacking.

To move to the next level of AI deployment, consumers must realize tangible benefits beyond expectations. This equates to moving beyond the basics of privacy and security (table stakes) to value propositions that include predictive solutions that save money, time and effort. These solutions also must be scalable to be meaningful to the consumer and the financial institution.

To deliver an AI experience that delights customers beyond their expectations, interactions must be humanized at the appropriate stage of the journey and contextualized for each consumer and interaction. It will not be easy to rise above consumers' increasing expectations, but the outcomes will increase engagement, trust, loyalty and relationship value.

Andrew Ng will help you change the world with AI if you know calculus and Python – Quartz

If the next era of human progress is built using AI, who gets to engineer it? Who will have the coding skills to use the software for creating AI products, or even more importantly, the skills to write that software?

In an attempt to make the answer to those questions "anyone who wants to," Andrew Ng is releasing a new set of courses teaching deep learning on Coursera, the online learning platform he co-founded in 2012. Coursera was originally set up to offer an online class in machine learning; deep learning is a variety of machine learning that uses many-layered neural networks and exceptionally large datasets. The original machine learning course attracted more than 2 million students, Ng tells MIT Tech Review.

Ng, who rose to prominence as a Google Brain founder, Baidu chief scientist, and Stanford professor, claims that teaching people how to build AI using deep learning is the most effective way to build an AI-powered society. "Just as every new [computer science] graduate now knows how to use the Cloud, every programmer in the future must know how to use AI," Ng wrote on Medium. "There are millions of ways Deep Learning can be used to improve human life, so society needs millions of you from all around the world to build great AI systems."

Of course, this isn't the only way to study deep learning. There's traditional academia, and other companies like Google have posted free online courses on competing online learning sites.

While Ng's new course makes it easier to learn deep learning than striking out on your own watching YouTube videos, Ng acknowledges that not everyone is equipped to take it. The course requires a working knowledge of calculus and Python, a popular coding language. The course description says "no experience necessary," but by the second week students are expected to submit code that expresses complex equations and reorders data for use in deep learning.
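
As a rough illustration of the level involved, and not material from the course itself, the kind of exercise described above might look like the following Python/NumPy sketch: a vectorised sigmoid activation, a batch of images reshaped into the column-matrix layout a simple network expects, and one linear unit applied to it (all array sizes are invented for the example).

```python
import numpy as np

def sigmoid(z):
    # Vectorised logistic function: applies elementwise to arrays of any shape.
    return 1.0 / (1.0 + np.exp(-z))

# A toy batch of 10 RGB "images", 64x64 pixels each.
images = np.random.rand(10, 64, 64, 3)

# Reorder the data for a simple fully connected layer:
# flatten each image into a column, giving a (64*64*3, 10) matrix.
X = images.reshape(images.shape[0], -1).T

# One linear unit followed by the sigmoid activation.
W = np.random.randn(1, X.shape[0]) * 0.01
b = 0.0
A = sigmoid(W @ X + b)

print(X.shape, A.shape)  # (12288, 10) (1, 10)
```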

Investment in AI has boomed in the last few years, with somewhere between $26 billion and $39 billion poured into the field in 2016 alone, according to a McKinsey report.

"I don't think every person on the planet needs to know deep learning. But if AI is the new electricity, look at the number of electrical engineers and electricians there are. There's a huge workforce that needs to be built up for society to figure out how to do all of the wonderful stuff around us today," Ng told MIT Tech Review.

DataRobot Becomes A Unicorn By Selling AI Toolkits To Harried Data Scientists – Forbes

"We lived and breathed data science," DataRobot CEO Jeremy Achin says of himself and his cofounder Tom de Godoy. "And we asked ourselves, 'How would we automate our jobs?'"

DataRobot wants to make machine learning so simple that a business analyst with basic training can run predictive models without breaking a sweat.

The Boston-based startup just raised a $206 million Series E funding round led by Sapphire Ventures to expand the business, which sells software that helps companies across industries develop and deploy in-house AI models. The billion-dollar valuation makes it the highest-ranking of the picks-and-shovels startups featured on Forbes' inaugural AI 50 list (meaning the companies that provide tools to help their customers develop their own AI).

As companies of all sizes become eager to apply machine learning, which allows software to identify patterns and make predictions without needing explicit programming, to their business problems, a host of startups have emerged that promise to make the process easier and faster. The infrastructure startups featured on the list (DataRobot, Domino Data Labs, Scale AI, DefinedCrowd, Noodle.ai and Algorithmia) have raised about $735 million in cumulative venture capital and represent only a portion of the larger space.

DataRobot tries to automate as much of the traditional job of data scientists as possible. The idea is that customers come to the service with data and a business question, and the DataRobot system will turn around accurate models for a given task. Laptop maker Lenovo, for example, tapped DataRobot to estimate retail demand in Brazil, while United Airlines wanted to predict which passengers might gate-check bags. The Philadelphia 76ers, meanwhile, used its system to improve its modeling process for season-ticket renewals.

"You don't need all these different personas (data engineers, data scientists, application developers, et cetera); a business analyst can do the whole thing themselves," strategy exec Igor Taber says. "DataRobot abstracts the underlying complexity, so we can shrink the time to production and to seeing value from what could be years into weeks."

Taber discovered DataRobot as an investor at Intel Capital, which participated in the company's Series B round. After spending a few years on its board, he says, its explosive growth and CEO Jeremy Achin's leadership convinced him to "put my personal money where my mouth was" and join DataRobot full-time at the beginning of 2019.

Achin has been running the company since 2012, when he and cofounder Tom de Godoy quit their research and data modeling jobs at Travelers because they believed that the supply of people versed in data science wouldn't catch up to the demand of the following decades.

The company currently has hundreds of enterprise customers in industries like finance, healthcare, sports, retail, marketing and agriculture.

Customers have built more than 1.3 billion models through the platform. The funding round will be used to continue developing software and to target potential acquisitions. DataRobot has bought three smaller machine-learning startups in the past few years, including a machine-learning governance company called ParallelM in June, which spurred its latest release: a product that monitors a companys models for inconsistencies or biases.

"We're automating more and more of the process," Achin says. "We have so many ideas for different markets and different products; it feels like there's an infinite road map ahead."

Even with $431 million in funding so far, Achin says that another fundraise wouldn't be out of the question, particularly as more competitors launch their automated machine-learning products.

"We were early, but others have started chasing us," he says. "There's still an opportunity to own the AI market, and I think we're always going to be hungry for more capital."

Zebra Medical Vision collaborating with TELUS Ventures to advance AI-based preventative care in Canada – GlobeNewswire

KIBBUTZ SHEFAYIM, Israel and VANCOUVER, British Columbia, July 09, 2020 (GLOBE NEWSWIRE) -- Zebra Medical Vision (https://www.zebra-med.com/), the deep-learning medical imaging analytics company, announced today it has entered a strategic collaboration with TELUS Ventures, one of Canada's most active Corporate Venture Capital (CVC) funds. This collaboration includes an investment that will grow Zebra-Med's presence in North America and enable the company to expand its artificial intelligence (AI) solutions to new modalities and clinical care settings.

With five FDA clearances and Health Canada approvals, Zebra-Med's technology provides a fully automated analysis of images generated in the imaging system using clinically proven AI solutions trained on hundreds of millions of patient scans to identify acute medical findings and chronic diseases. Recently Zebra-Med joined the global battle against the Coronavirus pandemic, with its AI solution for COVID-19 detection and disease progression tracking.

"This collaboration will help catalyze Zebra-Med's expansion into Canada's healthcare ecosystem," said Ohad Arazi, CEO at Zebra Medical Vision. "Zebra-Med is deeply committed to enhancing care through the use of machine learning and artificial intelligence. We have already impacted millions of lives globally, and we're honoured to launch this significant collaboration with TELUS Ventures, driving better care for Canadians."

TELUS Ventures' focus has been on building a strong portfolio of investments to support TELUS Health's growth in the health technology market, including digital solutions for preventive care and patient self-management. This strategy goes hand-in-hand with Zebra-Med's population health solutions. Screening for various conditions helps Zebra-Med and the medical team to identify missed care opportunities and incidental findings. Zebra-Med is the first AI start-up in medical imaging to receive FDA clearance for a population health solution, leveraging AI to stratify risk, improve patients' quality of life, and reduce the cost of care.

"Supporting TELUS' leadership in digital health solutions in Canada, we continue to invest in the growth of the health IT ecosystem by supporting the delivery of new technologies, like those being developed by Zebra Medical Vision, that aim to improve health outcomes for Canadians," said Rich Osborn, Managing Partner, TELUS Ventures. "We are pleased to join a great roster of recent investors and complement our existing portfolio through this collaboration with a known leader in AI innovation supporting clinical efficacy and significantly advancing the detection of conditions through machine learning-based capabilities for medical imaging."

About TELUS Ventures

As the strategic investment arm of TELUS Corporation (TSX: T, NYSE: TU), TELUS Ventures was founded in 2001 and is one of Canada's most active corporate venture capital funds. TELUS Ventures has invested in over 70 companies since inception with a focus on innovative technologies such as Health Tech, IoT, AI and Security. TELUS Ventures is an active investment partner and supports its portfolio companies through mentoring; exposure to TELUS' extensive network of business and co-investment partners; access to TELUS technologies and broadband networks; and by actively driving new solutions across the TELUS ecosystem.

For more information please visit: ventures.TELUS.com.

About Zebra Medical Vision

Zebra Medical Vision's Imaging Analytics Platform allows healthcare institutions to identify patients at risk of disease and offer improved, preventative treatment pathways to improve patient care. Zebra-Med is funded by Khosla Ventures, Marc Benioff, Intermountain Investment Fund, OurCrowd Qure, Aurum, aMoon, Nvidia, J&J, Dolby Ventures and leading AI researchers Prof. Fei-Fei Li, Prof. Amnon Shashua and Richard Socher. Zebra Medical Vision was named a Fast Company Top-5 AI and Machine Learning company. http://www.zebra-med.com

The Future of AI and CX in Today’s COVID-19 World – AiThority

The global coronavirus pandemic is dramatically changing our world, including the landscape of customer experience (CX) much faster than the marketing and media industries could have anticipated.

With people at home, brick-and-mortar businesses have to quickly adopt new digital strategies to provide their customers with what they need right now. In order to deliver on customer expectations, the best brands have strategies that continuously develop relationships through a series of thoughtful interactions, resulting in an increasingly hyper-personalized experience across the customer journey, which is usually backed by artificial intelligence (AI).

Companies who are already using AI with their CX efforts need to adjust their strategies to our world's collective new normal. Customers' experiences are underscored by anxiety, concern, stress, and confusion, and today's AI must be emotionally intelligent. With this new and ever-changing landscape in mind, the following areas are where marketing and customer experience leaders must shift their approach.

Hyper-personalization is the CX term for it, but the root value is actually empathy. Human beings want to feel known; it's about trust and comfort (especially at a time like this). Businesses can (and should) make their customers feel known and valued with digital experiences. AI makes this possible across huge swaths of customers in a digital landscape.

Personalization tactics have grown well beyond simply using someone's name or location in an email campaign.

By continuously developing a healthy mix of both profile data (name, age, preferences, etc.) and behavioral data (what the customer does at your various touchpoints), companies can send timely, personalized communication or create unique experiences that are specific and helpful to each customer.

A great example of a company collecting data to empower hyper-personalization is Spotify. The streaming music app used by millions regularly looks at data to automate song suggestions and create daily or weekly playlists. While other streaming services pair song suggestions based on your listening preferences, few are actually predicting that you will or won't like a new album (at least not with the success rate I find on Spotify).

Spotify also suggests playlists based on world events and situations that users are likely facing, humanizing the experience. For example, the company released a COVID-19 quarantine playlist for those needing some upbeat music (or meditation, study music, etc.) in their lives. Spotify's ability to deliver on that experience and then to continually nurture a relationship with their customers is based entirely on their progressive use of data and AI.

Being able to collect, decode and leverage complex data sets is essential for meeting CX demands during this quarantine period.

Since personalization is core to a dynamic CX, companies need to consider new and interesting ways to connect the data they have and to continually refine CX profiles for accuracy. Customers' data should be drawn from and influenced across the customer journey: from marketing, sales and customer retention to product management and customer support. The entire digital ecosystem of data should be a collaborative touchpoint between product development, marketing, and support.

Trust is an essential component of CX, particularly right now during this time of uncertainty. While customers are no doubt becoming more and more comfortable with the benefits of personalization, they get turned off if they think a company isn't being responsible with their data. Building AI solutions that allow users to progressively provide information in exchange for real value is paramount.

While the promise of AI around automation and personalization is exciting, the narrative a company builds around AI and CX strategies needs to align closely with customer needs and expectations. Customers want and expect hyper-personalization already; they just don't want to think about what it took for a company to get there. Given our reliance on the digital world in our new reality, it's more important than ever that companies are transparent and good stewards of customer data.

AI also needs to be able to adapt to unprecedented circumstances and override some personalization settings in case of a crisis. Specifically, CX needs to include awareness of potential news events so that customers aren't being served with distressing or inappropriate ads.

For example, takeout and delivery apps like GrubHub and Postmates have pop-up notifications about COVID-19, which also remind users about the impact this pandemic has on the entire restaurant industry (i.e., your order might take longer than usual due to staff shortages, or certain restaurants that are not open might not be accurately reflected in the app).

The old-fashioned face-to-face, human-to-human customer service experience can't be replicated across millions of online customers. But in times like this, if companies want to grow and set themselves apart from others, AI needs to be used primarily as a tool for automating and analyzing customer data collection so the CX can be relevant and emotionally aware in today's ever-changing landscape.

This marriage of AI and CX will help companies develop a strategy for leveraging hyper-personalized data to give their customers what they truly want and need.

Aisera, an AI tool to help with customer service and internal operations, exits stealth with $50M – TechCrunch

Robotic process automation, the ability to automate certain repetitive software-based tasks to free up people to focus on work that computers cannot do, has become a major growth area in the world of IT. Today, a startup called Aisera that is coming out of stealth has taken this idea and supercharged it by using artificial intelligence to help not just workers with internal tasks, but in customer-facing environments, too.

Quietly operating under the radar since 2017, Aisera has picked up a significant list of customers, including Autodesk, Ciena, Unisys and McAfee, covering a range of use cases "from computer geeks with very complicated questions through to people who didn't grow up in the computer generation," says CEO Muddu Sudhakar, the serial entrepreneur (three previous startups, Kazeon, Cetas and Caspida, were respectively acquired by EMC, VMware and Splunk) who is Aisera's co-founder.

With growth of 350% year-on-year, the company is also announcing today that it has raised $50 million to date, including most recently a $20 million Series B led by Norwest Venture Partners with Menlo Ventures, True Ventures, Khosla Ventures, First Round Capital, Ram Shriram and Maynard Webb Investments also participating.

(No valuation is being disclosed, said Sudhakar.)

The crux of the problem that Aisera has set out to solve is that, while RPA has identified that there is a degree of repetition in certain back-office tasks (which, if that work can be automated, can reduce operational costs and be more efficient for an organization), the same can be said for a wide array of IT processes that cover sales, HR, customer care and more.

There have been some efforts made to apply AI to solving different aspects of these particular use cases, but one of the issues has been that there are few solutions that sit above an organization's software stack to work across everything that the organization uses, and do so in an unsupervised way, that is, using AI to learn processes without having an army of engineers alongside the program training it.

Aisera aims to be that platform, integrating with the most popular software packages (for example in service desk apps, it integrates with Salesforce, ServiceNow, Atlassian and BMC) and providing tools to automatically resolve queries and complete tasks. Aisera is looking to add more categories as it grows: Sudhakar mentioned legal, finance and facilities management as three other areas it's planning to target.

Matt Howard, the partner at Norwest that led its investment in Aisera, said one of the other things that stands out for him about the company is that its tools work across multiple channels, including email, voice-based calls and messaging, and can operate at scale, something that can't be said, in actual fact, for a lot of AI implementations.

"I think a lot of companies have overstated when they implement machine learning. A lot of times it's actually big data and predictive analytics. We have mislabeled a lot of this," he said in an interview. "AI as a rule is hard to maintain if it's unsupervised. It can work very well in a narrow use case, but it becomes a management nightmare when handling the stress that comes with 15 million or 20 million queries." Currently Aisera said that it handles about 10 million people on its platform. With this round, Howard and Jon Callaghan of True Ventures are both joining the board.

There is always a paradox of sorts in the world of AI, and in particular as it sits around and behind processes that have previously been done by humans. It is that AI-based assistants, as they get better, run the risk of ultimately making obsolete the workers they're meant to help.

While that might be a long-term question that we will have to address as a society, for now, the reward/risk balance seems to tip more in the favour of reward for Aisera's customers. "At Ciena, we want our employees to be productive," said Craig Williams, CIO at Ciena, in a statement. "This means they shouldn't be trying to figure out how a ticketing tool works, nor should they be waiting around for a tech to fix their issues. We believe that 75 percent of all incidents can be resolved through Aisera's technology, and we believe we can apply Aisera across multiple platforms. Aisera doesn't just make great AI technology, they understand our problems and partner with us closely to achieve our mission."

And Sudhakar, similar to the founders of startups that are would-be competitors like UiPath when asked the same kind of question, doesn't feel that obsolescence is the end game, either.

"There are billions of people in call centres today," he said in an interview. "If I can automate [repetitive] functions they can focus on higher-level work, and that's what we wanted to do. Those trying to solve simple requests shouldn't. It's one example where AI can be put to good use. Help desk employees want to work and become programmers, they don't want to do mundane tasks. They want to move up in their careers, and this can help give them the roadmap to do it."

AI file extension – Open, view and convert .ai files

The .ai file extension is associated with Adobe Illustrator, the well-known vector graphics editor for the Macintosh and Windows platforms.

The AI file format is a widely used format for the exchange of 2D objects. Basic files in this format are simple to write, but files created by applications implementing the full AI specification can be quite large and complex and may be too slow to render.

Simple .ai files are easy to construct, and a program can create files that can be read by any AI reader or printed on any PostScript printer. Reading AI files is another matter entirely. Certain operations may be very difficult for a rendering application to implement or simulate. In light of this, developers often choose not to render the image from the PostScript-subset line data in the file. However, almost all of the image can usually be reconstructed using simple operations, without a full implementation of the PostScript language.

The .ai files consist of a series of ASCII lines, which may be comments, data, commands, or combinations of commands and data. This data is based on the PDF language specification, and older versions of Adobe Illustrator used a format which is a variant of Adobe's Encapsulated PostScript (EPS) format.

If EPS is a slightly limited subset of full PostScript, then the Adobe Illustrator AI format is a strictly limited, highly simplified subset of EPS. While EPS can contain virtually any PostScript command that's not on the verboten list and can include elaborate program flow logic that determines what gets printed when, an AI file is limited to a much smaller number of drawing commands and contains no programming logic at all. For all practical purposes, each unit of "code" in an AI file represents a drawing object. The program importing the AI file reads each object in sequence, start to finish, no detours, no logical side-trips.
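
Because the header of an older, EPS-variant .ai file is plain ASCII comments, it is easy to peek at without a PostScript interpreter. The Python sketch below is only an illustration of that point: the file name is hypothetical, it assumes the standard DSC comment fields such as %%Creator and %%BoundingBox, and it would not work on newer, PDF-based .ai files.

```python
def read_ai_header(path, max_lines=100):
    """Scan the leading ASCII comment lines of an EPS-style .ai file for
    common header fields such as %%Creator, %%Title and %%BoundingBox."""
    header = {}
    with open(path, "r", encoding="latin-1", errors="replace") as f:
        for _, line in zip(range(max_lines), f):
            if not line.startswith("%"):
                break  # header comments are over; drawing data begins
            for key in ("Creator", "Title", "BoundingBox"):
                prefix = f"%%{key}:"
                if line.startswith(prefix):
                    header[key.lower()] = line[len(prefix):].strip()
    return header

# Example with a hypothetical file name:
# print(read_ai_header("logo.ai"))
```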

MIME types: application/postscript

Metro Bank and Sensibill partner on AI money management | Technology & AI – FinTech Magazine – The FinTech & InsurTech Platform

UK-based Metro Bank has announced details of its collaboration with Canadian tech firm Sensibill to provide business customers with enhanced AI tools.

Specifically, new features like receipt management capabilities will be added to Metro Bank's app, providing SMBs with a simple but powerful method of capturing and storing records of their transactions.

For users the process is simple: photographs of receipts are taken with a device's in-built camera, and then AI (artificial intelligence) and ML (machine learning) software are used to auto-populate the user's transaction history, including VAT.

With UK SMBs projected to lose up to 15 cumulative days per year while trying to balance company expenditure records (two hours each week), the utility of an easy, automated solution for businesses is clear.
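
Sensibill's actual pipeline is proprietary; purely to illustrate the capture-and-extract idea described above, a minimal sketch might OCR a receipt photo and pull out the total and VAT lines with regular expressions. The file name, the patterns and the use of the pytesseract OCR library are all assumptions here, and a production system would rely on trained models rather than hand-written patterns.

```python
import re
import pytesseract
from PIL import Image

def extract_receipt_fields(image_path):
    # OCR the photographed receipt into plain text.
    text = pytesseract.image_to_string(Image.open(image_path))
    fields = {}
    # Naive extraction of monetary fields such as "TOTAL 12.50" or "VAT: 2.08".
    total = re.search(r"total\s*[:£$]?\s*(\d+\.\d{2})", text, re.IGNORECASE)
    vat = re.search(r"vat\s*[:£$]?\s*(\d+\.\d{2})", text, re.IGNORECASE)
    if total:
        fields["total"] = float(total.group(1))
    if vat:
        fields["vat"] = float(vat.group(1))
    return fields

# Example with a hypothetical image file:
# print(extract_receipt_fields("receipt.jpg"))
```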

"We're thrilled to partner with Sensibill to provide our business customers with essential money management tools, easily accessible from our mobile app. These will empower SMBs to free up time in a way that wasn't possible before, to spend running and growing their businesses," said David Thomasson, Chief Commercial Officer at Metro Bank.

"So many small businesses are facing uncertainty because of coronavirus. We want to keep delivering new tools for our customers that can make managing their money a little easier."

Helping customers build efficiency

The rollout of Sensibill comes following a successful trial period in 2019. Sensibill is dedicated to improving customer engagement and creating mutual understanding between them and the financial services institutions that serve them.

Winner of the Best Mobile Banking Innovation award at last year's Financial Innovation Awards, the company's collaboration with Metro Bank signifies its potential for expanding even further across the UK finance market.

Regarding the announcement that Metro Bank considered the trial period successful, Corey Gross, co-founder and CEO, commented, "[It] understands that small businesses and gig workers need a better, simpler way to track their finances and manage expenses."

"By leveraging our solution, the bank's small businesses can regain hours once lost to analysing paper receipts and run their businesses more effectively, which is especially critical in light of the pandemic."

"This partnership reflects Metro Bank's deep dedication to providing advanced technology and support to help the people they serve succeed financially, both now and in the future."

The upgraded app is currently only available on iOS. However, Metro Bank assures Android users that its new features will be made available to them in the coming weeks.

Engadget is testing all the major AI assistants – Engadget

Hardly a day goes by that we don't cover virtual assistants. If it's not news about Siri, there's some new development with Alexa, or Cortana or Google Assistant. Perhaps a new player, like Samsung, is wading into the space. Even Android creator Andy Rubin is considering building an assistant of his own. And his company probably isn't the only one that thinks there's room for another AI helper.

With virtual assistants becoming such an integral part of our lives (or at least our tech-news diets), we felt it was time to stop and take stock of everything that's happening here. For one week, we asked five Engadget reporters to live with one of the major assistants: Apple's Siri, Amazon's Alexa, the Google Assistant, Microsoft's Cortana and Samsung's Bixby. What you'll see on Engadget throughout the week aren't reviews, per se, nor did we endeavor to crown the "best" digital assistant. Not only is that a subjective question but, as it turns out, none of the assistants are as smart or reliable as we'd like.

In the absence of a winner, then, what we have is a state of the union: a picture of where AI helpers stand and where they're headed. Follow our series here. And, at the rate each of these assistants is maturing, don't be surprised if we revisit them sooner than later.

Indian, German engineers working on an AI-powered brain what does it mean for social (dis)order? – YourStory.com

Rolf Bulander, Chairman for Bosch Mobility, says societies must decide how artificial intelligence will be implemented in their cultures, especially because in 20 years' time, 41 megacities will be home to 6 billion people.

Nearly 3,000 engineers from Bosch in both Stuttgart and Bengaluru have one thing in common: they are putting their brains, figuratively speaking, into a super brain. This brain can crunch 30 trillion data points per second and will process data three times faster than a human brain can. This brain, powered by artificial intelligence, has no reason to feel guilty about anything in daily life because it is designed to not make mistakes. It is what Yuval Noah Harari predicted would be the next phase of evolution of homo sapiens: being connected to all things around us. While the engineers are not going as far as putting little microchips in our brain yet, the AI-powered brain will start off in our cars and help protect our environment, along with offering us safety and stress-free driving.

The engineers from Bosch, together with Daimler, formed an alliance to put self-driving cars on the roads this year.

"From automotive cloud suite to e-scooters, software connects people from home to work and helps them discover experiences around you," says Rolf Bulander, Chairman of the Mobility Services at Robert Bosch GmbH. He adds that the car will be the third living experience and will have gesture and voice control: "The objective is to save lives because 90 percent of accidents are caused by human error and artificial intelligence (AI) will reduce this in automated and driverless cars."

Leaders at Bosch add that although they are likely to work with startups, there aren't many who have made significant advances in R&D in AI. "We are making our own investments in AI because of the capabilities we have built over time. I also see the startup market heating up, but we have not seen many advances in AI from startups; there are very few of them out there," says Dirk Hoheisel, member of the board of management at Robert Bosch GmbH. The world, he believes, is becoming more collaborative thanks to AI.

"There are fundamental questions to answer about the co-existence of human intelligence and artificial intelligence. With India, the story of AI is one of opportunity and chaos. Can we mix the human quotient with AI and create a sustainable livelihood? AI makes living better, but many societies are not ready for that change. The fundamental question to ask is how society or different cultures will use AI. And they must decide for themselves the future of AI in their (respective) regions," explains Rolf.

The question about how different cultures will use AI is one of the most important questions for mankind, and Rolf points to Germany as an example: the German government has appointed a judge of the constitutional court to head an ethics committee on AI in cars. The aim is to find the answer to a fundamental question: how does the car decide whose life it has to save? Based on the age of the passengers, does it protect the younger person over the older one? The one who has a better chance of survival after an accident or the one who needs critical medical attention?

The future, though, cannot be about stopping technology. Companies like Bosch and others will push boundaries to make humans reinvent themselves in relation to what technology can do.

More than half of Europeans want to replace lawmakers with AI, study says – CNBC

LONDON - A study has found that most Europeans would like to see some of their members of parliament replaced by algorithms.

Researchers at IE University's Center for the Governance of Change asked 2,769 people from 11 countries worldwide how they would feel about reducing the number of national parliamentarians in their country and giving those seats to an AI that would have access to their data.

The results, published Thursday, showed that despite AI's clear and obvious limitations, 51% of Europeans said they were in favor of such a move.

Oscar Jonsson, academic director at IE University's Center for the Governance of Change and one of the report's main researchers, told CNBC that there's been a "decades long decline of belief in democracy as a form of governance."

The reasons are likely linked to increased political polarization, filter bubbles and information splintering, he said. "Everyone's perception is that politics is getting worse and obviously politicians are being blamed so I think it (the report) captures the general zeitgeist," Jonsson said. He added that the results aren't that surprising "given how many people know their MP, how many people have a relationship with their MP (and) how many people know what their MP is doing."

The study found the idea was particularly popular in Spain, where 66% of people surveyed supported it. Elsewhere, 59% of the respondents in Italy were in favor and 56% of people in Estonia.

Not all countries like the idea of handing over control to machines, which can be hacked or act in ways that humans don't want them to. In the U.K., 69% of people surveyed were against the idea, while 56% were against it in the Netherlands and 54% in Germany.

Outside Europe, some 75% of people surveyed in China supported the idea of replacing parliamentarians with AI, while 60% of American respondents opposed it.

Opinions also vary dramatically by generation, with younger people found to be significantly more open to the idea. Over 60% of Europeans aged 25-34 and 56% of those aged 34-44 were in support of the idea, whereas a majority of respondents above 55 years old don't see it as a good idea.

AI and the coronavirus fight: How artificial intelligence is taking on COVID-19 – ZDNet

As the COVID-19 coronavirus outbreak continues to spread across the globe, companies and researchers are looking to use artificial intelligence as a way of addressing the challenges of the virus. Here are just some of the projects using AI to address the coronavirus outbreak.

Using AI to find drugs that target the virus

A number of research projects are using AI to identify drugs that were developed to fight other diseases but which could now be repurposed to take on coronavirus. By studying the molecular setup of existing drugs with AI, companies want to identify which ones might disrupt the way COVID-19 works.

BenevolentAI, a London-based drug-discovery company, began turning its attentions towards the coronavirus problem in late January. The company's AI-powered knowledge graph can digest large volumes of scientific literature and biomedical research to find links between the genetic and biological properties of diseases and the composition and action of drugs.

The company had previously been focused on chronic disease, rather than infections, but was able to retool the system to work on COVID-19 by feeding it the latest research on the virus. "Because of the amount of data that's being produced about COVID-19 and the capabilities we have in being able to machine-read large amounts of documents at scale, we were able to adapt [the knowledge graph] so to take into account the kinds of concepts that are more important in biology, as well as the latest information about COVID-19 itself," says Olly Oechsle, lead software engineer at BenevolentAI.

While a large body of biomedical research has built up around chronic diseases over decades, COVID-19 only has a few months' worth of studies attached to it. But researchers can use the information that they have to track down other viruses with similar elements, see how they function, and then work out which drugs could be used to inhibit the virus.

"The infection process of COVID-19 was identified relatively early on. It was found that the virus binds to a particular protein on the surface of cells called ACE2. And what we could with do with our knowledge graph is to look at the processes surrounding that entry of the virus and its replication, rather than anything specific in COVID-19 itself. That allows us to look back a lot more at the literature that concerns different coronaviruses, including SARS, etc. and all of the kinds of biology that goes on in that process of viruses being taken in cells," Oechsle says.

The system suggested a number of compounds that could potentially have an effect on COVID-19, including, most promisingly, a drug called Baricitinib. The drug is already licensed to treat rheumatoid arthritis. The properties of Baricitinib mean that it could potentially slow down the process of the virus being taken up into cells and reduce its ability to infect lung cells. More research and human trials will be needed to see whether the drug has the effects AI predicts.
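
The reasoning described above (the virus binds ACE2, enters cells by endocytosis, and baricitinib may slow that uptake) can be pictured as a path search over a graph of literature-mined relations. The toy Python sketch below, using networkx, shows the idea only; the handful of nodes and edge labels, including the AAK1 kinase that BenevolentAI's published work pointed to as an endocytosis regulator, are a simplified illustration and not the company's actual knowledge graph.

```python
import networkx as nx

# Tiny, illustrative knowledge graph: nodes are biomedical entities,
# edges are relations that might be mined from the literature.
g = nx.DiGraph()
g.add_edge("SARS-CoV-2", "ACE2", relation="binds")
g.add_edge("ACE2", "AAK1", relation="endocytosis involves")
g.add_edge("baricitinib", "AAK1", relation="inhibits")
g.add_edge("baricitinib", "JAK1/2", relation="inhibits")

# Find known drugs that inhibit any entity reachable from the virus,
# i.e. anything involved in its entry and uptake into cells.
targets = nx.descendants(g, "SARS-CoV-2")
candidates = {drug for drug, protein, d in g.edges(data=True)
              if d["relation"] == "inhibits" and protein in targets}
print(candidates)  # {'baricitinib'}
```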

Shedding light on the structure of COVID-19

DeepMind, the AI arm of Google's parent company Alphabet, is using data on genomes to predict organisms' protein structure, potentially shedding light on which drugs could work against COVID-19.

DeepMind has released a deep-learning library called AlphaFold, which uses neural networks to predict how the proteins that make up an organism curve or crinkle, based on their genome. Protein structures determine the shape of receptors in an organism's cells. Once you know what shape the receptor is, it becomes possible to work out which drugs could bind to them and disrupt vital processes within the cells: in the case of COVID-19, disrupting how it binds to human cells or slowing the rate it reproduces, for example.

After training up AlphaFold on large genomic datasets, which demonstrate the links between an organism's genome and how its proteins are shaped, DeepMind set AlphaFold to work on COVID-19's genome.

"We emphasise that these structure predictions have not been experimentally verified, but hope they may contribute to the scientific community's interrogation of how the virus functions, and serve as a hypothesis generation platform for future experimental work in developing therapeutics," DeepMind said. Or, to put it another way, DeepMind hasn't tested out AlphaFold's predictions outside of a computer, but it's putting the results out there in case researchers can use them to develop treatments for COVID-19.

Detecting the outbreak and spread of new diseases

Artificial-intelligence systems were thought to be among the first to detect that the coronavirus outbreak, back when it was still localised to the Chinese city of Wuhan, could become a full-on global pandemic.

It's thought that AI-driven HealthMap, which is affiliated with the Boston Children's Hospital, picked up the growing cluster of unexplained pneumonia cases shortly before human researchers, although it only ranked the outbreak's seriousness as 'medium'.

"We identified the earliest signs of the outbreak by mining in Chinese language and local news media -- WeChat, Weibo -- to highlight the fact that you could use these tools to basically uncover what's happening in a population," John Brownstein, professor of Harvard Medical School and chief innovation officer at Boston Children's Hospital, told the Stanford Institute for Human-Centered Artificial Intelligence's COVID-19 and AI virtual conference.

Human epidemiologists at ProMed, an infectious-disease-reporting group, published their own alert just half an hour after HealthMap, and Brownstein also acknowledged the importance of human virologists in studying the spread of the outbreak.

"What we quickly realised was that as much it's easy to scrape the web to create a really detailed line list of cases around the world, you need an army of people, it can't just be done through machine learning and webscraping," he said. HealthMap also drew on the expertise of researchers from universities across the world, using "official and unofficial sources" to feed into theline list.

The data generated by HealthMap has been made public, to be combed through by scientists and researchers looking for links between the disease and certain populations, as well as containment measures. The data has already been combined with data on human movements, gleaned from Baidu, to see how population mobility and control measures affected the spread of the virus in China.

HealthMap has continued to track the spread of coronavirus throughout the outbreak, visualising its spread across the world by time and location.

Spotting signs of a COVID-19 infection in medical images

Canadian startup DarwinAI has developed a neural network that can screen X-rays for signs of COVID-19 infection. While using swabs from patients is the default for testing for coronavirus, analysing chest X-rays could offer an alternative to hospitals that don't have enough staff or testing kits to process all their patients quickly.

DarwinAI released COVID-Net as an open-source system, and "the response has just been overwhelming", says DarwinAI CEO Sheldon Fernandez. More datasets of X-rays were contributed to train the system, which has now learnt from over 17,000 images, while researchers from Indonesia, Turkey, India and other countries are all now working on COVID-19. "Once you put it out there, you have 100 eyes on it very quickly, and they'll very quickly give you some low-hanging fruit on ways to make it better," Fernandez said.
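Because COVID-Net is open source, researchers can build and adapt similar pipelines themselves. The snippet below is not COVID-Net's architecture, just a generic, hedged sketch of how a chest X-ray classifier might be fine-tuned from a pretrained backbone in PyTorch; the folder layout and class names are assumptions.

    # Generic fine-tuning sketch (not DarwinAI's COVID-Net). Assumes a folder
    # layout of xrays/{normal,pneumonia,covid19}/ containing labelled images.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    data = datasets.ImageFolder("xrays", transform=transform)
    loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

    model = models.resnet18(weights="IMAGENET1K_V1")  # recent torchvision API
    model.fc = nn.Linear(model.fc.in_features, 3)     # normal / pneumonia / COVID-19

    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:  # a single epoch, for illustration
        optimiser.zero_grad()
        loss_fn(model(images), labels).backward()
        optimiser.step()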

The company is now working on turning COVID-Net from a technical implementation into a system that can be used by healthcare workers. It is also developing a neural network for risk-stratifying patients who have contracted COVID-19, to separate those who might be better suited to recovering at home in self-isolation from those who would be better off coming into hospital.

Monitoring how the virus and lockdown are affecting mental health

Johannes Eichstaedt, assistant professor in Stanford University's department of psychology, has been examining Twitter posts to estimate how COVID-19, and the changes that it's brought to the way we live our lives, is affecting our mental health.

Using AI-driven text analysis, Eichstaedt examined more than two million tweets tagged with COVID-related hashtags during February and March, and combined them with other datasets on relevant factors, including case numbers, deaths, demographics and more, to illuminate the virus's effects on mental health.
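A toy sketch of the first step of this kind of pipeline (not Eichstaedt's actual methods; file names, column names and hashtags below are assumptions) might filter COVID-tagged tweets and count them by county for comparison with case data:

    # Not the researcher's pipeline: filter COVID-tagged tweets and count them
    # by county. All file and column names here are invented for illustration.
    import pandas as pd

    tweets = pd.read_json("tweets_feb_mar.jsonl", lines=True)
    covid_tags = {"covid19", "coronavirus", "socialdistancing"}

    tweets["covid_related"] = tweets["hashtags"].apply(
        lambda tags: any(tag.lower() in covid_tags for tag in tags))

    tweets_by_county = (tweets[tweets["covid_related"]]
                        .groupby("county_fips").size().rename("covid_tweets"))

    cases = pd.read_csv("county_cases.csv")  # assumed county-level case counts
    combined = tweets_by_county.reset_index().merge(cases, on="county_fips")
    print(combined.head())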

The analysis showed that much of the COVID-19-related chat in urban areas was centred on adapting to living with, and preventing the spread of, the infection. Rural areas discussed adapting far less, which the psychologist attributed to the relative prevalence of the disease in urban areas compared to rural, meaning those in the country have had less exposure to the disease and its consequences.

There are also differences in how the young and old are discussing COVID-19. "In older counties across the US, there's talk about Trump and the economic impact, whereas in young counties, it's much more problem-focused coping; the one language cluster that stands out there is that in counties that are younger, people talk about washing their hands," Eichstaedt said.

"We really need to measure the wellbeing impact of COVID-19, and we very quickly need to think about scalable mental healthcare and now is the time to mobilise resources to make that happen," Eichstaedt told the Stanford virtual conference.

Forecasting how coronavirus cases and deaths will spread across cities and why

Google-owned machine-learning community Kaggle is setting a number of COVID-19-related challenges for its members, including forecasting the number of cases and fatalities by city as a way of identifying exactly why some places are hit worse than others.

"The goal here isn't to build another epidemiological model there are lots of good epidemiological models out there. Actually, the reason we have launched this challenge is to encourage our community to play with the data and try and pick apart the factors that are driving difference in transmission rates across cities," Kaggle's CEO Anthony Goldbloom told the Stanford conference.

Currently, the community is working on a dataset of infections in 163 countries from two months of this year to develop models and interrogate the data for factors that predict spread.

Most of the community's models have been producing feature-importance plots to show which elements may be contributing to the differences in cases and fatalities. So far, said Goldbloom, latitude and longitude are showing up as having a bearing on COVID-19 spread; he expects the next generation of machine-learning-driven feature-importance plots to tease out the real reasons behind those geographical variances.

"It's not the country that is the reason that transmission rates are different in different countries; rather, it's the policies in that country, or it's the cultural norms around hugging and kissing, or it's the temperature. We expect that as people iterate on their models, they'll bring in more granular datasets and we'll start to see these variable-importance plots becoming much more interesting and starting to pick apart the most important factors driving differences in transmission rates across different cities. This is one to watch," Goldbloom added.

Read the original here:

AI and the coronavirus fight: How artificial intelligence is taking on COVID-19 - ZDNet

Pentagon AI center shifts focus to joint war-fighting operations – C4ISRNet

The Pentagon's artificial intelligence hub is shifting its focus to enabling joint war-fighting operations, developing artificial intelligence tools that will be integrated into the Department of Defense's Joint All-Domain Command and Control efforts.

"As we have matured, we are now devoting special focus on our joint war-fighting operation and its mission initiative, which is focused on the priorities of the National Defense Strategy and its goal of preserving America's military and technological advantages over our strategic competitors," Nand Mulchandani, acting director of the Joint Artificial Intelligence Center, told reporters July 8. The AI capabilities the JAIC is developing as part of the joint war-fighting operations mission initiative will use mature AI technology to create a decisive advantage for the American war fighter.

That marks a significant change from where the JAIC stood more than a year ago, when the organization was still being stood up with a focus on using AI for efforts like predictive maintenance. That transformation appears to be driven by the DoD's focus on developing JADC2, a system-of-systems approach that will connect sensors to shooters in near-real time.

"JADC2 is not a single product. It is a collection of platforms that get stitched together, woven together into effectively a platform. And the JAIC is spending a lot of time and resources focused on building the AI component on top of JADC2," said the acting director.

According to Mulchandani, fiscal 2020 spending on the joint war-fighting operations initiative is greater than the JAIC's spending on all other mission initiatives combined. In May, the organization awarded Booz Allen Hamilton a five-year, $800 million task order to support the joint war-fighting operations initiative. As Mulchandani acknowledged to reporters, that task order exceeds the JAIC's budget for the next few years, and the organization will not be spending all of that money.

One example of the organization's joint war-fighting work is the fire support cognitive system, an effort the JAIC was pursuing in partnership with the Marine Corps Warfighting Lab and the U.S. Army's Program Executive Office Command, Control and Communications-Tactical. That system, Mulchandani said, will manage and triage all incoming communications in support of JADC2.

Mulchandani added that JAIC was about to begin testing its new flagship joint war-fighting project, which he did not identify by name.

"We do have a project going on under joint war fighting which we are actually going to go into testing," he said. "They are very tactical edge AI is the way I'd describe it. That work is going to be tested. It's actually promising work; we're very excited about it."

"As I talked about the pivot from predictive maintenance and others to joint war fighting, that is probably the flagship project that we're sort of thinking about and talking about that will go out there," he added.

While he left the project unnamed, the acting director assured reporters that it would involve human operators and full human control.

"We believe that the current crop of AI systems today [...] are going to be cognitive assistance," he said. "Those types of information overload cleanup are the types of products that we're actually going to be investing in."

"Cognitive assistance, JADC2, command and control: these are all pieces," he added.

See more here:

Pentagon AI center shifts focus to joint war-fighting operations - C4ISRNet

The Transformational Role of AI in Finance – PaymentsJournal

The Finextra piece referenced in the headline offers an overview of use-case scenarios where capabilities under the AI umbrella are having an impact on the delivery of financial services. Members of our commercial and separate emerging-tech advisory services will have the benefit of deeper dives into specific uses across retail and corporate banking:

More than 60 years have passed since artificial intelligence was a daring concept at Dartmouth College, one which only got half of the requested funding. Right now, AI is a $9.5 billion industry, projected to reach $118.6 billion by 2025, according to Statista. Due to its immediate applications in streamlining processes, improving customer care, and managing risks, it has been widely adopted by the frontrunners of the financial industry. From NLP replacing front desk and call center employees to robots analyzing transactions and loans, there is a way to use machine learning in the banking and payment sector.

The author points to four categories of current and future impact:

Finance is a rather late adopter of new technologies, owing to regulatory and compliance requirements, yet it is also a sector highly interested in cutting costs. This makes it harder for AI companies to enter the market, but the potential payoffs are high once the technology goes mainstream.

Following this space is part of our extensive coverage of fintech as these applications spread across financial services.

Overview by Steve Murphy, Director, Commercial and Enterprise Payments Advisory Service at Mercator Advisory Group

Go here to read the rest:

The Transformational Role of AI in Finance - PaymentsJournal

The Ouroboros, From Antiquity to AI – Gizmodo

The Ouroboros, which symbolizes the cyclical nature of life and death and the divine essence that lives on forever, was first recorded in the Egyptian Book of the Netherworld. Alchemists then adopted the symbol into their mystical work of physical and spiritual transformation. After chemistry supplanted its more mystical forebear, alchemy, the Ouroboros was largely forgotten. That is, until it reemerged in the 19th century, largely thanks to the psychologist Carl Jung. Today, the Ouroboros has taken on a new life in tech's Ouroboros program, and has become integral to coding and our evolving understanding of artificial intelligence.

As a medievalist, I find the transformation of the Ouroboros from an ancient Egyptian mystical symbol into a symbol of artificial intelligence endlessly fascinating. Why has this symbol been reimagined so many times through the centuries? In tech, Ouroboros programs, as their name suggests, have no beginning input and no ultimate output. In other words, they begin without any coder starting them. They're continuous, coding and coding forever, seemingly on their own. So how did a mysterious symbol of a snake make its way from antiquity into modern technology?

The word Ouroboros comes from ancient Greek and means "tail-devouring". The Egyptian origins of the Ouroboros are a little murkier. One of the first known precursors to the Ouroboros is found in the ancient Egyptian religious and funerary text, the Amduat. This important funerary text, from the early 15th century BCE, tells a story of resurrection that echoes across Gnostic and early Christian texts as well as in alchemy. In the Amduat, the deceased pharaoh travels with the sun god Ra through the realm of the dead, known to the Egyptians as Duat. Every day after the sun sets in the West, Ra must travel through Duat to the East, where the sun rises with Ra's reemergence. It's believed that when a pharaoh dies, they too make this journey with Ra, eventually becoming one with the sun god and living on forever. The Amduat served as a sort of road map for the dead pharaoh, instructing them on how to make this journey with Ra. It's why the Amduat is often found carved into the walls of the pharaoh's tomb. Like any good road trip, you want to keep a map close when traveling through the afterworld. The twelve hours of the night act as markers in the Amduat's map.

It's in the sixth hour that one of the most significant moments in the journey occurs: the pharaoh is met by Mehen, a huge coiled serpent. Mehen helps guide Ra and the pharaoh through the afterworld, coiling around them on the journey to protect them from all outside evils and lurking enemies. Mehen's body not only acts as a physical barrier of protection encircling Ra, but also a magical one, as Egyptologist Peter A. Piccione points out. Mehen is often seen as a connector between the physical and metaphysical, linking him to Egyptian magical traditions. His association with magic and the liminal space between the real and the unreal eventually brings Mehen into the folds of alchemy.

In less esoteric circles, Mehen is also an ancient Egyptian board game, where a carved coiled serpent acts as the board.

It's about two hundred years later, in the 14th century BCE, that Mehen transforms into the single, continuous circle of the Ouroboros. This early Ouroboros depiction can be found in none other than King Tut's burial chamber, gilded in gold. In fact, not one but two Ourobori encircle the relief of a mummified figure, identified by scholar Alexandre Piankoff as King Tutankhamun. One encircles his head, and another encircles his feet.

Scholars believe that the encircling serpent is still a representation of Mehen, and of pharaoh Tutankhamun's journey through the afterworld with Ra. The significance comes, though, in how Mehen is drawn in King Tut's burial chamber. Rather than being a squiggly line surrounding the pharaoh, as in earlier reliefs, this is the first time Mehen is shown as the Ouroboros is depicted in later centuries: as one continuous circle.

Sometimes we forget that the ancient world was full of folks coming and going, exchanging knowledge and culture along the way. The Egyptians didn't exist in a bubble, and scholars know that by the 2nd millennium BCE Egyptians and Greeks were already rubbing shoulders. (The Egyptians, at that time, were a far more advanced civilization than the Greeks.) Mehen morphed into the Greek Ouroboros, and was imported east via the Egyptian practice of alchemy.

Alchemy brought together scholars from various corners of the globe. Greeks, Egyptians, Jews, and others from the peninsula all flocked to the Egyptian city of Alexandria to study the art of alchemy. Alchemy, with its elaborate experiments and mystical underpinnings, was at the cutting edge of research in the ancient world. By the early centuries of the Common Era, Alexandria was the epicenter not only of alchemy, but of math, history, philosophy, medicine, and many other disciplines.

The earliest known alchemical depiction of the Ouroboros is found in the third-century text The Chrysopoeia of Cleopatra. Here the Ouroboros encircles the words "all is one". By the time the alchemist Cleopatra (not to be confused with that other Cleopatra, who killed herself with the snakes and had that whole thing with Mark Antony) drew this Ouroboros, it was no longer a depiction of Mehen. While related to its origin as Mehen, the Ouroboros by this point had morphed into an altogether new symbol. Both Mehen and the Ouroboros relate to an understanding of time as cyclical. Mehen encircles Ra through the god's journey through the afterworld every night. The alchemical Ouroboros, however, no longer carries the protective and magical powers associated with Mehen.

In alchemy, the Ouroboros represents not only the cyclical nature of time and energy, but also the union of opposites necessary to yield the Philosopher's Stone. The Philosopher's Stone was the ultimate goal many alchemists worked towards. The Stone had the power to transmute anything into its highest form. It could transform lead into gold. It was the universal solvent and the elixir of life. It was the answer to anything alchemists worked to achieve in their laboratories. In fact, the Ouroboros itself can be a representation of the Philosopher's Stone. No wonder, then, that the Ouroboros is at the heart of ancient alchemical study.

Outside the Western world, the Ouroboros pops up almost simultaneously across the ancient world. In Hindu mythology, a never-ending snake wraps around the world to keep it upright. In a 2nd-century yogic text, the divine energy known as Kundalini is described as a coiled serpent holding her tail in her mouth. In China, the Ouroboros represents the union of yin and yang. Even across the globe, the Aztecs depicted the snake god Quetzalcoatl biting its own tail on the base of the Pyramid of the Feathered Serpent.

In the West, the Ouroboros traveled from the ancient world to the Gnostic, Christian, then Islamic worlds, and then on to Medieval and Renaissance Europe. During this time, the Ouroboros symbol was remixed several times. The 3rd-century CE Gnostic text Pistis Sophia describes the Ouroboros as a twelve-part dragon, perhaps a nod to the twelve hours of night associated with Mehen. Gnostics considered the Ouroboros to be a symbol of the eternal, never-ending soul.

Medieval Christians, on the other hand, sometimes associated the Ouroboros with knowledge and the serpent who tempts Eve to eat from the Tree of Knowledge. Yet the Ouroboros also finds a home carved into the medieval English Church of St. Mary and St. David, and in the 9th-century Book of Kells, an Irish illuminated Gospel. So the Christians couldn't really seem to make up their minds about the Ouroboros: is it Satan disguised as a tree serpent or a holy symbol of Christ?

Even while some medieval Christians couldn't decide how they felt about the Ouroboros, it still had a rich life in the alchemical laboratories of the period. Continuing the tradition of alchemy from the ancient world, medieval alchemists associated the Ouroboros with the Philosopher's Stone as the union of opposites. For medieval alchemists, the Ouroboros symbolized the organizing of the world's chaotic energy, known to the alchemists as First Matter or prima materia.

The Ouroboros's symbolic life continued on through the Enlightenment. But with the decline of alchemy in the late 18th century, the Ouroboros was relegated to Romantic and Victorian séances and spiritualist meetings. It was still around, but it was no longer a symbol at the heart of human existence, a symbol that spoke to life's cyclical nature. Now it was just a cool magical sign. That is, until the tech world came along.

Artificial intelligence is all about creating a machine that can mimic the human brain's capacity for cognition. AI technology has already been proven to outperform humans in some very specific ways. World champion Go player Lee Sedol decided to retire after a 24-year professional career following his defeat by an AI system. Chatbots use natural language processing (NLP) to field customer questions so well that customers can't even tell they're talking to a robot. Smart programs outperform humans in trading stocks. In later stages, if the technology ever becomes possible, the goal of artificial intelligence development may be to create a machine with its own consciousness, but we are very far from that point in history.

Enter Ouroboros programs. These emerged from a type of code sequence known as a quine. A quine doesn't have any input, and its only output is its own source code. In other words, a quine is a type of code that has no beginning, creating an output seemingly on its own. A normal computer program is basically just a set of directions that a computer then follows. So, say you're a coder and you write a program that adds numbers. You still have to provide the numbers for the computer to add, even after you're done writing the code. Quines magically don't need any numbers to start adding away. The numbers, aka the input, aren't necessary for quines to power up.

The name quine was coined in Douglas Hofstadter's 1979 Pulitzer Prize-winning book Gödel, Escher, Bach. The book is a non-fiction, Alice in Wonderland-esque romp through symmetry, mathematics, and art, and in it Hofstadter uses the term "quining" to describe when an object, number, or musical note refers back to itself indirectly. So, instead of saying "I'm Sarah," it'd be the mathematical equivalent of saying "I'm a medievalist." This relates to tech quines because, going back to our calculator program, quines create their self-generated input using self-reference. They take something of their own code and copy it slightly differently, so they can continue to grow.
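For the curious, here is a classic minimal quine in Python (an illustration added here, not from the article): run it and it prints its own source code, using exactly the kind of indirect self-reference described above.

    # A classic minimal Python quine: the string s describes the whole program,
    # and printing s formatted with itself reproduces the source exactly.
    s = 's = %r\nprint(s %% s)'
    print(s % s)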

An Ouroboros program is similar to a quine, but in addition to having no input, it also has no output. In other words, Ouroboros programs have no beginning and no end. Going back to our calculator program: quines produce some final solution, adding whatever numbers and finding an answer, whereas Ouroboros programs would just keep adding and adding until they got back to the same number they started at, and then would do it all again. So, just like the snake version of the Ouroboros, tech's Ouroboros program eats itself (so to speak). Ouroboros programs are completely self-contained. It's why they're sometimes called self-replicating programs or quine relays. They just go on and on, eventually returning to their source code and creating one big loop. Quines and Ouroboros programs are useful to coders because coders can basically leave them alone to do their thing. Since neither program requires an input, they can do a specified task seemingly on their own.

In addition to having no beginning or end, Ouroboros programs cycle through completely different coding languages. They might begin in language X, then transition to Y, then Z, and so on until coming back to language X. Coder Yusuke Endoh created an Ouroboros program that cycled through as many as 50 different coding languages. This has made Ouroboros programs increasingly important to the development and creation of different coding languages, like Java. It also allows the Ouroboros program to function in completely different coding languages, moving from Python to Ruby as if it were child's play. It's as if an Ouroboros program is immediately fluent.

As computer science researchers Dario Floreano and Claudio Mattiussi have explored in their book, Bio-Inspired Artificial Intelligence: Theories, Methods, and Technologies, computer scientists have looked to the origins of biological life to find clues on how to create artificial life. The origin of biological life, they and other computer scientists believe, could act as a blueprint for creating artificial life.

Biologists trace the origin of life on Earth to a simple molecule that, four billion years ago, learned how to replicate itself. Once molecule-based genetic variations learned how to replicate themselves, they started competing in Darwin's fun game of natural selection. The variations that were able to survive and copy themselves the best continued to replicate. The variations that weren't as prolific were voted off the prehistoric island. Eventually, the first cell was formed, followed by the first organisms, then the dinosaurs, and then us humans. And that's creation in a nutshell.

As Floreano and Mattiussi discuss in the preface of their book, mainstream AI research hasn't focused on humanity's own origin story to create artificial life. Mainstream AI is very good at creating algorithms and devices to solve problems even more quickly than humans. Take my earlier example of Go player Lee Sedol, moved to retire because AI could problem-solve its way to victory far better than he could.

But starting in the 1980s, AI researchers began looking to develop more human-like AI. By the turn of the millennium, this new type of AI research solidified as "new artificial intelligence". The aim of AI was broadened from problem-solving to exploring cognition and other organic processes. In their article Neural Network Quines, Oscar Chang and Hod Lipson of Columbia University's Data Science Institute explore how Ouroboros programs and quines, similar to the first self-replicating cell, could be the first step towards developing this new, conscious AI. In addition, self-replicating programs could make AI even more human-like.

For instance, AI created using self-replicating programs, like the Ouroboros program or quines, could in theory repair or heal itself. By replicating undamaged code to replace damaged code, quine-based AI could heal itself much like you and I can. As French mathematician David Madore explains, quines, and Ouroboros programs by extension, can repair damaged code through a process known as bootstrapping. In bootstrapping, a quine can basically hit a coder's version of a restart button on its own. In other words, the quine pulls itself up by its bootstraps and starts over.

Computer scientists have also taught machines to identify sound, text, and images through what are known as deep learning models. Deep learning models are based upon programs that learn much like our brains do. Computer scientists build deep learning models using neural network architectures that are borrowed directly from neurology and often employ Ouroboros programs. Neural network architectures are basically a collection of quines that work together, which creates a much stronger system. In the same way neurons fire to other neurons in our brains, these quine neural networks do the same thing. Quines work with other quines to process information more quickly.

Oscar Chang and Hod Lipson of Columbia University have in fact written about the importance of self-replication in AI. In a recent article, they looked specifically at neural network quines. Neural network quines can self-replicate and build upon what they already know, allowing AI to learn faster. Perhaps even faster than humansat least, eventually.
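As a rough, hedged sketch of the idea in Chang and Lipson's paper (not their actual code or architecture), one can train a tiny network to predict its own weights: each input encodes a weight coordinate, and the target is that weight's current value.

    # Toy "neural network quine" in the spirit of Chang & Lipson (2018), not
    # their implementation: a small network is trained to output its own
    # weights, given a fixed random embedding of each weight's index.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    hidden = 32
    net = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def flat_params(model):
        # Detached flat view of all weights: the (moving) regression target.
        return torch.cat([p.detach().flatten() for p in model.parameters()])

    n_weights = flat_params(net).numel()
    embedding = torch.randn(n_weights, hidden)  # fixed code for each weight index

    optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(2000):
        targets = flat_params(net)          # the weights as they currently are
        preds = net(embedding).squeeze(-1)  # the network's guess at its own weights
        loss = ((preds - targets) ** 2).mean()
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

    print(f"self-replication error after training: {loss.item():.6f}")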

The Ouroboros program is in many ways the nexus of tech and theology. As Noreen Herzfeld, professor of theology and computer science at St. John's University in Minnesota, puts it, AI begs the question: what is life? How do we define it? How do we know if we've found it? What is the nature of consciousness? These philosophical questions at the heart of AI are the same questions that religions and spiritual traditions have tried to answer for millennia, as Herzfeld points out. This is no coincidence.

In the past, religion and science intermingled more fluidly than they do today. Religion informed science, and science informed religion. Alchemy, a precursor to modern-day chemistry, was in many ways its own religion. Then, when the Enlightenment came along, science and religion were separated from each other. But today, innovations like the Ouroboros program ask us to ponder those religious and spiritual questions in a very immediate way. You can't build artificial consciousness if you don't first understand what consciousness is.

And at the heart of AI research and its future is an ancient spiritual symbol of the universe, the Ouroboros. With its connection to ancient Egyptian religion and alchemy, the Ouroboros was and is a religious and spiritual symbol. And now it's a term applied to a coding program that could potentially, eventually, lead to a new kind of consciousness. That's not an accident.

In this one symbol, religion and science again intermingle. The Ouroboros is a symbol of life and death, of time. And perhaps that's what consciousness is all about. Because what's more human than pondering the cycles of life and death, and our place within them? And how cool is it that, as we move towards creating artificial life, the Ouroboros symbol will be at the literal center of whatever new life we create?

Sarah Durn is a freelance writer, actor, and medievalist based in New Orleans, LA. She is the author of an upcoming book on alchemy to be published in Spring 2020.

View original post here:

The Ouroboros, From Antiquity to AI - Gizmodo

Robotics and AI leaders spearheading the battle with COVID-19 – ShareCafe

Alex Cook's 13 May 2020 blog highlighted the role of robotics and artificial intelligence (A.I.) technologies in fighting the spread of COVID-19.

In today's post, we look behind the ticker of the BetaShares Global Robotics and Artificial Intelligence ETF (ASX: RBTZ) at some of the leading companies in this space, and at how they have contributed to fighting the pandemic, or are well placed to benefit from economic, social and geopolitical shifts borne out of the crisis.

The most visually obvious contribution of robotics and A.I. to combating COVID-19 has been the development of autonomous robots in healthcare, such as Omron's LD-UVC, shown in Figure 1 below. Omron makes up 4.5% of RBTZ's index (as at 21 August 2020). Its ground-breaking LD-UVC disinfects premises by eliminating 99.9% of bacteria and viruses, both airborne and droplet, with a precise dosage of UVC energy1.

Figure 1: The LD-UVC, developed by Omron Asia Pacific in conjunction with Techmetics Robotics

Reducing the risk of human exposure to the coronavirus is one application of robotics, while scaling up our capacity for clinical testing is another critical element of the fight.

Swiss healthcare company Tecan Group, which makes up 5.3% of RBTZ's index (as at 21 August 2020), is a market leader in laboratory instruments, reagents and smart consumables used to automate diagnostic workflows in life sciences and clinical testing laboratories.

Tecan has experienced strong demand for its products to help in the global fight against the coronavirus pandemic, resulting in a substantial increase in sales and a surge in orders in the first half of 2020.

Automation is critical for countries attempting to scale up their COVID-19 testing capacity. Tecan is aiming to double production of its laboratory automation solutions and disposable pipette tip products, and has accessed emergency stockpiles to keep up with the massive demand2.

Californian company Nvidia makes up 9.4% of the index which RBTZ aims to track (as at 21 August 2020), making it the Fund's largest holding. Nvidia is at the forefront of deep learning, artificial intelligence, and accelerated analytics. Nvidia was able to design and build the world's seventh-fastest supercomputer in three weeks, a task that normally takes many months, to be used by the U.S. Argonne National Laboratory to research ways to stop the coronavirus3.

Supercomputers are proving to be a critical tool in many facets of responding to the disease, including predicting the spread of the virus, optimising contact tracing, allocating resources and providing decisions for physicians, designing vaccines and developing rapid testing tools.

Then there are companies and products that are helping us adapt to a post-COVID world and beyond.

Keyence Corporation, from Japan, has positioned itself at the forefront of several key trends in an era of increasing factory automation. In the wake of the COVID-19 crisis, factories have never faced such an urgent need to replace humans with machines to keep production lines running.

Keyence specialises in automation systems for manufacturing, food processing and pharma: machine vision systems, sensors, laser markers, measuring instruments and digital microscopes. Think precision tools and quality-control sensors that eliminate or detect infinitesimal assembly-line mistakes, improving throughput and reducing wastage and costly shutdowns.

Its focus on product innovation and its direct-sales model give it a competitive advantage, making it better able to adapt to new manufacturing processes and workflows while introducing high-value client solutions.

Keyence has maintained an operating profit margin above 50%, has no net debt, and managed to increase its dividend for the 2020 financial year, becoming Japan's third-largest company by market value4.

One unfortunate consequence of the virus crisis has been the straining of international relations and a deterioration of the rules-based order. AeroVironment is a global leader in unmanned aircraft systems, or drones, and tactical missile systems. It is the number-one supplier of small drones to the U.S. military. The Australian Defence Force is also an AeroVironment customer5, with spending on drone and military technology expected to increase after the release of the 2020 Defence Strategic Update in July6.

Beyond weapons systems, AeroVironment is also leading the evolution of stratospheric unmanned flight with the development of the Sunglider solar-powered high-altitude pseudo-satellite (HAPS), currently undergoing testing at Spaceport America in New Mexico. AeroVironment recently announced it was building a drone helicopter that will be deployed to Mars in 2021 along with NASA's Perseverance rover7. The Mars Helicopter will be the first aircraft to attempt controlled flight on another planet, as part of the rover's mission to search for signs of habitable conditions and evidence of past microbial life.

A simple and cost-effective method of accessing the dynamic and fast-growing robotics and A.I. thematic is available on the ASX through the BetaShares Global Robotics and Artificial Intelligence ETF (ASX: RBTZ). The Fund invests in companies from across the globe involved in:

This includes exposure to the companies mentioned in this article, and other leaders expected to benefit from the increased adoption and utilisation of Robotics and A.I. Over the 12 months to 31 July 2020, RBTZ returned 23.7%, outperforming the broad global MSCI World Index (AUD) shares benchmark by 20.6%8.

There are risks associated with an investment in the Fund, including concentration risk, robotics and A.I. companies risk, smaller companies risk and currency risk. For more information on risks and other features of the Fund, please see the Product Disclosure Statement, available at www.betashares.com.au.

ENDNOTES

More:

Robotics and AI leaders spearheading the battle with COVID-19 - ShareCafe

AI SciFi Short Rise Is Being Turned Into a Movie – Gizmodo

Rise, the impressive robot uprising short film starring the late Anton Yelchin, is being adapted into a movie... with the original director on board to helm the production.

The five-minute film combines an updated version of the special effects of A.I. with the storyline of The Second Renaissance from The Animatrix. It's all about a dystopian future where artificially intelligent robots are hunted and killed after the government determined they were becoming too emotional and, therefore, too human. Unfortunately for the government, it's not working, as Yelchin's A.I. helps trigger a war for the future of their species.

David Karlak, who directed the original short, has signed on to direct the feature-length adaptation. It's being produced by Johnny Lin (American Made) and Brian Oliver (Hacksaw Ridge, Black Swan), with original writers Patrick Melton and Marcus Dunstan returning to pen the script. There's no word on who would replace Yelchin, who sadly passed away last year, but I'm hoping Rufus Sewell (The Man in the High Castle) reprises his role as the government interrogator. I'll watch him in anything.

You can watch the original short film below.

[The Hollywood Reporter]

More:

AI SciFi Short Rise Is Being Turned Into a Movie - Gizmodo