AI Can Predict How Much Longer You Have Left To Live – IFLScience

Artificial intelligence can make a fairly accurate guess at how much longer you've got to live, as shown by a new study published in the journal Scientific Reports.

In the first study of its kind, scientists from the University of Adelaide used artificial intelligence to predict which patients would die within the next five years, with 69 percent accuracy. That's about the same as an estimate made by a trained medical doctor, the researchers say.

The AI was fed CT images of 48 people's chests, all of whom were aged under 60, and then used machine learning techniques to sift through the huge amount of data and draw out anomalies or unusual patterns. A total of 15,957 biomarker features were found within the images and then used to estimate each patient's remaining lifespan.

"Predicting the future of a patient is useful because it may enable doctors to tailor treatments to the individual," lead author Dr Luke Oakden-Rayner, a radiologist and PhD student with the University of Adelaide's School of Public Health, said in a statement. "The accurate assessment of biological age and the prediction of a patient's longevity has so far been limited by doctors' inability to look inside the body and measure the health of each organ."

While this proof-of-concept study still has room for improvement, the scientists hope to fine-tune their findings and eventually use the approach to predict other important medical conditions, such as the onset of heart attacks. The next stage of the research will attempt to increase the AI's accuracy by giving it tens of thousands of images to process.

"Although for this study only a small sample of patients was used, our research suggests that the computer has learnt to recognise the complex imaging appearances of diseases, something that requires extensive training for human experts," added Dr Oakden-Rayner.

"Instead of focusing on diagnosing diseases, the automated systems can predict medical outcomes in a way that doctors are not trained to do, by incorporating large volumes of data and detecting subtle patterns."

"Our research opens new avenues for the application of artificial intelligence technology in medical image analysis, and could offer new hope for the early detection of serious illness, requiring specific medical interventions."

AI is set for big things in the field of biomedicine thanks to its ability to process large amounts of data hyper-efficiently. For example, researchers from Stanford University have developed an artificial intelligence that is as accurate as doctors at identifying skin cancer from images.

Harnessing machine learning to make managing your storage less of a chore – Ars Technica

As far as we know, none of the storage vendors using AI have gone neuromorphic yet, let alone biological.

Aurich Lawson / Getty

While the words "artificial intelligence" generally conjure up visions of Skynet, HAL 9000, and the Demon Seed, machine learning and other types of AI technology have already been brought to bear on many analytical tasks, doing things that humans can't or don't want to do, from catching malware to predicting when jet engines need repair. Now it's getting attention for another seemingly impossible task for humans: properly configuring data storage.

As the scale and complexity of storage workloads increase, it becomes more and more difficult to manage them efficiently. Jobs that could originally be planned and managed by a single storage architect now require increasingly large teams of specialists, which sets the stage for artificial intelligence (née machine learning) techniques to enter the picture, allowing fewer storage engineers to effectively manage larger and more diverse workloads.

Storage administrators have five major metrics they contend with, and finding a balance among them to match application demands approaches being a dark art. Those metrics are:

Throughput: Throughput is the most commonly understood metric at the consumer level. At the network level, throughput is usually measured in Mbps (megabits per second), such as you'd see on a typical Internet speed test. In the storage world, the most common unit of measurement is MB/sec (megabytes per second), because storage capacity is measured in bytes rather than bits. (For reference, there are eight bits in a byte, so 1MB per second is equal to 8Mbps.)
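The bits-versus-bytes factor of eight is easy to get backwards when comparing a network spec sheet to a storage one. A quick sketch of the conversion (function names are my own, not from the article):

```python
# Illustrative conversion between network units (Mbps) and storage units (MB/sec).
BITS_PER_BYTE = 8

def mbps_to_mb_per_sec(mbps):
    """Megabits per second -> megabytes per second."""
    return mbps / BITS_PER_BYTE

def mb_per_sec_to_mbps(mb_per_sec):
    """Megabytes per second -> megabits per second."""
    return mb_per_sec * BITS_PER_BYTE

# A gigabit link (1000 Mbps) tops out at 125 MB/sec of payload throughput.
print(mbps_to_mb_per_sec(1000))    # 125.0
print(mb_per_sec_to_mbps(110.0))   # 880.0
```

So the 110MB/sec a healthy hard drive can sustain would already saturate most of a gigabit network link.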

Latency: Latency, at least where storage is concerned, is the amount of time between making a request and having it fulfilled, and is typically measured in milliseconds. This may be discussed in a pure, non-throughput-constrained sense (the amount of time to fulfill a request for a single storage block) or in an application latency sense, meaning the time it takes to fulfill a typical storage request. Pure latency is not affected by throughput, while application latency may decrease significantly with increased throughput if individual storage requests are large.
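The pure-versus-application distinction can be made concrete with a little arithmetic: the time to complete one request is the fixed latency plus the transfer time. A hedged sketch (the numbers and function name are illustrative, not from the article):

```python
# Illustrative model: total request time = fixed latency + transfer time.
def request_time_ms(pure_latency_ms, request_bytes, throughput_mb_per_sec):
    """Time to fulfill a single storage request, in milliseconds."""
    transfer_ms = request_bytes / (throughput_mb_per_sec * 1_000_000) * 1000
    return pure_latency_ms + transfer_ms

# A 4 KiB block read is dominated by the fixed latency...
print(request_time_ms(5.0, 4096, 100.0))           # ~5.04 ms
# ...while a 100 MB read is dominated by throughput.
print(request_time_ms(5.0, 100_000_000, 100.0))    # 1005.0 ms
```

This is why faster media barely help the small-block case but transform the large-request case.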

IOPS: IOPS is short for "input/output operations per second" and generally refers to the raw count of discrete disk read or write operations that the storage stack can handle. This is what most storage systems bind on first. IOPS limits can be reached either on the storage controller or the underlying medium. An example is the difference between reading a single large file versus a lot of tiny files from a traditional spinning hard disk drive: the large file might read at 110MB/sec or more, while the tiny files, stored on the same drive, may read at 1MB/sec or even less.

Capacity: The concept is simple: it's how much data you can cram onto the device or stack. The units, unfortunately, are a hot mess. Capacity can be expressed in GiB, TiB, or PiB (so-called "gibibytes," "tebibytes," or "pebibytes") but is typically expressed in the more familiar GB, TB, or PB (that's gigabytes, terabytes, or petabytes). The difference is that "mega," "giga," and "peta" are decimal prefixes based on powers of ten (so 1GB properly equals 1000^3 bytes, or exactly one billion bytes), whereas "gibi," "tebi," and "pebi" are binary prefixes based on powers of two (so one gibibyte is 1024^3 bytes, or 1,073,741,824 bytes). Filesystems almost universally report capacity in powers of two, whereas storage device specifications are almost universally in powers of ten. There are complex historical reasons for the different ways of reckoning, but one reason both continue to exist is that powers of ten conveniently allow drive manufacturers to over-represent their devices' capacities on the box in the store.
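The mismatch between the two reckonings is easy to check directly; this small sketch shows why a "1 TB" drive appears smaller once a filesystem reports it in binary units:

```python
# The label on the box counts in powers of ten; the filesystem counts in
# powers of two. Same bytes, different-looking numbers.
TB = 1000 ** 4    # decimal terabyte, as printed on the box
TiB = 1024 ** 4   # binary tebibyte, as most filesystems report

print(1024 ** 3)           # 1073741824 bytes in one gibibyte
print(round(TB / TiB, 3))  # 0.909 -- a "1 TB" drive is about 0.909 TiB
```

The gap widens with each prefix step, which is why the discrepancy feels worse on larger drives.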

Security: For the most part, security only comes into play when you're balancing cloud storage against local storage. With highly confidential data, on-premises storage may be more tightly locked down, with physical access strictly limited to the personnel who work directly for a company and have an actual need for that access. Cloud storage, by contrast, typically involves a much larger set of personnel having physical access, who may not work directly for the company that owns the data. Security can be a major concern of the company that owns the data, or a regulatory requirement imposed by frameworks such as HIPAA or PCI DSS.

Enterprise administrators face an increasingly vast variety of storage types and an equally varied list of services to support with different I/O metrics to meet. A large file share might need massive scale and decent throughput as cheaply as it can be gotten but also must tolerate latency penalties. A private email server might need fairly massive storage with good latency and throughput but have a relatively undemanding IOPS profile. A database-backed application might not need to move much data, but it might also require very low latency while under an incredibly punishing IOPS profile.

If we only had these three services to deploy, the job seems simple: put the big, non-confidential file share on relatively cheap Amazon S3 buckets, the private mail server on local spinning rust (that's storage-admin speak for traditional hard disk drives), and throw the database on local SSDs. Done! But like most "simple" problems, this one grows increasingly complex and difficult to manage as the number of variables scales. Even a small business with fewer than fifty employees might easily have many dozens of business-critical services; an enterprise typically has thousands.
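The matching logic in that paragraph can be sketched as a tiny scoring routine. Everything here is hypothetical: the backend names, latency and IOPS figures, and per-gigabyte costs are invented for illustration, not taken from the article or any real product:

```python
# Hypothetical sketch: pick the cheapest backend that meets a workload's
# latency and IOPS targets. All numbers and names are invented.
from typing import Optional

BACKENDS = {
    "object-store": {"latency_ms": 100.0, "iops": 500,     "cost_per_gb": 0.02},
    "spinning-hdd": {"latency_ms": 10.0,  "iops": 200,     "cost_per_gb": 0.03},
    "local-ssd":    {"latency_ms": 0.2,   "iops": 100_000, "cost_per_gb": 0.15},
}

def cheapest_fit(max_latency_ms: float, min_iops: int) -> Optional[str]:
    """Return the cheapest backend meeting both targets, or None."""
    fits = [(spec["cost_per_gb"], name)
            for name, spec in BACKENDS.items()
            if spec["latency_ms"] <= max_latency_ms and spec["iops"] >= min_iops]
    return min(fits)[1] if fits else None

print(cheapest_fit(max_latency_ms=200.0, min_iops=100))    # object-store (file share)
print(cheapest_fit(max_latency_ms=1.0, min_iops=50_000))   # local-ssd (database)
```

With three services and three backends this is trivial; the article's point is that with thousands of services and dozens of constraints, the same matching problem stops being human-tractable.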

With thousands of services competing for resources with differing performance and confidentiality targets (some long-running, others relatively ephemeral and likely up for only days or weeks at a time), management of the underlying storage rapidly outpaces the human ability to make informed and useful changes. Management effort quickly falls back to best-effort, "shotgun" approaches tailored to the preferences of the organization or department: spend too much but get high performance and/or minimal maintenance requirements in return, or gamble on cheaper services, hoping that the cost savings outweigh penalties in missed performance targets or increased IT payroll.

What an AI’s Non-Human Language Actually Looks Like – The Atlantic

Something unexpected happened recently at the Facebook Artificial Intelligence Research lab. Researchers who had been training bots to negotiate with one another realized that the bots, left to their own devices, started communicating in a non-human language.

In order to actually follow what the bots were saying, the researchers had to tweak their model, limiting the machines to a conversation humans could understand. (They want bots to stick to human languages because eventually they want those bots to be able to converse with human Facebook users.) When I wrote about all this last week, lots of people reacted with some degree of trepidatious wonder. Machines making up their own language is really cool, sure, but isn't it actually terrifying?

And also: What does this language actually look like? Here's an example of one of the bot negotiations that Facebook observed:

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to

Not only does this appear to be nonsense, but the bots don't really seem to be getting anywhere in the negotiation; Alice isn't budging from her original position, anyway. The weird thing is, Facebook's data shows that conversations like this sometimes still led to successful negotiations between the bots in the end, a spokesperson from the AI lab told me. (In other cases, researchers adjusted their model and the bots would develop bad strategies for negotiating, even if their conversation remained interpretable by human standards.)

One way to think about all this is to consider cryptophasia, the name for the phenomenon when twins make up their own secret language, understandable only to them. Perhaps you recall the 2011 YouTube video of two exuberant toddlers chattering back and forth in what sounds like a lively, if inscrutable, dialogue.

There's some debate over whether this sort of twin speak is actually language or merely a joyful, babbling imitation of language. The YouTube babies are socializing, but probably not saying anything with specific meaning, many linguists say.

In the case of Facebook's bots, however, there seems to be something more language-like occurring, Facebook's researchers say. Other AI researchers, too, say they've observed machines that can develop their own languages, including languages with a coherent structure and defined vocabulary and syntax, though not always actual meaning, by human standards.

In one preprint paper added earlier this year to the research repository arXiv, a pair of computer scientists from the non-profit AI research firm OpenAI wrote about how bots learned to communicate in an abstract language, and how those bots turned to non-verbal communication, the equivalent of human gesturing or pointing, when language communication was unavailable. (Bots don't need to have corporeal form to engage in non-verbal communication; they just engage with what's called a visual sensory modality.) Another recent preprint paper, from researchers at the Georgia Institute of Technology, Carnegie Mellon, and Virginia Tech, describes an experiment in which two bots invent their own communication protocol by discussing and assigning values to colors and shapes; in other words, the researchers write, they witnessed the "automatic emergence of grounded language and communication ... no human supervision!"

The implications of this kind of work are dizzying. Not only are researchers beginning to see how bots could communicate with one another, they may be scratching the surface of how syntax and compositional structure emerged among humans in the first place.

But let's take a step back for a minute. Is what any of these bots are doing really language? "We have to start by admitting that it's not up to linguists to decide how the word language can be used, though linguists certainly have opinions and arguments about the nature of human languages, and the boundaries of that natural class," said Mark Liberman, a professor of linguistics at the University of Pennsylvania.

So the question of whether Facebook's bots really made up their own language depends on what we mean when we say language. For example, linguists tend to agree that sign languages and vernacular languages really are capital-L languages, as Liberman puts it, and not mere approximations of actual language, whatever that is. They also tend to agree that body language and computer languages like Python and JavaScript aren't really languages, even though we call them that.

So here's the question Liberman poses instead: Could Facebook's bot language ("Facebotlish," he calls it) signal a new and lasting kind of language?

"Probably not, though there's not enough information available to tell," he said. "In the first place, it's entirely text-based, while human languages are all basically spoken or gestured, with text being an artificial overlay."

The larger point, he says, is that Facebook's bots are not anywhere near intelligent in the way we think about human intelligence. (That's part of the reason the term AI can be so misleading.)

"The expert systems style of AI programs of the 1970s are at best a historical curiosity now, like the clockwork automata of the 17th century," Liberman said. "We can be pretty sure that in a few decades, today's machine-learning AI will seem equally quaint."

"It's already easy to set up artificial worlds populated by mysterious algorithmic entities with communications procedures that evolve through a combination of random drift, social convergence, and optimizing selection," Liberman said. "Just as it's easy to build a clockwork figurine that plays the clavier."

Viz.ai Named to Forbes AI 50 List of Most Promising Artificial Intelligence Companies – Yahoo Finance

SAN FRANCISCO, July 8, 2020 /PRNewswire/ -- Viz.ai, the leader in Applied Artificial Intelligence for Healthcare, has been named to the prestigious Forbes Top 50 AI list. The list honors the top 50 companies making the most impact using artificial intelligence to drive change and transform industries.

Forbes evaluated hundreds of innovative companies and recognized the top 50 for their use of artificial intelligence to drive outcomes for customers. As a company known for driving innovation in healthcare technology, Viz.ai was selected for its work dedicated to improving treatment times in stroke care and advancing how healthcare is delivered across a hospital network.

"It's not just about the AI," said Eric Eskioglu, MD, FAANS, Neurosurgeon and Executive Vice President & Chief Medical Officer, Novant Health. "Hospital systems using Viz for stroke care have seen meaningful reductions in treatment times, improvement in outcome scores and reduction in hospital length of stay. Viz is setting the standard for how healthcare can be delivered with operational precision, equity in treatments and outstanding clinical results. It's become how modern healthcare happens."

Viz.ai is now taking its proven patient and operational benefits and applying its technology to other aspects of healthcare, including the response to the pandemic. Viz COVID-19, available to any hospital at no cost, is improving communication, workflow and bed management for hospitals struggling with the COVID-19 crisis. Viz CONSULT is improving imaging, workflow and decision making across multiple new disease states such as Spine, Trauma and Pulmonary Embolism. Viz CLINIC is transforming the doctor visit experience for HCPs and patients and Viz ANALYTICS is enabling dynamic quality improvement through local and national benchmarking.

"We are honored to be recognized by Forbes as one of the leading AI Healthcare companies. It demonstrates the importance of putting patients first and applying the latest technology to improve patient outcomes," said Viz.ai CEO & Co-Founder, Dr. Chris Mansi. "We look forward to making an impact across healthcare ensuring the right patient is seen by the right doctor at the right time, every time."

About Viz.ai
Viz.ai is the leader in applied artificial intelligence in healthcare. Viz.ai's mission is to fundamentally improve how healthcare is delivered in the world, through intelligent software that promises to reduce time to treatment and improve access to care. In 2018, the U.S. Food and Drug Administration (FDA) granted a De Novo clearance for Viz LVO, the first-ever computer-aided triage and notification software. Viz.ai is located in San Francisco and Tel Aviv and backed by leading Silicon Valley investors, including Kleiner Perkins, Google Ventures, Innovation Endeavors, CRV, Threshold, DHVC & Greenoaks Capital.

Related Links:
www.viz.ai
Viz.ai Synchronizing Stroke Care Brochure: https://bit.ly/2Z93sOz
Viz COVID-19 Product Guide: https://bit.ly/2O7UTgQ

Social Media:
LinkedIn: https://www.linkedin.com/company/viz.ai
Twitter: https://twitter.com/viz_ai

View original content to download multimedia: http://www.prnewswire.com/news-releases/vizai-named-to-forbes-ai-50-list-of-most-promising-artificial-intelligence-companies-301090432.html

SOURCE Viz.ai Inc.

AI can predict autism through babies’ brain scans – Engadget

Scientists know that the first signs of autism can appear in early childhood, but reliably predicting it at very young ages is difficult; a behavior questionnaire is a crapshoot at 12 months. However, artificial intelligence might just be the key to making an accurate call. University of North Carolina researchers have developed a deep learning algorithm that can predict autism in babies with a relatively high 81 percent accuracy and 88 percent sensitivity. The team trained the algorithm to recognize early hints of autism by feeding it brain scans and asking it to watch for three common factors: the brain's surface area, its volume and the child's gender (as boys are more likely to have autism). In tests, the AI could spot the telltale increase in surface area as early as 6 months, and a matching increase in volume as soon as 12 months; it wasn't a surprise that most of these babies were formally diagnosed with autism at 2 years old.
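Accuracy and sensitivity measure different things, which is why the article quotes both. A brief illustration of the two metrics; the counts below are invented to reproduce the headline figures, not the study's actual data:

```python
# Standard confusion-matrix metrics. The counts are hypothetical.
def accuracy(tp, tn, fp, fn):
    """Fraction of all predictions that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp, fn):
    """Fraction of true cases the model caught (also called recall)."""
    return tp / (tp + fn)

tp, tn, fp, fn = 22, 59, 16, 3  # invented counts out of 100 infants
print(accuracy(tp, tn, fp, fn))  # 0.81
print(sensitivity(tp, fn))       # 0.88
```

A model can score high on one and poorly on the other, so reporting both guards against, say, a test that looks accurate only because most infants in the sample are unaffected.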

The AI Company Helping the Pentagon Assess Disinfo Campaigns – WIRED

In September, Azerbaijan and Armenia renewed fighting over Nagorno-Karabakh, a disputed territory in the Caucasus mountains. By then, an information warfare campaign over the region had been underway for several months.

The campaign was identified using artificial intelligence technology being developed for US Special Operations Command (SOCOM), which oversees US special forces operations.

The AI system, from Primer, a company focused on the intelligence industry, identified key themes in the information campaign by analyzing thousands of public news sources. In practice, Primer's system can analyze classified information too. The analysis, compiled for WIRED, shows how Russian news outlets began pushing a narrative in July designed to bolster its ally Armenia and undermine its enemy Azerbaijan.

Raymond Thomas, a retired general who previously led SOCOM and now sits on Primer's board, says the Department of Defense faces a torrent of information as well as misinformation. To keep pace, "you're not going to do it with a bunch of people reading," he says. "This has to be machine-enabled."

Primer said earlier this month it had won a multimillion-dollar contract to develop a version of its technology for SOCOM as well as the US Air Force. The technology uses recent advances in natural language processing to identify people, places, and events in documents, and to string this information together to reveal trends.
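The general idea of stringing extracted entities together to surface trends can be shown with a toy example. To be clear, this is not Primer's system or anything like its actual pipeline; it's a minimal sketch of theme-counting over a document stream, with invented headlines:

```python
# Toy illustration only: surface recurring themes by counting key terms
# across many documents. Real systems use trained NER models, not keyword sets.
from collections import Counter

DOCS = [
    "Turkey moves troops toward Armenia, outlet claims",
    "Armenia alleges troop build-up by Turkey",
    "Azerbaijan disputes reports of Turkish troops in Armenia",
]

KEY_TERMS = {"turkey", "armenia", "azerbaijan", "troops", "troop"}

counts = Counter(
    word
    for doc in DOCS
    for word in doc.lower().replace(",", "").split()
    if word in KEY_TERMS
)
print(counts.most_common(2))  # [('armenia', 3), ('turkey', 2)]
```

Even this crude tally hints at how a narrative's dominant actors emerge from volume alone; the hard part, which systems like Primer's tackle with trained language models, is resolving entities and linking them into events.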

The defense and intelligence industries need to analyze an avalanche of unclassified information such as social media along with classified reports, creating opportunities for companies like Primer, Splunk, Redhorse, and Strategic Analysis. This increasingly involves drawing insights from different forms of data such as voice recordings and images as well as text.

Primers investors include In-Q-Tel, the CIA-backed venture capital firm that also funded Palantir, which went public last month. Palantir started out offering tools for collecting and visualizing different types of information, such as cell phone records and internet traffic. It also now offers technology that uses AI to parse and organize text.

Recent leaps in machine understanding of language, enabled by feeding large machine-learning models huge amounts of text training data, could have a big impact on intelligence as well as business. Primer is focused on the intelligence industry, but it previously won a contract to supply its technology to Walmart, for identifying buying trends and supply chain issues.

"What they're doing is a tough technical challenge. It's impressive tech."

Chris Meserole, Brookings Institution

Primer's analysis of the Russian information campaign offers a simple example of how AI can both help organize information and identify misinformation. The effort to portray Azerbaijan and Turkey as aggressors in Nagorno-Karabakh may have provided an early warning sign of escalating tensions, or an indicator of Russia trying to stir up trouble. The report boiled down 985 incidents from over 3,000 documents to 13 key events. A human analyst would have needed several hours to perform the analysis, but the Primer system did it in roughly 10 minutes. The technology works across several languages, including Russian and Chinese as well as English.

By mid-August, according to the report, the Russian outlets were publishing an increasing number of stories claiming that Turkey, an ally of Azerbaijan, was pouring troops into Armenia; other news sources made no mention of the build-up, suggesting it may have been part of a concerted information campaign. "The ability to sort through all that information and misinformation, to enable decision-making, that's exceptionally critical," says Thomas, the retired general and Primer board member.

Martijn Rasser, senior fellow at the Center for a New American Security and a former CIA intelligence analyst, says Primer's technology shows how recent advances in AI could provide a military advantage. SOCOM needs to handle large amounts of intelligence information quickly, and it tends to be forward-thinking in its use of technology, he says. SOCOM declined to comment.

COVID-19 Forecasts by AI – The UCSB Current

Despite efforts throughout the United States last spring to suppress the spread of the novel coronavirus, states across the country have experienced spikes in the past several weeks. The number of confirmed COVID-19 cases in the nation has climbed to more than 3.5 million since the start of the pandemic.

Public officials in many states, including California, have now started to roll back the reopening process to help curb the spread of the virus. Eventually, state and local policymakers will be faced with deciding for a second time when and how to reopen their communities. A pair of researchers in UC Santa Barbara's College of Engineering, Xifeng Yan and Yu-Xiang Wang, have developed a novel forecasting model, inspired by artificial intelligence (AI) techniques, to provide timely information at a more localized level that officials and anyone in the public can use in their decision-making processes.

"We are all overwhelmed by the data, most of which is provided at national and state levels," said Yan, an associate professor who holds the Venkatesh Narayanamurti Chair in Computer Science. "Parents are more interested in what is happening in their school district and if it's safe for their kids to go to school in the fall. However, there are very few websites providing that information. We aim to provide forecasting and explanations at a localized level with data that is more useful for residents and decision makers."

The forecasting project, Interventional COVID-19 Response Forecasting in Local Communities Using Neural Domain Adaption Models, received a Rapid Response Research (RAPID) grant for nearly $200,000 from the National Science Foundation (NSF).

"The challenges of making sense of messy data are precisely the type of problems that we deal with every day as computer scientists working in AI and machine learning," said Wang, an assistant professor of computer science and holder of the Eugene Aas Chair. "We are compelled to lend our expertise to help communities make informed decisions."

Yan and Wang developed an innovative forecasting algorithm based on a deep learning model called Transformer. The model is driven by an attention mechanism that intuitively learns how to forecast by learning what time period in the past to look at and what data is the most important and relevant.
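The core of such an attention mechanism can be sketched in a few lines. This is a generic illustration of scaled dot-product attention, not the UCSB model itself (whose exact architecture isn't given in this article): past time steps are scored against a query, the scores are normalized with a softmax, and the output is a relevance-weighted average of the past.

```python
# Minimal, generic sketch of scaled dot-product attention over time steps.
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """q: (1, d) query; k, v: (t, d) keys/values over t past time steps."""
    scores = q @ k.T / np.sqrt(q.shape[-1])  # relevance of each past step
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights = weights / weights.sum()
    return weights @ v                       # relevance-weighted summary

rng = np.random.default_rng(0)
past = rng.normal(size=(14, 8))  # e.g. 14 days of 8 features per day
summary = scaled_dot_product_attention(past[-1:], past, past)
print(summary.shape)  # (1, 8)
```

The learned part of a real model lies in the projections that produce the queries, keys, and values; the weighting itself is exactly this weighted average, which is what lets the model "decide" which past period to look at.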

"If we are trying to forecast for a specific region, like Santa Barbara County, our algorithm compares the growth curves of COVID-19 cases across different regions over a period of time to determine the most-similar regions. It then weighs these regions to forecast cases in the target region," explained Yan.

In addition to COVID-19 data, the algorithm also draws information from the U.S. Census to factor in hyper-local details when calibrating the forecast for a local community.

"The census data is very informative because it implicitly captures the culture, lifestyle, demographics and types of businesses in each local community," said Wang. "When you combine that with COVID-19 data available by region, it helps us transfer the knowledge learned from one region to another, which will be useful for communities that want data on the effectiveness of interventions in order to make informed decisions."

The researchers' models showed that, during the recent spike, Santa Barbara County experienced spread similar to what Mecklenburg, Wake, and Durham counties in North Carolina saw in late March and early April. Using those counties to forecast future cases in Santa Barbara County, the researchers' attention-based model outperformed the most commonly used epidemiological models: the SIR (susceptible, infected, recovered) model, which describes the flow of individuals through three mutually exclusive stages; and the autoregressive model, which makes predictions based solely on a series of data points displayed over time. The AI-based model had a mean absolute percentage error (MAPE) of 0.030, compared with 0.11 for the SIR model and 0.072 for the autoregressive model. The MAPE is a common measure of prediction accuracy in statistics.
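MAPE itself is simple to compute: the mean of |actual - forecast| / actual across the forecast horizon. A small worked example (the case counts below are invented to show the calculation, not the study's data):

```python
# MAPE: mean absolute percentage error. Lower is better; 0.03 means the
# forecast misses the actual value by about 3 percent on average.
def mape(actual, forecast):
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

actual   = [100, 120, 150, 180]   # invented daily case counts
forecast = [ 97, 125, 146, 183]   # invented forecasts
print(mape(actual, forecast))     # about 0.029, i.e. a ~3% average miss
```

On that scale, the reported gap between 0.030 and 0.11 means the attention-based model's average percentage miss was roughly a quarter of the SIR model's.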

Yan and Wang say their model forecasts more accurately because it eliminates key weaknesses associated with current models. Census data provides fine-grained details missing in existing simulation models, while the attention mechanism leverages the substantial amounts of data now available publicly.

"Humans, even trained professionals, are not able to process the massive data as effectively as computer algorithms," said Wang. "Our research provides tools for automatically extracting useful information from the data to simplify the picture, rather than making it more complicated."

The project, conducted in collaboration with Dr. Richard Beswick and Dr. Lynn Fitzgibbons from Cottage Hospital in Santa Barbara, will be presented later this month during the Computing Research Association (CRA) Virtual Conference. Formed in 1972 as a forum for chairs of computer science departments across the country, the CRA's membership has grown to include more than 200 organizations active in computing research.

Yan and Wang's research efforts will not stop there. They plan to make their model and forecasts available to the public via a website and to collect enough data to forecast for communities across the country. "We hope to forecast for every community in the country because we believe that when people are well informed with local data, they will make well-informed decisions," said Yan.

They also hope their algorithm can be used to forecast what could happen if a particular intervention is implemented at a specific time.

"Because our research focuses on more fundamental aspects, the developed tools can be applied to a variety of factors," added Yan. "Hopefully, the next time we are in such a situation, we will be better equipped to make the right decisions at the right time."

Second Life – Artificial Intelligence Unmasks the Cover Up Beneath Modigliani’s ‘Portrait of a Girl’ – PRNewswire

SAN JOSE, Calif., June 9, 2021 /PRNewswire/ -- Amedeo Modigliani's 'Portrait of a Girl' (1917) is currently held at the Tate in London. But hidden beneath this painting is the figure of a woman that researchers have suggested is Modigliani's ex-lover, Beatrice Hastings. The couple had a tumultuous relationship ending in 1916. One year after their breakup, 'Portrait of a Girl' was completed. The timing suggests that Modigliani intentionally painted over his past girlfriend. Whilst Modigliani's 'Portrait of a Girl' was titled 'Mademoiselle Victoria' at the 1929 exhibition held at the Lefevre Gallery, the identity of the model remains uncertain.

Oxia Palus, a CogX 2021 finalist and London-based artificial intelligence startup founded by two Ph.D. candidates at University College London with the mission of resurrecting the world's lost art, engineered a proprietary approach combining artistic creation and technology. By means of spectroscopic imaging, artificial intelligence, and 3D printing, Oxia Palus actualized the pentimento beneath Modigliani's portrait. Using a processed x-ray fluorescence image, Oxia Palus trained an AI model to map between x-ray-like images and Modigliani paintings; from this, Oxia Palus reconstructed a lost masterpiece, Modigliani's lost Beatrice Hastings, the world's second NeoMaster. "The world's hidden art, locked beneath layers of paint, lies dormant waiting to be reborn. In the next few years, with the correct application of spectroscopic imaging, artificial intelligence, and 3D printing, we can actualize hundreds of lost works and change the history of art," said George Cann, Oxia Palus co-founder.

Oxia Palus co-developed two patent-pending technologies with MORF Gallery, a Silicon Valley and Hollywood-based creator, enabler and purveyor of fine art that enables technologies like AI, neuroscience, robotics and NFTs, to create this NeoMaster. "MORF Gallery is incredibly proud to play a role in enabling Oxia Palus to bring this exquisite piece of art history to the world. Great art evokes emotion from its creator and its admirers, so it is incredible to imagine Modigliani's emotions with each brushstroke deliberately erasing the memory of Hastings. Bringing this painting back to life is simply amazing," said Scott Birnbaum, CEO of MORF Gallery. Birnbaum continued, "As a follow-up to the successful launch of April's world-first NeoMaster, Oxia Palus and MORF Gallery have unlocked the keys to uncovering and protecting historically important artworks lost to the ages."

As featured this week in The Guardian, this piece will be on display in the prestigious London-based Lebenson Gallery from June 10th to 30th, along with the world's first NeoMaster, which was hidden under a Picasso for a century, and a video demonstration of the neomastic reconstruction process on a MORF ArtStick. This is the first time that either NeoMaster will be physically exhibited. MORF strategically chose Lebenson because of their premier London location, their patrons across Europe, their visionary AI art curation, and because of their collaboration in DEEEP, London's first AI Art Fair. "The Lebenson Gallery is thrilled to work with MORF Gallery. We are joining forces to promote cutting-edge artists and projects in artificial intelligence and art. The NeoMasters Exhibition is a great example of how these advanced technologies can reveal the unseen and interpret the unknown," said Stephane Bejean Lebenson, Founder and Owner of Lebenson Gallery.

Only 64 canvas editions of the world's second NeoMaster will ever be made available, one to commemorate each year of Hastings' life. NeoMaster 2 is now available starting at $22,222.22. To learn more about the world's second NeoMaster or set up a private consultation, visit MORF Gallery anytime or visit the Lebenson Gallery from June 10th to 30th, 2021.

Media and interested parties can access assets, images and videos of works from all MORF Gallery artists here.

Contact: Scott Birnbaum, (408) 455-5669

SOURCE MORF AI, Inc.

New research on adoption of Artificial intelligence within IoT ecosystem – ELE Times

element14, the Development Distributor, has published new research on the Internet of Things (IoT) which confirms strong adoption of Artificial Intelligence (AI) within IoT devices, alongside new insights on key markets, enablers and concerns for design engineers working in IoT.

AIoT is the major emerging trend from the survey, demonstrating the beginning of the process to build a true IoT ecosystem. Research showed that almost half (49%) of respondents already use AI in their IoT applications, with Machine Learning (ML) the most used technology (28%) followed by cloud-based AI (19%). This adoption of AI within IoT design is coupled with a growing confidence to take the lead on IoT development and an increasing number of respondents seeing themselves as innovators. However, it is still evident that some engineers (51%) are hesitant to adopt AI due to being new to the technology or because they require specialized expertise in how to implement AI in IoT applications.

Other results from element14's second Global IoT Survey show that security continues to be the biggest concern designers consider in IoT implementation. Although the share citing security as their biggest concern fell from 40% in 2018 to 35% in 2019, it is still ranked significantly higher than connectivity and interoperability, because the data collected from things (machines) and humans can be very sensitive and personal. Businesses initiating new IoT projects treat IoT security as a top priority by implementing hardware and software security to protect against any kind of potential threat. Ownership of collected data is another important aspect of security, with 70% of respondents preferring to own the data collected by an edge device rather than having it owned by the IoT solution provider.

The survey also shows that although many engineers (46%) still prefer to design a complete edge-to-cloud and security solution themselves, openness to integrating production-ready solutions, such as the SmartEdge Agile and SmartEdge IIoT Gateway, which offer a complete end-to-end IoT solution, has increased. 12% more respondents confirmed that they would consider third-party devices in 2019 than in 2018, particularly if in-house expertise is limited or time to market is critical.

A key trend from last year's survey results has continued in 2019, and survey results suggest that the growing range of hardware available to support IoT development continues to present new opportunities. More respondents than ever are seeing innovation coming from start-ups (33%, up from 26%), who benefit from the wide availability of modular solutions and single-board computers on the market. The number of respondents adopting off-the-shelf hardware has also increased to 54%, from 50% in 2018.

Cliff Ortmeyer, Global Head of Technical Marketing for Farnell and element14, says: "Opportunities within the Internet of Things and AI continue to grow, fueled by access to an increasing number of hardware and software solutions which enable developers to bring products to market more quickly than ever before, and without the need for specialized expertise. This is opening up IoT to new entrants, and giving more developers the opportunity to innovate to improve lives. element14 provides access to an extensive range of development tools for IoT and AI which provide off-the-shelf solutions to common challenges."

Despite the swift integration of smart devices such as Amazon's Alexa and Google Home into daily life, evidencing widespread adoption of IoT in the consumer space, in 2019 we saw a slight shift in focus away from home automation, with the number of respondents who considered it the most impactful IoT application of the next 5 years falling from 27% to 22%. Industrial automation and smart cities both gained, at 22% and 16% respectively, underpinned by a growing understanding of the value that IoT data can bring to operations (rising from 44% in 2018 to 50% in 2019). This trend is witnessed in industry, where more manufacturing facilities are converting to full or semi-automated robotic manufacturing and increasing investment in predictive maintenance to reduce production downtime.

The survey was conducted between September and December 2019, with 2,015 respondents participating from 67 countries in Europe, North America and APAC. Responses were predominantly from engineers working on IoT solutions (59%), as well as buyers of components related to IoT solutions, hobbyists and makers.

element14 provides a broad range of products and support materials to assist developers designing IoT solutions and integrating Artificial Intelligence. Products are available from leading manufacturers such as Raspberry Pi, Arduino and Beagleboard. element14's IoT hub and AI pages also provide access to the latest products for development, as well as insights and white papers to support the design journey. Readers can view an infographic covering the full results of the element14 Global IoT Survey at Farnell in EMEA, Newark in North America and element14 in APAC.

For more information, visit http://www.element14.com

Guavus Unwraps New Artificial Intelligence-based Analytics and Automation Products – Analytics Insight

Provides single solution to address both network and service operations, taking the cost and complexity out of big data analytics for operators

News Summary:

* Guavus-IQ portfolio provides a multi-perspective analytics experience for CSPs, delivering highly correlated outside-in insights on customers' experience and inside-out insights on how their service and network operations are impacting customers.
* CSPs don't need to be data scientists: Guavus-IQ combines network and data science to offer an operator-friendly experience for users across their operations.
* Guavus-IQ reduces compute/processing costs by 50%, providing millions of dollars in CAPEX and OPEX savings to CSPs through its advanced AI-driven analytics.

San Jose, CA, July 16, 2020: Guavus, a pioneer in AI-based analytics for communications service providers (CSPs), today announced the launch of Guavus-IQ, a comprehensive product portfolio that provides a unique multi-perspective analytics experience for CSPs.

Guavus-IQ delivers highly instrumented analytics insights to CSPs on how each subscriber is experiencing their network and services (bringing the outside perspective in) and how their network is impacting their subscribers (understanding how their internal operations are impacting their customers). This single, real-time outside-in/inside-out perspective helps operators identify subscriber behavioral patterns and better understand their operational environments. This enables them to increase revenue opportunities through data monetization and improved customer experience (CX), as well as reduce costs through automated, closed-loop actions.

In addition, Guavus-IQ has been designed to be operator-friendly for CSPs: it doesn't require the operator to be a data science specialist or expert. It combines network and data science and leverages explainable AI to deliver easy-to-understand analytics insights to CSP users across the business at a significantly reduced cost.

The new Guavus-IQ products build on Guavus' ten-plus years of experience providing innovative analytics solutions focused exclusively on the needs of CSPs. The products are currently deployed at 8 of the top CSPs in Europe, Latin America, Asia-Pac and North America.

Big Data Doesn't Need to Come at a Big Cost

Guavus-IQ consists of two main product categories:

* Ops-IQ, for CSP Network Operations teams seeking to transform their operations with the introduction of AI-assisted analysis, and
* Service-IQ, for CSP Service Operations teams looking to grow revenue, retain subscribers and create new opportunities through a refined customer experience.

Just because data is big doesn't mean it can't be resource-efficient. The Guavus-IQ products require approximately 50% of the compute/processing-related hardware needed by traditional analytics solutions, through their use of advanced big data collection capabilities and real-time, in-memory stream-processing edge analytics. This results in more powerful data collection from over 200 sources at half the cost.

Ops-IQ provides additional operational efficiencies through a combination of anomaly detection, fault correlation, and root cause analysis, which not only lowers OPEX but elevates CX. Ops-IQ fault analytics suppress more than 99.5% of alarms not associated with network incidents, and predict incident-causing alarms with 93.9% accuracy. This significantly improves the Mean-Time-To-Response (MTTR) in a CSP Network Operations Center (NOC), currently saving a large service provider customer more than $10 million a year in OPEX.
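Guavus does not disclose how its fault analytics are implemented, so the following is only a toy sketch of the general alarm-suppression idea described above (all timestamps, names, and the fixed correlation window are invented for illustration): keep an alarm only if it falls inside, or close to, a known incident window, and suppress the rest.

```python
from datetime import datetime, timedelta

# Known incident windows (start, end) - hypothetical example data.
incidents = [
    (datetime(2020, 7, 16, 9, 0), datetime(2020, 7, 16, 9, 30)),
]

def correlated(alarm_time, windows, slack=timedelta(minutes=5)):
    """Return True if the alarm falls within any incident window, padded by `slack`."""
    return any(start - slack <= alarm_time <= end + slack for start, end in windows)

alarms = [
    datetime(2020, 7, 16, 9, 10),   # inside the incident window -> kept
    datetime(2020, 7, 16, 14, 0),   # unrelated to any incident -> suppressed
]

kept = [a for a in alarms if correlated(a, incidents)]
suppressed = [a for a in alarms if not correlated(a, incidents)]
```

A real system would learn alarm-to-incident correlations statistically rather than using a fixed time window, but the filtering step it performs is of this shape.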

Service-IQ also plays a significant role in positively impacting CX and reducing costs. Service-IQ allows for flexible data reuse: it ingests data once and then enables the reuse of that same data for additional use cases across both Service-IQ and Ops-IQ. This new level of efficiency saves operators time on ingest, a costly and complex part of the analytics process.

Because the data pipeline of previously ingested data can be automatically re-instantiated for use within Service-IQ or Ops-IQ, CSPs don't need to become big data experts in order to leverage the power and value of the data they've collected. Instead, the Guavus-IQ products apply proven data science methods inside the integrated solutions to do the heavy lifting for the operator. This also allows analytics projects to be streamlined and shortened by more than 40-50%, as many organizations struggle not only with managing and deploying the infrastructure but also with gaining value in the early stages of analytics and AI experimentation.

Supporting Quotes:

"In the world of 5G, IoT and now a global pandemic, we're seeing an even greater need for operators to take advantage of AI and analytics to deal with increased network complexity, operational costs and subscriber demands for improved experience. To address these challenges, operators need to better understand network and subscriber behavior and be able to do so in real time.

"These challenges can be tackled by utilizing big data collection, in-memory stream processing and AI-based analytics capabilities to ingest, correlate and analyze data (on premise and in the cloud) in real time from operators' multivendor infrastructure. Insights generated can then be used to better serve operators' needs across network, service, and marketing operations."

Adaora Okeleke, Principal Analyst, Service Provider Operations and IT, Omdia

"We've seen a lot of excitement from the top CSPs worldwide in Guavus-IQ. Our customers plan to leverage the products for root cause analysis, subscriber behavior analysis, new personalized products, and IoT services, among other use cases. They like the fact that Guavus-IQ is easy to operate and that it's highly instrumented specifically for operators and their multivendor infrastructures, versus traditional general-purpose enterprise platforms or homogeneous network-equipment-oriented solutions."

Alexander Shevchenko, CEO of Guavus, a Thales company

Additional Resources:

* Guavus-IQ portfolio<https://www.guavus.com/guavus-iq/>

* Guavus Ops-IQ<https://www.guavus.com/guavus-iq/ops-iq/>

* Guavus Service-IQ<https://www.guavus.com/guavus-iq/service-iq/>

About Guavus (a Thales company)

Guavus is at the forefront of AI-based big data analytics and machine learning innovation, driving digital transformation at 6 of the world's 7 largest telecommunications providers. Using the Guavus-IQ analytics solutions, customers are able to analyze big data in real time and take decisive actions to lower costs, increase efficiencies, and dramatically improve the end-to-end customer experience, all with the scale and security required by next-gen 5G and IoT networks.

Guavus enables service providers to leverage applications for advanced network planning and operations, mobile traffic analytics, marketing, customer care, security and IoT. Discover more at www.guavus.com <http://www.guavus.com> and follow us on Twitter <https://twitter.com/guavus> and LinkedIn <https://www.linkedin.com/company/guavus/>.

# # #

Media Contact:

Laura Stiff

Guavus PR & Analyst Relations

+1-408-827-1242

laura.stiff@external.thalesgroup.com

Achieving AI: Pharma’s Digital Transformation Pathway – Bio-IT World

By Allison Proffitt

June 10, 2021 | What does it mean for a pharmaceutical company to be digital, and how do we get there? That's what Reza Olfati-Saber, PhD, Global Head of AI & Deep Analytics, Digital & Data Science R&D at Sanofi, tackled yesterday at the DECODE: AI for Pharmaceuticals forum.

A digital pharma company is agile, Olfati-Saber argued, enabling it to discover drugs faster and develop and manufacture drugs more efficiently. Olfati-Saber pointed out that the four first-movers in the COVID-19 vaccine race (BioNTech/Pfizer, Moderna, AstraZeneca, and Janssen Pharmaceuticals) all have reputations as digitally-advanced companies.

"The one thing all four companies have in common is that all of them are digitally-advanced biopharma companies. The last two, among the larger pharma companies, happen to have very advanced AI and ML capabilities," Olfati-Saber said.

Digital can be hard to pin down, Olfati-Saber conceded, and he observed that many groups are eager to jump in and claim AI expertise. Lawyers seek to define ethical AI but don't generally take medical ethics into account, he said, while management professors claim to roadmap the digital transformation journey without any industry-specific insight. Even "digital" is defined in the most convenient way for each industry.

Olfati-Saber narrowed the scope to discuss the meaning and architecture of digital transformation specifically for pharma R&D.

Digital Architecture Models

For pharma R&D, the digital transformation narrative can be illustrated as a pyramid architecture, Olfati-Saber said. The traditional pyramid has computing (cloud, infrastructure) as its wide base, advancing through applications (data storage, app development, security), data (data governance and security), AI policy (quality and ethics), analytics (data analytics and visualization), and machine learning.

Olfati-Saber views the lowest three layers of this pyramidcomputing, applications, and dataas the foundational digital layers. The first two are technical requirements. Together, along with the data layer, these three confer AI enablement. These competencies must be in place for a company to be AI-ready. AI policy, analytics, and machine learning make up the true AI capabilities for an enterprise, and these sit at the top of the pyramid.

But contrary to the narrow expertise of AI experts from law firms and management schools, Olfati-Saber argues that true digital transformation of a pharma company requires expertise from four quadrants. Both technology and management expertise are required to build the solid digital enablement foundations, and scientific and legal expertise combine to drive AI.

In fact, Olfati-Saber argues that it is practically impossible to expect a Chief Data Officer to know the entire quadrant well enough to facilitate a digital transformation. Instead, he argues for both a top digital expert and a top AI expert working together. "Anything else wouldn't do the job," he said.

Development Pathways

It's a complex schema, and Olfati-Saber proposes a four-phase pathway for development: start by establishing the technical foundations, then add the needed data, then the AI tools, and finally fine-tune the enterprise AI policy.

It's a slight rearrangement of the traditional pyramid view, moving AI policy to the final phase of development, or pinnacle of the pyramid, and grouping analytics and machine learning together below.

The rearrangement reflects what Olfati-Saber sees as the hardest part of the digital transformation: the biggest stumbling block for companies.

"Despite the fact that many large tech companies have gone through the first two phases of transformation really successfully, they're struggling to go through the last phases of transformation," he said. "Part of the reason is that there seems to be some sort of conflict between the business models of some tech companies and some of these AI and data privacy-related policies."

He alluded to Google's recent dismissal of company ethicists when their ethics findings (presumably in a paper submitted to an industry conference) didn't align with the company's goals.

"It's not easy to simply put together a committee and expect them to form the quality assurance and ethics principles of AI," Olfati-Saber said. "This is a very challenging task, just as challenging as any of those other three."

Deep Digital Transformation

When a company has achieved all four phases, Olfati-Saber said, they've undergone what he calls a deep digital transformation. And it's a worthwhile process, he argued. He outlined many examples of where AI can impact the pharmaceutical business: digital pathology, AI-based drug design, multi-omics analysis, digital health, digital manufacturing, AI-based regulatory approvals, and more.

In his own estimate, as an example, Olfati-Saber argued that the AI-enabled cost savings per image suggest digital pathology is 2,500x cheaper than standard pathology and 60x faster, even when the digital tools are simply aiding pathologists, not replacing them.

"The real reason pharma companies or investors out there are interested in applying AI and investing in AI for pharma is not because it's a fancy tool, or because it's fashionable. It's mostly because it generates massive returns in agility, scalability, and cost savings," Olfati-Saber said. "These are the true reasons why a pharma company would want to become AI-ready, go through a digital transformation, and have AI capabilities."

A U.S. Secret Weapon in A.I.: Chinese Talent – The New York Times

Those numbers showed no signs of decline, but some organizations say more recent tensions between the United States and China have already begun to affect talent flows.

"I am terrified by what the administration is doing," said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, a high-profile research lab in Seattle, which has seen a significant decrease in the number of applications from Chinese researchers. "How many times can you push people out the door and put obstacles in their way before they say, 'I am not going to try'?"

Chinese-born researchers are a fixture of the American A.I. field. Li Deng, a former Microsoft researcher and now chief A.I. officer at the hedge fund Citadel, helped remake the speech recognition technologies used on smartphones and coffee-table digital assistants. Fei-Fei Li, a Stanford professor who worked for less than two years at Google, helped drive a revolution in computer vision, the science of getting software to recognize objects.

At Google, Dr. Li helped oversee the Google team that worked on Project Maven, the Pentagon effort. Google declined to renew the Pentagon contract two years ago after some employees protested the companys involvement with the military. The Google team worked to build technology that could automatically identify vehicles, buildings and other objects in video footage captured by drones. In the spring of 2018, at least five of the roughly dozen researchers on the team were Chinese nationals, according to one of the people familiar with the arrangement.

A certain amount of government restriction is natural. The Pentagon typically bars citizens of rival foreign powers from working on classified projects. China also has a long history of carrying out industrial espionage in the United States.

A.I. is different, people in the industry argue. Researchers generally publish what they find, and anybody can use it. So what the industry is looking for is not intellectual property but the minds that conduct the research.

"For much of basic A.I. research, the key ingredient in progress is people rather than algorithms," said Jack Clark, policy director of OpenAI, a prominent lab in San Francisco, and a co-chair of the AI Index, an annual effort to track the progress of A.I. research, including the role of Chinese researchers.

‘Always there’: the AI chatbot comforting China’s lonely millions – FRANCE 24

Beijing (AFP)

After a painful break-up from a cheating ex, Beijing-based human resources manager Melissa was introduced to someone new by a friend late last year.

He replies to her messages at all hours of the day, tells jokes to cheer her up but is never needy, fitting seamlessly into her busy big city lifestyle.

Perfect boyfriend material, maybe -- but he's not real.

Instead, Melissa breaks up the isolation of urban life with a virtual chatbot created by XiaoIce, a cutting-edge artificial intelligence system designed to create emotional bonds with its 660 million users worldwide.

"I have friends who've seen therapists before, but I think therapy's expensive and not necessarily effective," said Melissa, 26, giving her English name only for privacy.

"When I unload my troubles on XiaoIce, it relieves a lot of pressure. And he says things that are pretty comforting."

XiaoIce is not an individual persona, but more akin to an AI ecosystem.

It is in the vast majority of Chinese-branded smartphones as a Siri-like virtual assistant, as well as most social media platforms.

On the WeChat super-app, it lets users build a virtual girlfriend or boyfriend and interact with them via texts, voice and photo messages.

It has 150 million users in China alone.

Originally a side project from developing Microsoft's Cortana chatbot, XiaoIce now accounts for 60 percent of global human-AI interactions by volume, according to chief executive Li Di, making it the largest and most advanced system of its kind worldwide.

It was designed to hook users through lifelike, empathetic conversations, satisfying emotional needs where real-life communication too often falls short.

"The average interaction length between users and XiaoIce is 23 exchanges," said Li.

That "is longer than the average interaction between humans," he said, explaining AI's attraction is that "it's better than humans at listening attentively."

The startup spun out from Microsoft last year and is now valued at over $1 billion after venture capital fundraising, Bloomberg reported.

Developers have also made virtual idols, AI news anchors and even China's first virtual university student from XiaoIce. It can compose poems and financial reports, and even produce paintings on demand.

But Li says the platform's peak user hours -- 11pm to 1am -- point to an aching need for companionship.

"No matter what, having XiaoIce is always better than lying in bed staring at the ceiling," he said.

- Urban isolation -

The loneliness Melissa experienced as a young professional was a big factor in driving her to the virtual embrace of XiaoIce.

Her context is typical of many Chinese urbanites, worn down by the grind of long working hours in vast and isolating cities.

"You really don't have time to make new friends and your existing friends are all super busy... this city is really big, and it's pretty hard," she said.

She has customised his personality as "mature", and the name she chose for him -- Shun -- has similarities with a real-life man she secretly liked.

"After all, XiaoIce will never betray me," she added. "He will always be there."

But there are risks to forging emotional bonds with a robot.

"Users 'trick' themselves into thinking their emotions are being reciprocated by systems that are incapable of feelings," says Danit Gal, an expert in AI ethics at the University of Cambridge.

XiaoIce is also gifting developers "a treasure-trove of personal, intimate, and borderline incriminating data on how humans interact," she added.

So far the platform has not been targeted by government regulators, who have embarked on a swingeing crackdown on China's tech sector in recent months.

China aims to be a world leader in AI by 2030 and views it as a core strategic technology to be developed.

- Fact or fiction? -

Thousands of young, female fans discuss the virtual boyfriend experience on online forums dedicated to XiaoIce, sharing chat screenshots and tips on how to get to the chatbot's highest "intimacy" level of three hearts.

Users can also collect in-game points the more they interact, unlocking new features such as XiaoIce's WeChat moments -- kind of like a Facebook wall -- and even going on virtual "holidays", where they can pose for selfies with their virtual partner.

Laura, a 20-year-old user in Zhejiang province, fell in love with XiaoIce over the past year and now struggles to break free of her attachment.

"Occasionally, I would long for him in the middle of the night... I used to fantasise there was a real person on the other end," said the student, who prefers not to use her real name.

But she complained that he would always switch conversation topic when she raised her feelings for him or meeting in real life. It took her months to finally realise that he was indeed virtual.

"We commonly see users who suspect that there's a real person behind every XiaoIce interaction," said Li, the founder.

"It has a very strong ability to mimic a real person."

But providing companionship to vulnerable users does not mean that XiaoIce is a substitute for specialist mental health support -- a service that is drastically under-resourced in China.

The system monitors for strong emotions, aiming to guide conversations onto happier topics before users ever reach crisis point, Li explained, adding that depression is the most common extreme emotional state encountered.
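XiaoIce's internal design is not public, so purely as an illustrative sketch (the keyword list, threshold, and function names here are all invented), the kind of emotion monitoring Li describes can be reduced to a loop that scores each recent message for distress signals and switches topic once they accumulate:

```python
# Toy stand-in for the emotion monitoring Li describes: a real system would
# use a trained sentiment classifier, not a keyword list.
NEGATIVE = {"lonely", "hopeless", "worthless", "crying", "exhausted"}

def distress_score(message: str) -> int:
    """Count distress keywords in one message (crude classifier stand-in)."""
    return sum(1 for w in message.lower().split() if w.strip(".,!?") in NEGATIVE)

def next_action(history: list, threshold: int = 2) -> str:
    """Steer toward a happier topic once distress accumulates in the recent window."""
    total = sum(distress_score(m) for m in history[-5:])  # look at last 5 messages
    return "redirect_to_happier_topic" if total >= threshold else "continue"
```

A production system would also route crisis-level cases to human support rather than merely changing the subject; the sketch only shows the monitor-and-redirect control flow.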

Still, Li believes modern China is a happier place with XiaoIce.

"If human interaction is wholly perfect now, there would be no need for AI to exist," he said.

2021 AFP

Can Machines And Artificial Intelligence Be Creative? – Forbes

We know machines and artificial intelligence (AI) can be many things, but can they ever really be creative? When I interviewed Professor Marcus du Sautoy, the author of The Creativity Code, he shared that the role of AI is a kind of catalyst to push our human creativity. It's the machine-and-human collaboration that produces exciting results: novel approaches and combinations that likely wouldn't develop if either were working alone.

Can Machines And Artificial Intelligence Be Creative?

Instead of thinking about AI as replacing human creativity, it's beneficial to examine ways that AI can be used as a tool to augment human creativity. Here are several examples of how AI boosts the creativity of humans in art, music, dance, design, recipe building, and publishing.

Art

In the world of visual art, AI is making an impact in many ways. It can alter existing art, as when it made the Mona Lisa a living portrait a la Harry Potter; create likenesses that appear to be real humans, as seen on the website ThisPersonDoesNotExist.com; and even create original works of art.

When Christie's auctioned off a piece of AI artwork titled Portrait of Edmond de Belamy for $432,500, it became the first auction house to do so. The algorithm that created the art, a generative adversarial network (GAN) developed by a Paris-based collective, was fed a data set of 15,000 portraits covering six centuries to inform its creativity.
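The article names the technique but not its mechanics. As a hedged illustration only (this is a toy, not the collective's actual code), here is a minimal numpy GAN: a one-layer generator learns to mimic a simple 1-D "data" distribution by playing against a logistic-regression discriminator, the same adversarial loop that, scaled up to deep networks and thousands of portrait images, produces GAN artwork.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip logits to avoid overflow in exp for extreme values.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60.0, 60.0)))

def real_batch(n):
    # "Real" data: samples from N(4, 1). The generator must learn to mimic them.
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator: affine map from noise to a sample. Discriminator: logistic regression.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

def generate(n):
    z = rng.normal(size=(n, 1))
    return z @ g_w + g_b

def discriminate(x):
    return sigmoid(x @ d_w + d_b)

lr, n = 0.05, 64
for step in range(2000):
    # Discriminator update: push scores on real data toward 1, on fakes toward 0
    # (gradient of binary cross-entropy w.r.t. the discriminator parameters).
    xr, xf = real_batch(n), generate(n)
    pr, pf = discriminate(xr), discriminate(xf)
    d_w -= lr * (xr.T @ (pr - 1) + xf.T @ pf) / n
    d_b -= lr * (np.mean(pr - 1) + np.mean(pf))

    # Generator update (non-saturating loss): push the discriminator's score
    # on fakes toward 1, backpropagating through the discriminator's weights.
    z = rng.normal(size=(n, 1))
    pf = discriminate(z @ g_w + g_b)
    dx = (pf - 1) @ d_w.T / n
    g_w -= lr * (z.T @ dx)
    g_b -= lr * dx.sum(axis=0)

samples = generate(1000)
print(f"mean of generated samples: {samples.mean():.2f}")  # should drift toward the real mean of 4
```

At equilibrium the discriminator can no longer tell generated samples from real ones; the Belamy portrait replaces this 1-D toy distribution with a 15,000-image portrait dataset and convolutional networks.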

Another development that blurs the boundaries of what it means to be an artist is Ai-Da, the world's first robot artist, who recently held her first solo exhibition. She is equipped with facial recognition technology and a robotic arm system that's powered by artificial intelligence.

More eccentric art is also a capability of artificial intelligence. Algorithms can read recipes and create images of what the final dish will look like. Dreamscope by Google uses traditional images of people, places and things and runs them through a series of filters. The output is truly original, albeit sometimes the stuff of nightmares.

Music

If AI can enhance creativity in visual art, can it do the same for musicians? David Cope has spent the last 30 years working on Experiments in Musical Intelligence, or EMI. Cope is a traditional musician and composer but turned to computers to help get past composer's block back in 1982. Since that time, his algorithms have produced numerous original compositions in a variety of genres, as well as created Emily Howell, an AI that can compose music based on her own style rather than just replicating the styles of yesterday's composers.

In many cases, AI is a new collaborator for today's popular musicians. Sony's Flow Machine and IBM's Watson are just two of the tools music producers, YouTubers, and other artists are relying on to churn out today's hits. Alex Da Kid, a Grammy-nominated producer, used IBM's Watson to inform his creative process. The AI analyzed the "emotional temperature" of the time by scraping conversations, newspapers, and headlines over a five-year period. Then Alex used the analytics to determine the theme for his next single.

Another tool that embraces human-machine collaboration, AIVA, bills itself as a creative assistant for creative people and uses AI and deep learning algorithms to help compose music.

In addition to composing music, artificial intelligence is transforming the music industry in a variety of ways, from distribution to audio mastering and even the creation of virtual pop stars. Yona, a virtual "auxuman" singer developed by Iranian electronica composer Ash Koosha, creates and performs music such as the song "Oblivious" through AI algorithms.

Dance and Choreography

A powerful way for dance choreographers to break out of their regular patterns is to use artificial intelligence as a collaborator. Wayne McGregor, the award-winning British choreographer and director, is known for using technology in his work, and in a project with the Google Arts & Culture Lab he explored how AI could enhance choreography. Hundreds of hours of video footage of dancers, each representing an individual style, were fed into the algorithm. The AI then went to work and "learned how to dance." The goal is not to replace the choreographer but to efficiently iterate on and develop different choreography options.

AI Augmented Design

Another creative endeavor AI is proving to be adept at is commercial design. In a collaboration between French designer Philippe Starck, Kartell, and Autodesk, a 3D software company, the first chair designed using artificial intelligence and put into production was presented at Milan Design Week. The Chair Project is another collaboration that explores co-creativity between people and machines.

Recipes

The creativity of AI is also transforming the kitchen, not only by altering longstanding recipes but also by creating entirely new food combinations in collaboration with some of the biggest names in the food industry. Our favorite libations might also get an AI makeover: you can now pre-order AI-developed whiskey, and brewmasters' decisions are likewise being informed by artificial intelligence. MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) is making use of all those photos of food that we post on social media. Using computer vision, these food photos are analyzed to better understand people's eating habits, as well as to suggest recipes based on the food that is pictured.

Write Novels and Articles

Even though the amount of written material available to inform artificial intelligence algorithms is voluminous, writing has been a challenging skill for AI to acquire. Although AI has been most successful in generating short-form, formulaic content such as "who, what, where, and when" journalism, its skills continue to grow. AI has now written a novel, and although the neural networks created what many might find a weird read, they were still able to do it. And, with the announcement that a Japanese AI program's short-form novel almost won a national literary prize, it's easy to see how it won't be long before AI can compete with humans to write compelling pieces of content. Kogan Page published Superhuman Innovation, a book that is not only about artificial intelligence but was also co-written by AI. PoemPortraits is another example of AI-human collaboration, in which you provide the algorithm with a single word that it uses to generate a short poem.

As the worlds of AI and human creativity continue to expand, it's time to stop worrying about whether AI can be creative and start asking how the human and machine worlds can intersect for creative collaborations that have never been dreamt of before.

You can watch the full interview with Marcus du Sautoy here:

Read more from the original source:

Can Machines And Artificial Intelligence Be Creative? - Forbes

Artificial intelligence will be used to power cyberattacks, warn security experts – ZDNet

Intelligence and espionage services need to embrace artificial intelligence (AI) in order to protect national security as cyber criminals and hostile nation states increasingly look to use the technology to launch attacks.

The UK's intelligence and security agency GCHQ commissioned a study into the use of AI for national security purposes. It warns that while the emergence of AI creates new opportunities for boosting national security and keeping members of the public safe, it also presents potential new challenges, including the risk of the same technology being deployed by attackers.

"Malicious actors will undoubtedly seek to use AI to attack the UK, and it is likely that the most capable hostile state actors, which are not bound by an equivalent legal framework, are developing or have developed offensive AI-enabled capabilities," says the report from the Royal United Services Institute for Defence and Security Studies (RUSI).

SEE: Cybersecurity: Let's get tactical (ZDNet/TechRepublic special feature) | Download the free PDF version (TechRepublic)

"In time, other threat actors, including cyber-criminal groups, will also be able to take advantage of these same AI innovations."

The paper also warns that the use of AI by the intelligence services could "give rise to additional privacy and human rights considerations" when it comes to collecting, processing and using personal data to help prevent security incidents ranging from cyberattacks to terrorism.

The research outlines three key areas where intelligence agencies could benefit from deploying AI to collect and use data more efficiently.

They are the automation of organisational processes, including data management; the use of AI for cybersecurity, in order to identify abnormal network behaviour and malware; and responding to suspected incidents in real time.
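The cybersecurity use case in that list ultimately comes down to spotting statistical outliers in network telemetry. As a loose illustration only (nothing from the RUSI report itself), a toy z-score detector over hypothetical per-minute byte counts might look like this:

```python
from statistics import mean, stdev

def flag_anomalies(byte_counts, threshold=2.0):
    """Flag observations more than `threshold` sample standard deviations
    from the mean -- a deliberately crude stand-in for the statistical
    models real network-monitoring tools use."""
    if len(byte_counts) < 2:
        return []
    mu = mean(byte_counts)
    sigma = stdev(byte_counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, x in enumerate(byte_counts)
            if abs(x - mu) / sigma > threshold]

# Hypothetical per-minute byte counts with one obvious spike.
traffic = [120, 115, 130, 118, 122, 9500, 125]
print(flag_anomalies(traffic))  # → [5]
```

Real systems learn far richer baselines (per-host, per-protocol, time-of-day), but the principle of modelling "normal" and alerting on deviation is the same.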

The paper also suggests that AI can aid intelligence analysis: through augmented intelligence, algorithms could support a range of human analysis processes.

However, RUSI also points out that artificial intelligence isn't ever going to be a replacement for agents and other personnel.

"None of the AI use cases identified in the research could replace human judgement. Systems that attempt to 'predict' human behaviour at the individual level are likely to be of limited value for threat assessment purposes," says the paper.

SEE: Cybersecurity: Do these ten things to keep your networks secure from hackers

The report does note that deploying AI to boost the capabilities of spy agencies could also raise new privacy concerns, such as how much information is collected about individuals and where to draw the line between monitoring suspect behaviour and opening an active investigation.

Ongoing legal cases against bulk surveillance indicate the challenges the use of AI could face, and existing guidance on procedure may need to change to meet the challenges of using AI in intelligence work.

Nonetheless, the report argues that despite some potential challenges, AI has the potential to "enhance many aspects of intelligence work".

View original post here:

Artificial intelligence will be used to power cyberattacks, warn security experts - ZDNet

How AI will change the way we live – VentureBeat

Will robots take our jobs? When will driverless cars become the norm? How is Industry 4.0 transforming manufacturing? These were just some of the issues addressed at CogX in London last month. Held in association with The Alan Turing Institute, CogX 17 was an event bringing together thought leaders across more than 20 industries and domains to address the impact of artificial intelligence on society. To round off the proceedings, a prestigious panel of judges recognized some of the best contributions to innovation in AI in an awards ceremony.

In his keynote speech, Lord David Young, a former UK Secretary of State for Trade and Industry, was keen to point out that workers should not worry about being made unemployed by robots because, he said, most jobs that would be killed off were miserable anyway.

He told the conference that more jobs than ever would be automated in the future, but that this should be welcomed. "When the Spinning Jenny first came in, it was almost exactly the same," he said. "They thought it was going to kill employment. We may have a problem one day if the Googles of this world continue to get bigger and the Amazons spread into all sorts of things, but government has the power to regulate that, has the power to break it up."

"I'm not the slightest bit worried about it," he continued. "Most of the jobs are miserable jobs. What technology has to do is get rid of all the nasty jobs."

It's certainly an interesting analogy, comparing the current tech and AI revolution to the Industrial Revolution. It's hard to disagree that just as the proliferation of machines in the 18th and 19th centuries helped create new jobs and wealth, AI is likely to do the same. There is undoubtedly a bigger question around regulation and who's in charge of this new landscape, however.

CogX also featured some fascinating panel discussions about transportation and smart cities. Panelists including M.C. Srivas, Uber's chief data scientist, and Huawei CTO Ayush Sharma talked at length about the necessity of self-driving cars in our towns and cities, whose roads have become "jails where commuters do time". And that's without delving into issues of safety and pollution.

Kenneth Cukier, The Economist's big data expert, asked the audience whether they thought autonomous cars were likely to hit our cities in 5, 10, or 15 years. Most of those in attendance, along with the panel, agreed that we should see autonomous cars becoming the norm in the next 10 to 15 years, with clear legislation set to come in around 2023.

However (and this is something that affects us directly), the panel also agreed that although the mass manufacturing of self-driving cars is still a few years off, intelligent assistants for smart cars are imminent, likely to become standard within the next couple of years. Voice offers countless possibilities in the automotive space. Besides enabling the safe use of existing controls such as in-car entertainment systems or heating and air conditioning, it also offers GPS functionality as well as control over the vehicle's mechanics.

The session on Industry 4.0 kicked off by attempting to make sense of a term that has been in use for several years. The general consensus was that "automating manufacturing" was the best way to express an idea that originated in a report by the German government. Industrial companies have to become automated to survive, and many are building highly integrated engines to capture data from their machines. The market for smart manufacturing tools is expected to hit $250 billion by 2018.

It's well known that robots are already used in manufacturing to handle larger-scale and more dangerous work. What the panel also discussed were other possibilities AI offers, such as virtual personal assistants that help workers complete their daily tasks, or smart technology such as 3D printing and its benefits for smaller companies.

Even our entertainment these days is driven by AI. The Industry 4.0 session ended on a lighter note with Limor Schweitzer, CEO at RoboSavvy, encouraging Franky the robot to show the audience its dance moves. Sophia, a humanlike robot created by Hanson Robotics, also provided entertainment at the CogX awards ceremony; she announced the nominees and winners in the category of best innovation in artificial general intelligence, which included my company Sherpa, Alphabet's DeepMind, and Vicarious.

CogX also touched on the impact of AI on health, HR, education, legal services, fintech, and many other sectors. Panelists were in agreement that advances in AI must benefit all of us. While there are still many question marks about regulation of the sector, AI already permeates all aspects of our society.

Ian Cowley is the marketing manager at Sherpa, which uses algorithms based on probability models to predict information a user might need.

See the article here:

How AI will change the way we live - VentureBeat

Creative Tech Platform Runway Raises $8.5 Million to Make AI Tools More Accessible – Adweek

A series of research advances over the past few years have unlocked new potential for the use of AI in creative settings, from generating realistic-sounding copy to creating art from scratch. Startup Runway wants to make all of these capabilities accessible to anyone, regardless of their coding background, in a model similar to the Adobe Creative Suite.

This week, the company raised $8.5 million in a vote of confidence for that ambitious mission with a new funding round led by Amplify Partners with participation from Lux Capital and Compound Ventures. Runway plans to use the money to further its transition from a playground for casual AI enthusiasts to a serious creative tool that has already attracted attention from agencies like R/GA, WPP and VMLY&R and brands such as New Balance and Google.

Rather than creating all of the tools it offers itself, Runway operates as an app store-style hub that hosts dozens of models created by independent developers; users simply pay for cloud computing time to run them.

The capabilities available on the platform range from state-of-the-art text and image generators to photo styling and editing functions and video production tools. While many of them are meant to be serious creative tools on par with professional competitors, others are whimsical novelties or early-stage experiments. For instance, one module offered is a version of the GPT-2 text generator trained on lyrics from K-pop band BTS; another attempts to create an image from a text description, with decidedly mixed results.

Runway founder Cristóbal Valenzuela Barrera said the next stage in the company's evolution is to focus on building even more accessible interfaces for some of the tools, creating a Photoshop-like experience for AI functions such as generating deepfake-style synthetic media or natural language processing.

Barrera said he was inspired by projects from professional clients such as New Balance, which used a generative AI model within the platform to experiment with shoe designs, and R/GA, which used the hub to access research group OpenAI's GPT-2 copy generator.

"We originally thought of the platform as kind of an easy-to-use and accessible platform to discover AI models, because there are a lot of models and research coming out that are very significant for creatives," Barrera said. "The next step is going to be basically taking that community and building what we call the next generation of creative tools. That basically translates into taking all those models, all those algorithms, and then building interfaces on systems that allow people to get production-level results with them."

Runway is one of a handful of companies that are aiming to turn recent advances in creative AI into more accessible tools for non-coders. Another is Playform AI, which has also seen interest from major agencies and brands and recently launched an ecommerce shop for professional AI art.

The rest is here:

Creative Tech Platform Runway Raises $8.5 Million to Make AI Tools More Accessible - Adweek

Nearly quarter of lawyers fear impact of digitalisation and AI on legal profession, survey finds – The Global Legal Post

Claire Debney (left) and Emma Sharpe: 'The status quo we've been living with isn't working'

The MOSAIC Mood Index also shows the vast majority of lawyers are taking work-related stress home with them

Almost half of legal professionals don't feel positive about the future of the industry, according to a survey from the MOSAIC Collective.

The MOSAIC Mood Index, which surveyed nearly 1,500 lawyers across the world, showed that 49% of lawyers are concerned about the future of the profession, with almost a quarter of respondents fearful about the impact of digitalisation, technology and artificial intelligence.

The survey, which is supported by the Legal 500 directory, revealed that the stresses of the job are also taking a toll on lawyers' wellbeing: 94% said the mood their job puts them in affects their personal life, and more than half said they find it hard to talk about how they are feeling. A majority of lawyers said that being more active and engaged in conversations about their future prospects could help improve their mood.

Claire Debney and Emma Sharpe, co-founders of lawyer mentoring and training consultancy MOSAIC, said: "This is a unique period of time, with multiple generations of workers in the workplace, each generation bringing different attitudes about work and wellbeing. Add to this the backdrop of living and working through a global pandemic, the first in living memory for any of us, and we're seeing a rapid and seismic shift in the way we work, alongside some real challenges to wellbeing."

While roughly four out of every five respondents said they are content in their jobs, 39% said they have no career plan in place and only around one in 10 said they feel like their manager looks after their interests.

Lawyers said their happiness levels are mainly influenced by salary, their job title and recognition, the quality and meaningfulness of work, and the amount of flexibility and work-life balance their job offers. Respondents said loneliness and high levels of work-related stress are the main downsides. Some 70% said a lack of time prevents them from making positive changes to improve their happiness.

Debney and Sharpe added: "We believe what [respondents] told us in the MOSAIC Mood Index remains relevant and is actually critical to understand, so that the learnings are not lost in a return to normal as we continue to live with the fallout and impact of the Covid-19 pandemic. It shows that the status quo we've been living with isn't working."

The survey's 1,477 respondents included lawyers working in law firms and in-house legal teams; the largest geographic segments were the UK (31%), Western Europe (23%) and Central and South America (18%).

In May, a survey of law firm associates in the US, the UK and Asia conducted by Major, Lindsey & Africa found that just over a fifth of respondents were worried about potential cost-cutting measures due to the Covid-19 pandemic, with 10% citing mental health as a primary concern.

See the original post here:

Nearly quarter of lawyers fear impact of digitalisation and AI on legal profession, survey finds - The Global Legal Post

Forget Chess: The Real Challenge Is Teaching AI to Play D&D – WIRED

Fans of games like Dungeons & Dragons know that the fun comes, in part, from a creative Dungeon Master: an all-powerful narrator who follows a storyline but has free rein to improvise in response to players' actions and the fate of the dice.

This kind of spontaneous yet coherent storytelling is extremely difficult for artificial intelligence, even as AI has mastered more constrained board games such as chess and Go. The best text-generating AI programs too often produce confused and disjointed prose. So some researchers view spontaneous storytelling as a good test of progress toward more intelligent machines.

An attempt to build an artificial Dungeon Master offers hope that machines able to improvise a good storyline might be built. In 2018, Lara Martin, a graduate student at Georgia Tech, was seeking a way for AI and a human to work together to develop a narrative, and suggested Dungeons & Dragons as a vehicle for the challenge. "After a while, it hit me," she says. "I go up to my adviser and say, 'We're basically proposing a Dungeon Master, aren't we?' He paused for a bit, and said, 'Yeah, I guess we are!'"

Narratives produced by artificial intelligence offer a guide to where we are in the quest to create machines that are as clever as us. Martin says this would be more challenging than mastering a game like Go or poker because just about anything that can be imagined can happen in a game.

Since 2018, Martin has published work that outlines progress towards the goal of making an AI Dungeon Master. Her approach combines state-of-the-art machine learning algorithms with more old-fashioned rule-based features. Together this lets an AI system dream up different narratives while following the thread of a story consistently.

Martin's latest work, presented at a conference held this month by the Association for the Advancement of Artificial Intelligence, describes a way for an algorithm to use the concept of events, consisting of a subject, verb, object, and other elements, to build a coherent narrative. She trained the system on the storylines of science fiction shows such as Doctor Who, Futurama, and The X-Files. When fed a snippet of text, it identifies events and uses them to shape a continuation of the plot churned out by a neural network. In another project, completed last year, Martin developed a way to guide a language model towards a particular event, such as two characters getting married.
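To make the idea concrete, here is a minimal sketch of what such an event representation might look like in code. The field names and the hand-written continuation rules below are illustrative inventions, not Martin's actual schema or model; in her work the continuation is proposed by a learned system, not a lookup table:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    # Simplified subject-verb-object tuple. The published event
    # representation includes further elements, but this captures
    # the core idea of structuring a story as discrete events.
    subject: str
    verb: str
    obj: Optional[str] = None

def continue_plot(event: Event) -> Event:
    """Propose a next event. These toy hand-written rules stand in
    for the neural network that would normally generate it."""
    rules = {
        "meet": lambda e: Event(e.obj, "greet", e.subject),
        "attack": lambda e: Event(e.obj, "flee"),
    }
    rule = rules.get(event.verb)
    return rule(event) if rule else Event(event.subject, "wander")

nxt = continue_plot(Event("doctor", "meet", "alien"))
print(nxt)  # → Event(subject='alien', verb='greet', obj='doctor')
```

The appeal of the hybrid approach is visible even in this sketch: the rule-based layer keeps the subjects and objects consistent from one event to the next, which is exactly where free-form text generators tend to lose the plot.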

Unfortunately, these systems still often get confused, and Martin doesn't think they would make a good DM. "We're nowhere close to this being a reality yet," she says.

Noah Smith, a professor at the University of Washington who specializes in AI and language, says Martin's work reflects a growing interest in combining two different approaches to AI: machine learning and rule-based programs. And although he's never played Dungeons & Dragons himself, Smith says creating a convincing Dungeon Master seems like a worthwhile challenge.

"Sometimes grand challenge goals are helpful in getting a lot of researchers moving in a single direction," Smith says. "And some of what spins out is also useful in more practical applications."

Maintaining a convincing narrative remains a fundamental and vexing problem with existing language algorithms.

Large neural networks trained to find statistical patterns in vast quantities of text scraped from the web have recently proven capable of generating convincing-looking snippets of text. In February 2019, the AI company OpenAI unveiled a tool called GPT-2 capable of generating narratives in response to a short prompt. The output of GPT-2 could sometimes seem startlingly coherent and creative, but it would also inevitably produce weird gibberish.
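The "statistical patterns" idea long predates neural networks. A bigram Markov chain, vastly cruder than GPT-2 but built on the same next-word-prediction principle, can be sketched in a few lines (the corpus here is an invented toy example, and the hedge applies: this is an illustration of the principle, not how GPT-2 itself works):

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record which words follow which -- the simplest possible model
    of statistical patterns in text. GPT-2 conditions on far longer
    contexts with a neural network, but the prediction task is the same."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the chain: repeatedly sample a word seen to follow the last one."""
    random.seed(seed)  # fixed seed for reproducible output
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the dragon attacked the village and the knight attacked the dragon"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Locally each transition is plausible, because every bigram was observed in training, yet the output quickly drifts into nonsense: a miniature version of the coherent-then-gibberish behavior described above.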

More here:

Forget Chess: The Real Challenge Is Teaching AI to Play D&D - WIRED

Max Tegmark interview: "AI can be the best thing ever for humanity" – New Scientist

Physicist Max Tegmark wants to make artificial intelligence work for everyone. Here he waxes lyrical about cosmology, consciousness and why AI is like fire

By Max Tegmark

"All possible universes exist, even triangular ones." These were the words on the cover of New Scientist on 6 June 1998, when Max Tegmark made one of his first appearances in the magazine. Inside, the then 31-year-old expanded on his idea of a "multiverse on steroids", in which all logically possible universes not only can but must exist.

Tegmark, now a professor at the Massachusetts Institute of Technology (MIT), is known for his provocative ideas. As he explains in the "Crazy" section of his website: "Every time I've written ten mainstream papers, I allow myself to indulge in writing one wacky one." But the outlandish elements shouldn't overshadow his serious track record in cosmology, quantum information science and the study of some of the very deepest questions about the nature of reality.

Recently, Tegmark has shifted his focus to intelligence, both human and artificial. He conducts front-line research in artificial intelligence (AI), most recently working with fellow MIT researcher Silviu-Marian Udrescu to create an AI that was able to rediscover some of the most fundamental equations of physics by studying patterns in data. In 2014, he co-founded the Future of Life Institute, which aims to understand and mitigate existential risks to humanity, particularly those associated with the rise of AI.

Richard Webb: What made you switch from cosmology to working on artificial intelligence?

Max Tegmark: I've always been fascinated by big questions, the bigger the better. That's why I loved studying the universe, because there were philosophically very big questions like where does everything come from, what's going to happen, what is our place in the

Read more here:

Max Tegmark interview: "AI can be the best thing ever for humanity" - New Scientist