Cleveland Clinic and IBM Begin Installation of IBM Quantum System One – Cleveland Clinic Newsroom

Cleveland Clinic and IBM have begun deployment of the first private-sector, onsite, IBM-managed quantum computer in the United States. The IBM Quantum System One will be located on Cleveland Clinic's main campus in Cleveland.

The first quantum computer in healthcare, anticipated to be completed in early 2023, is a key part of the two organizations' 10-year partnership aimed at fundamentally advancing the pace of biomedical research through high-performance computing. Announced in 2021, the Cleveland Clinic-IBM Discovery Accelerator is a joint center that leverages Cleveland Clinic's medical expertise with the technology expertise of IBM, including its leadership in quantum computing.

"The current pace of scientific discovery is unacceptably slow, while our research needs are growing exponentially," said Lara Jehi, M.D., Cleveland Clinic's Chief Research Information Officer. "We cannot afford to continue to spend a decade or more going from a research idea in a lab to therapies on the market. Quantum offers a future to transform this pace, particularly in drug discovery and machine learning."

"A step change in the way we solve scientific problems is on the horizon," said Ruoyi Zhou, Ph.D., Director of the IBM Research Cleveland Clinic Partnership. "At IBM, we're more motivated than ever to create with Cleveland Clinic and others lasting communities of discovery and harness the power of quantum computing, AI and hybrid cloud to usher in a new era of accelerated discovery in healthcare and life sciences."

The Discovery Accelerator at Cleveland Clinic draws upon a variety of IBM's latest advancements in high-performance computing, including:

Lara Jehi, M.D., and Ruoyi Zhou, Ph.D., at the site of the IBM Quantum System One on Cleveland Clinic's main campus. (Courtesy: Cleveland Clinic/IBM)

The Discovery Accelerator also serves as the technology foundation for Cleveland Clinic's Global Center for Pathogen Research & Human Health, part of the Cleveland Innovation District. The center, supported by a $500 million investment from the State of Ohio, JobsOhio and Cleveland Clinic, brings together a team focused on studying, preparing for and protecting against emerging pathogens and virus-related diseases. Through the Discovery Accelerator, researchers are leveraging advanced computational technology to expedite critical research into treatments and vaccines.

Together, the teams have already begun several collaborative projects that benefit from the new computational power. The Discovery Accelerator projects include a research study developing a quantum computing method to screen and optimize drugs targeted to specific proteins; improving a prediction model for cardiovascular risk following non-cardiac surgery; and using artificial intelligence to search genome sequencing findings and large drug-target databases to find effective, existing drugs that could help patients with Alzheimer's and other diseases.

A significant part of the collaboration is a focus on educating the workforce of the future and creating jobs to grow the economy. An innovative educational curriculum has been designed for participants from high school to professional level, offering training and certification programs in data science, machine learning and quantum computing to build the skilled workforce needed for cutting-edge computational research of the future.

Read more here:
Cleveland Clinic and IBM Begin Installation of IBM Quantum System One - Cleveland Clinic Newsroom

CEO Jack Hidary on SandboxAQ’s Ambitions and Near-term Milestones – HPCwire

Spun out from Google last March, SandboxAQ is a fascinating, well-funded start-up targeting the intersection of AI and quantum technology. "As the world enters the third quantum revolution, AI + Quantum software will address significant business and scientific challenges," is the company's broad self-described mission. Part software company, part investor, SandboxAQ foresees a blended classical computing-quantum computing landscape with AI infused throughout.

Its developing product portfolio comprises enterprise software for assessing and managing cryptography/data security in the so-called post-quantum era. NIST, of course, released its first official post-quantum algorithms in July, and SandboxAQ is one of 12 companies selected to participate in its new project, Migration to Post-Quantum Cryptography, to build and commercialize tools. SandboxAQ's AQ Analyzer product, says the company, is already available and being used by a few marquee customers.

Then there's SandboxAQ's Strategic Investment Program, announced in August, which acquires or invests in technology companies of interest. So far, it has acquired one company (Cryptosense) and invested in two others (evolutionQ and Qunnect).

Last week, HPCwire talked with SandboxAQ CEO Jack Hidary about the company's products and strategy. One has the sense that SandboxAQ's aspirations are broad, and with nine-figure funding, it has the wherewithal to pivot or expand. The A in the name stands for AI and the Q stands for quantum. One area not on the current agenda: building a quantum computer.

"We want to sit above that layer. All these [qubit] technologies, ion trap, NV center (nitrogen vacancy center), neutral atoms, superconducting, photonic, are very interesting, and we encourage and mentor a lot of these companies who are quantum computing hardware companies. But we are not going to be building one, because we really see our value as a layer on top of those computing [blocks]," said Hidary. Google, of course, has another group working on quantum hardware.

Hidary joined Google in 2016 as Sandbox group director. A self-described serial entrepreneur, Hidary's varied experience includes founding EarthWeb, serving as a trustee of the XPrize Foundation, and running for mayor of New York City in 2013. While at Google Sandbox, he wrote a textbook, Quantum Computing: An Applied Approach.

"I was recruited in to start a new division to focus on the use of AI, and ultimately also quantum, in solving really hard problems in the world. We realized that we needed to be multi-platform and focus on all the clouds and to do [other] kinds of stuff, so we ended up spinning out earlier this year," said Hidary.

"Eric Schmidt joined us about three and a half years ago as he wrapped up his chairmanship at Alphabet (Google's parent company). He got really into what we're doing, looking at the impact that scaled computation can have both on the AI side and the quantum side. He became chairman of SandboxAQ. I became CEO. We've other backers like Marc Benioff from Salesforce and T. Rowe Price and Guggenheim, who are very long-term investors. What you'll notice here that's interesting is we don't have short-term VCs. We have really long-term investors who are here for 10 to 15 years."

The immediate focus is on post-quantum cryptography tools delivered mostly via a SaaS model. By now we're all familiar with the threat that fault-tolerant quantum computers will be able to crack conventionally encrypted (RSA) data using Shor's algorithm. While fault-tolerant quantum computers are still many years away, the National Institute of Standards and Technology (NIST) and others, including SandboxAQ, have warned against store-now/decrypt-later attacks. (See the HPCwire article, "The Race to Ensure Post Quantum Data Security.")
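For a sense of why RSA is exposed, recall that its security rests on the hardness of factoring the public modulus. The toy sketch below, plain classical Python and emphatically not Shor's algorithm, factors a tiny semiprime by brute force; the point is that the same task on a 2048-bit modulus is hopeless classically but would become tractable on a fault-tolerant quantum computer running Shor's algorithm. The function name and numbers are purely illustrative.

```python
# Toy illustration: recovering an RSA private key amounts to factoring
# the public modulus n = p * q. Brute force works only for tiny n;
# Shor's algorithm would make large n tractable on a quantum computer.

def trial_division(n: int) -> tuple[int, int]:
    """Factor a semiprime n by brute force (exponential in the bit length)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("n is prime")

# A 16-bit "modulus" falls instantly; a real 2048-bit one would not.
p, q = trial_division(251 * 241)
print(p, q)  # -> 241 251
```

The asymmetry, trivial for toy sizes and infeasible at real key sizes, is exactly what store-now/decrypt-later attackers are betting Shor's algorithm will eventually erase.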

"What adversaries are doing now is siphoning off information over VPNs. They're not cracking into your network. They're just doing it over VPNs, siphoning that information. They can't read it today, because it's RSA-protected, but they'll store it and read it in a number of years when they can," he said. "The good news is you don't have to scrap your hardware. You could just upgrade the software. But that's still a monumental challenge. As you can imagine, for all the datacenters and the high-performance computing centers, this is a non-trivial operation to do all that."

A big part of the problem is simply finding where encryption code is in existing infrastructure. That, in turn, has prompted calls for what is being called crypto-agility: a comprehensive yet modular approach that allows cryptography code to be easily swapped in and out.

"We want crypto-agility, and what we find is large corporations, large organizations, and large governments don't have crypto-agility. What we're hoping is to develop tools to implement this idea. For example, as a first step to crypto-agility, we're trying to see if people even have an MRI machine for use on their own cybersecurity, and they really don't when it comes to encryption. There are no diagnostic tools that these companies are using to find where their [encryption] footprint is or if they are encrypting everything appropriately. Maybe some stuff is not even being encrypted," said Hidary, who favors the MRI metaphor for a discovery tool.
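The "MRI" idea can be made concrete with a hypothetical inventory script. This is not SandboxAQ's AQ Analyzer; it is just a minimal sketch of the first discovery step: walking a source tree and recording which files mention known cryptographic algorithms, separating the quantum-vulnerable public-key ones from symmetric ciphers and hashes. The function name `crypto_inventory` and the pattern table are invented for illustration; a real tool would also inspect binaries, certificates and network traffic.

```python
# Hypothetical sketch of a cryptography "discovery" scan: inventory where
# known crypto algorithms are referenced in a source tree.
import re
from pathlib import Path

# Quantum-vulnerable public-key schemes vs. symmetric primitives/hashes.
CRYPTO_PATTERNS = {
    "quantum-vulnerable": re.compile(r"\b(RSA|ECDSA|ECDH|DSA)\b"),
    "symmetric/hash": re.compile(r"\b(AES|SHA-?256|ChaCha20)\b"),
}

def crypto_inventory(root: str) -> dict[str, list[str]]:
    """Map each category to the files that mention a matching algorithm."""
    findings: dict[str, list[str]] = {k: [] for k in CRYPTO_PATTERNS}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for category, pattern in CRYPTO_PATTERNS.items():
            if pattern.search(text):
                findings[category].append(str(path))
    return findings
```

Run against a codebase, the "quantum-vulnerable" bucket is the starting worklist for a post-quantum migration; the point of crypto-agility is making those call sites easy to swap out once an inventory like this exists.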

No doubt, the need to modernize encryption/decryption methods and tools represents a huge problem and a huge market.

Without getting into technical details, Hidary said SandboxAQ is leveraging technology from its recent Cryptosense acquisition, along with internally developed technologies, to build a product portfolio planned to broadly encompass cryptography assessment, deployment and management. Its current core product is AQ Analyzer.

The idea, says Hidary, returning to the MRI metaphor, is to take an MRI scan inside the organization (on-premise, cloud, private cloud, and so forth), which feeds into compliance, vulnerability and post-quantum analysis. "It's not just a quantum thing. It's about your general vulnerabilities on encryption. Overall, it happens to be that post-quantum is helped by this, but this is a bigger issue. Then that feeds into your general sysops, network ops, and management tools that you're using."

AQ Analyzer, he says, is enterprise software that starts the process for organizations to become crypto-agile. It's now being used at large banks and telcos, and also by Mount Sinai Hospital. Healthcare, replete with sensitive information, is another early target for SandboxAQ. Long term, the idea is for Sandbox software tools to be able to automate much of the crypto management process, from assessment to deployment through ongoing monitoring and management.

"That's the whole crypto-agility ballgame," says Hidary.

The business model, says Hidary, is a carbon copy of Salesforce.com's SaaS model. Broadly, SandboxAQ uses a three-pronged go-to-market: direct sales; global systems integrators (in May it began programs with Ernst & Young (EY) and Deloitte); and strategic partners/resellers. Vodafone and SoftBank are among the latter. Even though these are still early days for SandboxAQ as an independent entity, it's moving fast, having benefitted from years of development inside Google. AQ Analyzer, said Hidary, is in general availability.

"We're doing extremely well in banks and financial institutions. They're typically early adopters of cybersecurity because of the regulatory and compliance environment, and the trust they have with their customers," said Hidary.

Looking at near-term milestones, he said, "We'd like to see a more global footprint of banks. We'll be back in Europe soon now that we have Cryptosense (UK- and Paris-based), and we have a strong local team in Europe. We've had a lot of traction in the U.S. and Canadian markets. So that's one key milestone over the next 18 months or so. Second, we'd like to see [more adoption] in healthcare and telcos. We have Vodafone and SoftBank Mobile on the telco side. We have Mount Sinai; we'd like to see if that can be extended into additional players in those two spaces. The fourth vertical we'll probably go into is the energy grid. These are all critical infrastructure pieces of our society: the financial structure of our society, energy, healthcare and the medical centers, the telecommunications grid."

While SandboxAQ's AQ Analyzer is the company's first offering, it's worth noting that the company is aggressively looking for niches it can serve. For example, it is keeping close tabs on efforts to build a quantum internet.

"There's going to be a parallel quantum coherent internet to connect for distributed quantum computing," said Hidary. "So nothing to do with cyber at all."

"Our vision of the future, which we share with, I think, everyone in the industry, is that quantum does not take over classical," said Hidary. "It's a mesh, a hybridization of CPU, GPU and quantum processing units. And the program, the code, in Python for example: part of it runs on CPUs, part of it on GPUs, and then yes, part of it will run on a QPU. In that mesh, you'd want to have access both to the traditional internet, TCP/IP today, but you also want to be able to connect over a quantum coherent intranet. So that's Qunnect."

Qunnect, of course, is one of the companies SandboxAQ has invested in, and it is working on hardware (quantum memory and repeaters) to enable a quantum internet. Like dealing with post-quantum cryptography, outfitting the quantum internet is likely to be a huge business. Looking at SandboxAQ just seven months after being spun out from Google, the scope of its ambitions is hard to pin down.

Stay tuned.

See more here:
CEO Jack Hidary on SandboxAQ's Ambitions and Near-term Milestones - HPCwire

The world, and today's employees, need quantum computing more than ever – VentureBeat


Quantum computing may soon be able to address many of the world's toughest, most urgent problems.

That's why the semiconductor legislation Congress just passed is part of a $280 billion package that will, among other things, direct federal research dollars toward quantum computing.

Quantum computing will soon be able to:

The economy and the environment are clearly two top federal government agenda items. Congress in July was poised to pass the most ambitious climate bill in U.S. history. The New York Times said that the bill would pump hundreds of billions of dollars into low-carbon energy technologies like wind turbines, solar panels and electric vehicles and would put the United States on track to slash its greenhouse gas emissions to roughly 40% below 2005 levels by 2030. This could help to further advance and accelerate the adoption of quantum computing.


Because quantum technology can solve many previously unsolvable problems, a long list of the world's leading businesses, including BMW and Volkswagen, FedEx, Mastercard and Wells Fargo, and Merck and Roche, are making significant quantum investments. These businesses understand that transformation via quantum computing, which is quickly advancing with breakthrough technologies, is coming soon. They want to be ready when that happens.

It's wise for businesses to invest in quantum computing because the risk is low and the payoff is going to be huge. As BCG notes: "No one can afford to sit on the sidelines as this transformative technology accelerates toward several critical milestones."

The reality is that quantum computing is coming, and it's likely not going to be a standalone technology. It will be tied to the rest of the IT infrastructure: supercomputers, CPUs and GPUs.

This is why companies like Hewlett Packard Enterprise are thinking about how to integrate quantum computing into the fabric of the IT infrastructure. It's also why Terra Quantum AG is building hybrid data centers that combine the power of quantum and classical computing.

Amid these changes, employees should start preparing now. There is going to be a tidal wave of need both for quantum Ph.D.s and for other talent, such as skilled quantum software developers, to contribute to quantum efforts.

Earning a doctorate in a field relevant to quantum computing requires a multi-year commitment. But obtaining valuable quantum computing skills doesn't require a developer to go back to college, take out a student loan or spend years studying.

With modern tools that abstract the complexity of quantum software and circuit creation, developers no longer require Ph.D.-level knowledge to contribute to the quantum revolution, enabling a more diverse workforce to help businesses achieve quantum advantage. Just look at the winners in the coding competition that my company staged. Some of these winners were recent high school graduates, and they delivered highly innovative solutions.

Leading the software stack, quantum algorithm design platforms allow developers to design sophisticated quantum circuits that could not be created otherwise. Rather than defining tedious low-level gate connections, this approach uses high-level functional models and automatically searches millions of circuit configurations to find an implementation that fits resource considerations, designer-supplied constraints and the target hardware platform. New tools like Nvidia's QODA also empower developers by making quantum programming similar to how classical programming is done.
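To see what is being abstracted away, here is the gate-level view in miniature: a hand-rolled, pure-Python statevector simulation of the classic two-qubit Bell circuit (a Hadamard on one qubit, then a CNOT). This is an illustrative sketch, not any particular platform's API; design tools spare developers exactly this kind of amplitude bookkeeping.

```python
# Low-level, gate-by-gate quantum simulation: build a Bell state by hand.
# State is a list of amplitudes over the basis |00>, |01>, |10>, |11>,
# with qubit 0 as the left (most significant) qubit.
from math import sqrt

state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def hadamard_q0(s):
    """Apply a Hadamard to qubit 0: mixes |0x> with |1x> amplitudes."""
    h = 1 / sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot(s):
    """CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes."""
    return [s[0], s[1], s[3], s[2]]

bell = cnot(hadamard_q0(state))
# bell is ~[0.707, 0, 0, 0.707]: equal amplitude on |00> and |11>,
# the maximally entangled Bell state.
```

Even this two-qubit toy requires tracking every basis amplitude explicitly; the state vector doubles with each added qubit, which is why higher-level synthesis tools matter.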

Developers will want to familiarize themselves with quantum computing, which will be an integral arrow in their metaphorical quiver of engineering skills. People who add quantum skills to their classical programming and data center skills will position themselves to make more money and be more appealing to employers in the long term.

Many companies and countries are experimenting with and adopting quantum computing. They understand that quantum computing is evolving rapidly and is the way of the future.

Whether you are a business leader or a developer, it's important to understand that quantum computing is moving forward. The train is leaving the station; will you be on board?

Erik Garcell is technical marketing manager at Classiq.


See the original post:
The world, and today's employees, need quantum computing more than ever - VentureBeat

New laboratory to explore the quantum mysteries of nuclear materials – EurekAlert

Replete with tunneling particles, electron wells, charmed quarks and zombie cats, quantum mechanics takes everything Sir Isaac Newton taught about physics and throws it out the window.

Every day, researchers discover new details about the laws that govern the tiniest building blocks of the universe. These details not only increase scientific understanding of quantum physics, but they also hold the potential to unlock a host of technologies, from quantum computers to lasers to next-generation solar cells.

But there's one area that remains a mystery even in this most mysterious of sciences: the quantum mechanics of nuclear fuels.

Until now, most fundamental scientific research of quantum mechanics has focused on elements such as silicon because these materials are relatively inexpensive, easy to obtain and easy to work with.

Now, Idaho National Laboratory researchers are planning to explore the frontiers of quantum mechanics with a new synthesis laboratory that can work with radioactive elements such as uranium and thorium.

An announcement about the new laboratory appears online in the journal Nature Communications.

Uranium and thorium, which are part of a larger group of elements called actinides, are used as fuels in nuclear power reactors because they can undergo nuclear fission under certain conditions.

However, the unique properties of these elements, especially the arrangement of their electrons, also mean they could exhibit interesting quantum mechanical properties.

In particular, the behavior of particles in special, extremely thin materials made from actinides could increase our understanding of phenomena such as quantum wells and quantum tunneling (see sidebar).

To study these properties, a team of researchers has built a laboratory around molecular beam epitaxy (MBE), a process that creates ultra-thin layers of materials with a high degree of purity and control.

"The MBE technique itself is not new," said Krzysztof Gofryk, a scientist at INL. "It's widely used. What's new is that we're applying this method to actinide materials, uranium and thorium. Right now, this capability doesn't exist anywhere else in the world that we know of."

The INL team is conducting fundamental research (science for the sake of knowledge), but the practical applications of these materials could make for some important technological breakthroughs.

"At this point, we are not interested in building a new qubit [the basis of quantum computing], but we are thinking about which materials might be useful for that," Gofryk said. "Some of these materials could be potentially interesting for new memory banks and spin-based transistors, for instance."

Memory banks and transistors are both important components of computers.

To understand how researchers make these very thin materials, imagine an empty ball pit at a fast-food restaurant. Blue and red balls are thrown in the pit one at a time until they make a single layer on the floor. But that layer isn't a random assortment of balls. Instead, they arrange themselves into a pattern.

During the MBE process, the empty ball pit is a vacuum chamber, and the balls are highly pure elements, such as nitrogen and uranium, that are heated until individual atoms can escape into the chamber.

The floor of our imaginary ball pit is, in reality, a charged substrate that attracts the individual atoms. On the substrate, atoms order themselves to create a wafer of very thin material: in this case, uranium nitride.

Back in the ball pit, we've created a layer of blue and red balls arranged in a pattern. Now we make another layer of green and orange balls on top of the first layer.

To study the quantum properties of these materials, Gofryk and his team will join two dissimilar wafers of material into a sandwich called a heterostructure. For instance, the thin layer of uranium nitride might be joined to a thin layer of another material such as gallium arsenide, a semiconductor. At the junction between the two different materials, interesting quantum mechanical properties can be observed.

"We can make sandwiches of these materials from a variety of elements," Gofryk said. "We have lots of flexibility. We are trying to think about the novel structures we can create with maybe some predicted quantum properties."

"We want to look at electronic properties, structural properties, thermal properties and how electrons are transported through the layers," he continued. "What will happen if you lower the temperature and apply a magnetic field? Will it cause electrons to behave in a certain way?"

INL is one of the few places where researchers can work with uranium and thorium for this type of science. The amounts of radioactive material involved, and the consequent safety concerns, will be comparable to those of an everyday smoke alarm.

"INL is the perfect place for this research because we're interested in this kind of physics and chemistry," Gofryk said.

In the end, Gofryk hopes the laboratory will result in breakthroughs that help attract attention from potential collaborators as well as recruit new employees to the laboratory.

"These actinides have such special properties," he said. "We're hoping we can discover some new phenomena or new physics that hasn't been found before."

In 1900, German physicist Max Planck first described how light emitted from heated objects, such as the filament in a light bulb, behaved like particles.

Since then, numerous scientists, including Albert Einstein and Niels Bohr, have explored and expanded upon Planck's discovery to develop the field of physics known as quantum mechanics. In short, quantum mechanics describes the behavior of atoms and subatomic particles.

Quantum mechanics differs from classical physics, in part, because subatomic particles simultaneously have characteristics of both particles and waves, and their energy and movement occur in discrete amounts called quanta.

More than 120 years later, quantum mechanics plays a key role in numerous practical applications, especially lasers and transistors, a key component of modern electronic devices. Quantum mechanics also promises to serve as the basis for the next generation of computers, known as quantum computers, which will be much more powerful at solving certain types of calculations.

Uranium, thorium and the other actinides have something in common that makes them interesting for quantum mechanics: the arrangement of their electrons.

Electrons do not orbit around the nucleus the way the earth orbits the sun. Rather, they zip around somewhat randomly. But we can define areas where there is a high probability of finding electrons. These clouds of probability are called orbitals.

For the smallest atoms, these orbitals are simple spheres surrounding the nucleus. However, as the atoms get larger and contain more electrons, orbitals begin to take on strange and complex shapes.

In very large atoms like uranium and thorium (92 and 90 electrons respectively), the outermost orbitals are a complex assortment of party balloon, jelly bean, dumbbell and hula hoop shapes. The electrons in these orbitals are high energy. While scientists can guess at their quantum properties, nobody knows for sure how they will behave in the real world.

Quantum tunneling is a key part of any number of phenomena, including nuclear fusion in stars, mutations in DNA and diodes in electronic devices.

To understand quantum tunneling, imagine a toddler rolling a ball at a mountain. In this analogy, the ball is a particle. The mountain is a barrier, most likely a semiconductor material. In classical physics, there's no chance the ball has enough energy to pass over the mountain.

But in the quantum realm, subatomic particles have properties of both particles and waves. The wave's peak represents the highest probability of finding the particle. Thanks to a quirk of quantum mechanics, while most of the wave bounces off the barrier, a small part of that wave travels through if the barrier is thin enough.

For a single particle, the small amplitude of this wave means there is a very small chance of the particle making it to the other side of the barrier.

However, when large numbers of waves are travelling at a barrier, the probability increases, and sometimes a particle makes it through. This is quantum tunneling.
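That "very small chance" can be quantified. A standard textbook estimate for a rectangular barrier of height V and width L, hit by a particle of energy E < V, is a transmission probability T ≈ e^(−2κL) with decay constant κ = √(2m(V−E))/ħ. The sketch below (illustrative numbers, not from the article) shows how sharply transmission collapses as the barrier widens:

```python
# WKB-style estimate of quantum tunneling through a rectangular barrier:
# T ~ exp(-2 * kappa * L), kappa = sqrt(2 * m * (V - E)) / hbar.
from math import exp, sqrt

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electron-volt in joules

def tunneling_probability(E_eV, V_eV, L_nm):
    """Approximate transmission for an electron hitting a barrier V > E."""
    kappa = sqrt(2 * M_E * (V_eV - E_eV) * EV) / HBAR  # decay constant, 1/m
    return exp(-2 * kappa * L_nm * 1e-9)

# A 1 eV electron and a 5 eV barrier: a 0.3 nm barrier leaks noticeably,
# a 3 nm barrier is essentially opaque.
thin = tunneling_probability(1, 5, 0.3)
thick = tunneling_probability(1, 5, 3.0)
```

The exponential dependence on width is why tunneling devices (and the actinide heterostructures described above) hinge on layers only nanometers thick.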

Quantum wells are also important, especially for devices such as light emitting diodes (LEDs) and lasers.

As with quantum tunneling, building quantum wells requires alternating layers of very thin (about 10 nanometers) material, where one layer acts as a barrier.

While electrons normally travel in three dimensions, quantum wells trap electrons in two dimensions within a barrier that is, for practical purposes, impossible to overcome. These electrons exist at specific energies, say, the precise energies needed to generate specific wavelengths of light.
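Those "specific energies" can be written down for the idealized case. For an infinite square well of width L, the allowed levels are E_n = n²π²ħ²/(2mL²), so only discrete, n²-spaced energies occur. The sketch below evaluates them for an electron in a 10 nm well, matching the layer thickness mentioned above; the numbers are illustrative, since real wells are finite and material-dependent.

```python
# Energy levels of an electron in an idealized infinite square well:
# E_n = n^2 * pi^2 * hbar^2 / (2 * m * L^2), reported in electron-volts.
from math import pi

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # J per eV

def well_level_eV(n, L_nm=10.0):
    """Energy of the n-th level of an infinite square well, in eV."""
    L = L_nm * 1e-9
    return n**2 * pi**2 * HBAR**2 / (2 * M_E * L**2) / EV

# Discrete levels for a 10 nm well: roughly 0.0038, 0.015, 0.034 eV.
levels = [well_level_eV(n) for n in (1, 2, 3)]
```

The n² spacing is the signature of confinement: shrinking L pushes the levels apart, which is how well thickness tunes the emission wavelength of LEDs and lasers.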

About Idaho National Laboratory: Battelle Energy Alliance manages INL for the U.S. Department of Energy's Office of Nuclear Energy. INL is the nation's center for nuclear energy research and development, and also performs research in each of DOE's strategic goal areas: energy, national security, science and the environment. For more information, visit www.inl.gov. Follow us on social media: Twitter, Facebook, Instagram and LinkedIn.

Here is the original post:
New laboratory to explore the quantum mysteries of nuclear materials - EurekAlert

Cancer to Be Treated as Easily as Common Cold When Humans Crack Quantum Computing – Business Wire

DUBAI, United Arab Emirates--(BUSINESS WIRE)--Breakthroughs in quantum computing will enable humans to cure diseases like cancer, Alzheimer's, and Parkinson's as easily as we treat the common cold.

That was one of the major insights to emerge from the Dubai Future Forum, with renowned theoretical physicist Dr. Michio Kaku telling the world's largest gathering of futurists that humanity should brace itself for major transformations in healthcare.

The forum concluded with a call for governments to institutionalize foresight and ingrain it within decision-making.

Speaking at the forum, held at the Museum of the Future in Dubai, UAE, Amy Webb, CEO of the Future Today Institute, criticized nations for being too preoccupied with the present and too focused on creating white papers, reports and policy recommendations instead of taking action.

"Nowism is a virus. Corporations and governments are infected," she said.

One panel session heard how humans could be ready to test life on the Moon in just 15 years and be ready for life on Mars in another decade. Sharing his predictions for the future, Dr. Kaku also said there is a very good chance humans will pick up a signal from another intelligent life form this century.

Dr. Jamie Metzl, Founder and Chair, OneShared.World, urged people to eat more lab-grown meat to combat global warming and food insecurity.

"If we are treating them like a means to an end of our nutrition, wouldn't it be better, instead of growing the animal, to grow the meat?" he said.

Among the 70 speakers participating in sessions were several UAE ministers. HE Mohammad Al Gergawi, UAE Minister of Cabinet Affairs, Vice Chairman of the Board of Trustees and Managing Director of the Dubai Future Foundation, said ministers around the world should think of themselves as designers of the future. "Our stakeholders are 7.98 billion people around the world," he noted.

Dubai's approach to foresight was lauded by delegates, including HE Omar Sultan Al Olama, UAE Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications, who said: "What makes our city and nation successful is not natural resources, but a unique ability to embrace all ideas and individuals."

More than 30 sessions covered topics including immortality, AI sentience, climate change, terraforming, genome sequencing, legislation, and the energy transition.

*Source: AETOSWire

Originally posted here:
Cancer to Be Treated as Easily as Common Cold When Humans Crack Quantum Computing - Business Wire

What is Artificial Intelligence (AI)? | IBM

Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

A number of definitions of artificial intelligence (AI) have surfaced over the last few decades. John McCarthy offers the following definition in this 2004 paper (PDF, 106 KB) (link resides outside IBM): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, decades before this definition, the artificial intelligence conversation began with Alan Turing's 1950 work "Computing Machinery and Intelligence" (PDF, 89.8 KB) (link resides outside of IBM). In this paper, Turing, often referred to as the "father of computer science," asks the following question: "Can machines think?" From there, he offers a test, now famously known as the "Turing Test," where a human interrogator would try to distinguish between a computer and human text response. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI.

One of the leading AI textbooks is Artificial Intelligence: A Modern Approach (link resides outside IBM) (PDF, 20.9 MB), by Stuart Russell and Peter Norvig. In the book, they delve into four potential goals or definitions of AI, which differentiate computer systems as follows:

Human approach:

- Systems that think like humans
- Systems that act like humans

Ideal approach:

- Systems that think rationally
- Systems that act rationally

Alan Turing's definition would have fallen under the category of systems that act like humans.

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. Expert systems, an early successful application of AI, aimed to copy a human's decision-making process. In the early days, it was time-consuming to extract and codify the human's knowledge.

AI today includes the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines consist of AI algorithms that typically make predictions or classifications based on input data. Machine learning has improved the quality of some expert systems and made it easier to create them.

Today, AI plays an often invisible role in everyday life, powering search engines, product recommendations, and speech recognition systems.

There is a lot of hype about AI development, which is to be expected of any emerging technology. As noted in Gartner's hype cycle (link resides outside IBM), product innovations like self-driving cars and personal assistants follow a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain. As Lex Fridman notes (01:08:15) (link resides outside IBM) in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.

As conversations continue around AI ethics, we can see the initial glimpses of the trough of disillusionment. Read more about where IBM stands on AI ethics here.

Weak AI, also called Narrow AI or Artificial Narrow Intelligence (ANI), is AI trained to perform specific tasks. Weak AI drives most of the AI that surrounds us today. "Narrow" might be a more accurate descriptor for this type of AI, as it is anything but weak; it enables some powerful applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial General Intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have an intelligence equal to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI), also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, AI researchers are exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the rogue computer assistant in 2001: A Space Odyssey.

Since deep learning and machine learning tend to be used interchangeably, it's worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.

The way in which deep learning and machine learning differ is in how each algorithm learns. "Deep" machine learning can use labeled datasets, also known as supervised learning, to inform its algorithm, but it doesn't necessarily require a labeled dataset. Deep learning can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the set of features which distinguish different categories of data from one another. This eliminates some of the human intervention required and enables the use of larger data sets. You can think of deep learning as "scalable machine learning", as Lex Fridman notes in the same MIT lecture from above. Classical, or "non-deep", machine learning is more dependent on human intervention to learn. Human experts determine the set of features to understand the differences between data inputs, usually requiring more structured data to learn.
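The contrast can be sketched with a toy, purely hypothetical spam-filter task. In the classical approach below, a human expert chooses the features (exclamation marks, ALL-CAPS words); a deep model would instead learn its own features from raw text. Even the weights here are set by hand to keep the sketch self-contained; a learning algorithm would fit them from labeled examples.

```python
# Illustrative toy task: classical ML relies on human-chosen features.

def handcrafted_features(email: str) -> list[float]:
    # A human expert decides what matters: exclamation marks, ALL-CAPS words.
    words = email.split()
    caps = sum(1 for w in words if w.isupper() and len(w) > 1)
    return [email.count("!"), caps]

def classify(features: list[float], weights: list[float], bias: float) -> int:
    # A simple linear rule over the handcrafted features.
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 if score > 0 else 0  # 1 = spam, 0 = not spam

# Weights are fixed by hand for illustration; supervised learning
# would fit them from a labeled dataset of spam and non-spam emails.
feats = handcrafted_features("WIN a FREE prize NOW!!!")
print(classify(feats, [1.0, 1.0], -2.0))  # 1 (flagged)
```

The manual feature-engineering step is exactly what deep learning removes, at the cost of needing much more data.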

Deep learning (like some machine learning) uses neural networks. The "deep" in a deep learning algorithm refers to a neural network with more than three layers, including the input and output layers.
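A minimal sketch of such a network, with made-up layer sizes and untrained random weights, just to show the layered structure:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

rng = np.random.default_rng(0)
# Layer sizes: input (4) -> two hidden layers (8, 8) -> output (3).
# Counting input and output, that is four layers: "deep" by the
# more-than-three-layers convention described above.
sizes = [4, 8, 8, 3]
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Each layer transforms the previous layer's activations; stacking
    # layers is what lets the network learn features by itself.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return x @ weights[-1] + biases[-1]  # raw output scores

scores = forward(np.ones(4))
print(scores.shape)  # (3,)
```

Training would adjust `weights` and `biases` from data; this untrained sketch only shows the shape of the computation.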

The rise of deep learning has been one of the most significant breakthroughs in AI in recent years, because it has reduced the manual effort involved in building AI systems. Deep learning was in part enabled by big data and cloud architectures, making it possible to access huge amounts of data and processing power for training AI solutions.

There are numerous, real-world applications of AI systems today. Below are some of the most common examples:

Computer vision: This AI technology enables computers to derive meaningful information from digital images, videos, and other visual inputs, and then take the appropriate action. Powered by convolutional neural networks, computer vision has applications in photo tagging on social media, radiology imaging in healthcare, and self-driving cars within the automotive industry.

Recommendation engines: Using past consumption behavior data, AI algorithms can help discover data trends that can be used to develop more effective cross-selling strategies. This approach is used by online retailers to make relevant product recommendations to customers during the checkout process.
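The idea can be sketched with a toy co-occurrence recommender over hypothetical purchase histories; production engines use far richer signals and learned models:

```python
from collections import Counter

# Hypothetical purchase histories; the co-occurrence idea is the point,
# not the data.
histories = [
    {"laptop", "mouse", "laptop_bag"},
    {"laptop", "mouse"},
    {"phone", "phone_case"},
    {"laptop", "laptop_bag"},
]

def recommend(basket, histories, top_n=2):
    # Suggest items that past customers with overlapping baskets
    # bought alongside the current customer's items.
    scores = Counter()
    for past in histories:
        if basket & past:                 # a similar past customer
            for item in past - basket:
                scores[item] += 1         # count co-occurrences
    return [item for item, _ in scores.most_common(top_n)]

print(recommend({"laptop"}, histories))  # mouse and laptop_bag, in some order
```

Real systems replace raw co-occurrence counts with learned models (matrix factorization, neural rankers), but the cross-selling logic is the same.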

Automated stock trading: Designed to optimize stock portfolios, AI-driven high-frequency trading platforms make thousands or even millions of trades per day without human intervention.

Fraud detection: Banks and other financial institutions can use machine learning to spot suspicious transactions. Supervised learning can train a model using information about known fraudulent transactions. Anomaly detection can identify transactions that look atypical and deserve further investigation.
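A toy sketch of the anomaly-detection idea, using a simple z-score rule over hypothetical transaction amounts. Real systems use many features and learned models; note that with so few samples the z-score is mathematically bounded, hence the modest threshold.

```python
import statistics

def flag_anomalies(amounts, z_threshold=2.0):
    # Flag transactions far from the typical amount. The principle is
    # the one described above: atypical deserves a closer look.
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) / stdev > z_threshold]

# Mostly routine card transactions, plus one outlier (made-up data).
txns = [42.0, 38.5, 55.0, 47.2, 40.1, 44.9, 39.8, 9500.0]
print(flag_anomalies(txns))  # [9500.0]
```

Supervised fraud models go further, learning from transactions already labeled as fraudulent rather than relying on a fixed statistical rule.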

Since the advent of electronic computing, some important events and milestones in the evolution of artificial intelligence include the following:

While Artificial General Intelligence remains a long way off, more and more businesses will adopt AI in the short term to solve specific challenges. Gartner predicts (link resides outside IBM) that 50% of enterprises will have platforms to operationalize AI by 2025 (a sharp increase from 10% in 2020).

Knowledge graphs are an emerging technology within AI. They can encapsulate associations between pieces of information and drive upsell strategies, recommendation engines, and personalized medicine. Natural language processing (NLP) applications are also expected to increase in sophistication, enabling more intuitive interactions between humans and machines.

IBM has been a leader in advancing AI-driven technologies for enterprises and has pioneered the future of machine learning systems for multiple industries. Based on decades of AI research, years of experience working with organizations of all sizes, and on learnings from over 30,000 IBM Watson engagements, IBM has developed the AI Ladder for successful artificial intelligence deployments:

IBM Watson gives enterprises the AI tools they need to transform their business systems and workflows, while significantly improving automation and efficiency. For more information on how IBM can help you complete your AI journey, explore the IBM portfolio of managed services and solutions.

Sign up for an IBMid and create your IBM Cloud account.


VIDEO: Role of AI in breast imaging with radiomics, detection of breast density and lesions – Health Imaging

"I think AI is still in its relatively early phase of adoption," Lehman said. "We do have some centers that are not academic centers that are very forward thinking and really wanting to bring AI into their practices. However, we are also seeing a story that is very familiar when we are bringing computer-aided detection (CAD) into both academic and community centers. The technology is being incorporated into clinical care, but we are still studying what the actual outcomes are on patients who are being screened with mammography where AI tools are or are not being used."

This includes AI for automated detection of breast cancer lesions and flagging these to show the areas of interest on mammogram images, or to flag studies that need closer attention. AI also can take a first pass look at mammograms to determine if they appear to be normal, so radiologists can prioritize which exams need to be read first and which may be more complex.

This technology will likely become more important as breast imaging switches from traditional four-image mammogram studies to much larger 3D digital breast tomosynthesis (DBT) exams of 50 or more images, which are more time-consuming to read. AI is already being used to flag images that deserve a closer look in these datasets.

AI is also finding use as an automated way to grade breast density to help eliminate the variation of grading the same patient by human readers.

However, the most exciting area of AI for breast imaging is the potential of radiomics, where AI views medical imaging in ways that human readers cannot, identifying very complex and small patterns that can help better assess patient risk scores or predict which outcomes are most likely under various cancer treatments.

"What I am really excited about is the domain where investigators are considering the power of artificial intelligence to do things that humans cannot or are not very good at, and then to allow the humans to really focus on those tasks where humans excel. As of today, these AI tools have not even really scratched the surface," Lehman explained.

She said this area of research using radiomics moves beyond training AI to look at images like a human radiologist and to instead pull out details that are usually hidden from the human eye. This includes rapid computer segmentation and analysis of the morphology of disease or tissue patterns seen in images, looking for minute regional structures that can be detected by AI.

"This is not to train AI to look at mammograms like I do, but to train the AI to look for patterns and signals that my human eyes and human brain cannot detect or process," Lehman said.

She said that today we are just scratching the surface of the data potential of AI analysis of cancers in imaging. Deeply embedded patterns within cancers on imaging may be able to tell us a lot about which cancers will or will not respond to different drugs or therapies. AI may be able to tell us this from a much deeper analysis of the imaging, including the subtypes of that particular cancer. This would enable much better tailored, personalized medicine and treatments for each patient.


‘State of AI in the Enterprise’ Fifth Edition Uncovers Four Key Actions to Maximize AI Value – PR Newswire

Research reveals the key actions leaders can take to accelerate AI outcomes

NEW YORK, Oct. 18, 2022 /PRNewswire/ --

Key takeaways

Why this matters

The Deloitte AI Institute's fifth edition of the "State of AI in the Enterprise" survey, conducted between April and May 2022, provides organizations with a roadmap to navigate lagging AI outcomes. Twenty-nine percent more respondents surveyed classify as underachievers this year, yet 79% of respondents say they've fully deployed three or more types of AI. It is clear that, despite rapid advancement in the AI market, organizations are struggling to turn implementation into scalable transformation. This year's report digs deeper into the actions that lead to successful outcomes, providing leaders with a guide to overcome roadblocks and drive business results with AI.

The report surveyed 2,620 executives from 13 countries across the globe, outlining detailed recommendations for leaders to cultivate an AI-ready enterprise and improve outcomes for their AI efforts. Similar to last year's report, Deloitte grouped responding organizations into four profiles (Transformers, Pathseekers, Starters and Underachievers) based on how many types of AI applications they have deployed full-scale and the number of outcomes achieved to a high degree. The findings in the report aim to help companies overcome deployment and adoption challenges to become AI-fueled organizations that realize value and drive transformational outcomes from AI.

Key quotes

"Amid unprecedented disruption in the global economy and society at large, it is clear today's AI race is no longer about just adopting AI but instead driving outcomes and unleashing the power of AI to transform business from the inside out. This year's report provides a clear roadmap for business leaders looking to apply next-level human cognition and drive value at scale across their enterprise."

Costi Perricos, Deloitte Global AI and Data leader

"Since 2017, we have been tracking the advancement of AI as industries navigate the 'Age of With.' The fifth edition of our annual report outlines how AI can propel businesses beyond automating processes for efficiency to redesigning work itself. While organizations face the challenge of middling results, it is clear successful AI transformation requires strong leadership and focused investment, a through-line consistently evident in our annual research."

Beena Ammanath, executive director of the Deloitte AI Institute, Deloitte Consulting LLP

Four key actions powering widespread value from AI

Based on Deloitte's analysis of the behaviors and responses of high- and low-outcome organizations, the report identifies four key actions leaders can take now to improve outcomes for their AI efforts.

Action 1: Invest in Leadership and Culture

When it comes to successful AI deployment and adoption, leadership and culture matter. The workforce is increasingly optimistic, and leaders should do more to harness that optimism for culture change, establishing new ways of working to drive greater business results with AI.

Action 2: Transform Operations

An organization's ability to build and deploy AI ethically and at scale depends on how well it has redesigned its operations to accommodate the unique demands of new technologies.

Action 3: Orchestrate Tech and Talent

Technology and talent acquisition are no longer separate. Organizations need to strategize their approach to AI based on the skillsets they have available, whether they derive from humans or pre-packaged solutions.

Action 4: Select Use Cases that Accelerate Outcomes

The report found that selecting the right use cases to fuel an organization's AI journey depends largely on the value-drivers for the business based on sector and industry. Starting with use cases that are easier to achieve or have a faster or higher return on investment can create momentum for further investment and make it easier to drive internal cultural and organizational changes that accelerate the benefits of AI.

Connect with us: @Deloitte, @DeloitteAI, @beena_ammanath

The Deloitte AI Institute supports the positive growth and development of AI through engaged conversations and innovative research. It also focuses on building ecosystem relationships that help advance human-machine collaboration in the "Age of With," a world where humans work side-by-side with machines.

About Deloitte
Deloitte provides industry-leading audit, consulting, tax and advisory services to many of the world's most admired brands, including nearly 90% of the Fortune 500 and more than 7,000 private companies. Our people come together for the greater good and work across the industry sectors that drive and shape today's marketplace, delivering measurable and lasting results that help reinforce public trust in our capital markets, inspire clients to see challenges as opportunities to transform and thrive, and help lead the way toward a stronger economy and a healthier society. Deloitte is proud to be part of the largest global professional services network serving our clients in the markets that are most important to them. Building on more than 175 years of service, our network of member firms spans more than 150 countries and territories. Learn how Deloitte's approximately 415,000 people worldwide connect for impact at http://www.deloitte.com.

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee ("DTTL"), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as "Deloitte Global") does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the "Deloitte" name in the United States and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see http://www.deloitte.com/about to learn more about our global network of member firms.

SOURCE Deloitte Consulting LLP


Has There Been A Second AI Big Bang? – Forbes

Aleksa Gordic, an AI researcher with DeepMind

The Big Bang in artificial intelligence (AI) refers to the breakthrough in 2012, when a team of researchers led by Geoff Hinton managed to train an artificial neural network (known as a deep learning system) to win an image classification competition by a surprising margin. Prior to that, AI had performed some remarkable feats, but it had never made much money. Since 2012, AI has helped the big technology companies to generate enormous wealth, not least from advertising.

Has there been a new Big Bang in AI, since the arrival of Transformers in 2017? In episodes 5 and 6 of the London Futurist podcast, Aleksa Gordic explored this question, and explained how today's cutting-edge AI systems work. Aleksa is an AI researcher at DeepMind, and previously worked in Microsoft's HoloLens team. Remarkably, his AI expertise is self-taught, so there is hope for all of us yet!

Transformers are deep learning models which process inputs expressed in natural language and produce outputs like translations, or summaries of texts. Their arrival was announced in 2017 with the publication by Google researchers of a paper titled "Attention Is All You Need". This title referred to the fact that Transformers can pay attention simultaneously to a large corpus of text, whereas their predecessors, Recurrent Neural Networks, could only pay attention to the symbols on either side of the segment of text being processed.
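The core operation that paper introduced, scaled dot-product attention, can be sketched in a few lines; sizes and values here are illustrative, not taken from any real model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Every position attends to every other position at once, which is
    # what lets Transformers see a whole passage, where an RNN only sees
    # the neighbourhood of the symbol it is currently processing.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # weighted mix of values

# Toy example: 3 tokens, each represented by a 4-dimensional vector.
rng = np.random.default_rng(42)
Q = K = V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

A full Transformer stacks many such attention layers with learned projections for Q, K and V; this sketch shows only the attention step itself.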

Transformers work by splitting text into small units, called tokens, and mapping them onto vectors in high-dimensional spaces, often with thousands of dimensions. We humans cannot envisage this. The space we inhabit is defined by three numbers, or four if you include time, and we simply cannot imagine a space with thousands of dimensions. Researchers suggest that we shouldn't even try.
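A minimal sketch of this tokenize-and-embed step, with a made-up five-word vocabulary and a small embedding size (real models use subword tokens and thousands of dimensions):

```python
import numpy as np

# Hypothetical vocabulary: each token gets an integer id.
vocab = {"the": 0, "queen": 1, "wore": 2, "a": 3, "slipper": 4}
embedding_dim = 8                      # real models use thousands
rng = np.random.default_rng(0)
# One high-dimensional vector per token; in a real model these
# values are learned during training, not random.
embedding_table = rng.standard_normal((len(vocab), embedding_dim))

tokens = "the queen wore a slipper".split()
ids = [vocab[t] for t in tokens]
vectors = embedding_table[ids]         # one row of numbers per token
print(vectors.shape)  # (5, 8)
```

Everything downstream in a Transformer (attention included) operates on these rows of numbers, not on the original text.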

For Transformer models, words and tokens have dimensions. We might think of them as properties, or relationships. For instance, "man is to king as woman is to queen." These concepts can be expressed as vectors, like arrows in three-dimensional space. The model will attribute a probability to a particular token being associated with a particular vector. For instance, a princess is more likely to be associated with the vector which denotes "wearing a slipper" than with the vector that denotes "wearing a dog."
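The analogy can be sketched as vector arithmetic; the 3-dimensional vectors below are hand-made for illustration, whereas real embeddings are learned from data and have thousands of dimensions:

```python
import numpy as np

# Made-up toy embeddings chosen so the analogy works exactly.
emb = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([1.0, 1.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([1.0, 1.0, 1.0]),
}

# king - man + woman should land nearest to queen.
target = emb["king"] - emb["man"] + emb["woman"]
nearest = min(emb, key=lambda w: np.linalg.norm(emb[w] - target))
print(nearest)  # queen
```

With learned embeddings the match is approximate rather than exact, but the same nearest-vector search recovers the analogy.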

There are various ways in which machines can discover the relationships, or vectors, between tokens. In supervised learning, they are given enough labelled data to indicate all the relevant vectors. In self-supervised learning, they are not given labelled data, and they have to find the relationships on their own. This means the relationships they discover are not necessarily discoverable by humans. They are black boxes. Researchers are investigating how machines handle these dimensions, but it is not certain that the most powerful systems will ever be truly transparent.

The size of a Transformer model is normally measured by the number of parameters it has. A parameter is analogous to a synapse in a human brain, which is the point where the tendrils (axons and dendrites) of our neurons meet. The first Transformer models had a hundred million or so parameters, and now the largest ones have trillions. This is still smaller than the number of synapses in the human brain, and human neurons are far more complex and powerful creatures than artificial ones.

A surprising discovery made a couple of years after the arrival of Transformers was that they are able to tokenise not just text, but images too. Google released the first vision Transformer in late 2020, and since then people around the world have marvelled at the output of Dall-E, MidJourney, and others.

The first of these image-generation models were Generative Adversarial Networks, or GANs. These were pairs of models, with one (the generator) creating imagery designed to fool the other into accepting it as original, and the second system (the discriminator) rejecting attempts which were not good enough. GANs have now been surpassed by Diffusion models, whose approach is to peel noise away from the desired signal. The first Diffusion model was actually described as long ago as 2015, but the paper was almost completely ignored. They were re-discovered in 2020.
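The noise-peeling idea behind diffusion models can be sketched in one forward step; the numbers are illustrative, and a real model must learn to estimate the noise rather than being handed it:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = np.array([1.0, -1.0, 0.5])    # the clean "signal" (toy data)
alpha_bar = 0.6                    # fraction of signal surviving at this step

eps = rng.standard_normal(x0.shape)  # Gaussian noise
# Forward process: blend signal with noise.
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps

# A perfect denoiser that knew eps could peel the noise away exactly;
# training teaches a network to predict eps from x_t alone.
x0_recovered = (x_t - np.sqrt(1 - alpha_bar) * eps) / np.sqrt(alpha_bar)
print(np.allclose(x0_recovered, x0))  # True
```

Generation runs this in reverse over many steps, starting from pure noise and repeatedly subtracting the model's noise estimate.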

Transformers are gluttons for compute power and for energy, and this has led to concerns that they might represent a dead end for AI research. It is already hard for academic institutions to fund research into the latest models, and it was feared that even the tech giants might soon find them unaffordable. The human brain points to a way forward. It is not only larger than the latest Transformer models (with around 80 billion neurons, each with around 10,000 synapses, it is 1,000 times larger); it is also a far more efficient consumer of energy, mainly because we only need to activate a small portion of our synapses to make a given calculation, whereas AI systems activate all of their artificial neurons all of the time. Neuromorphic chips, which mimic the brain more closely than classic chips, may help.

Aleksa is frequently surprised by what the latest models are able to do, but this is not itself surprising. "If I wasn't surprised, it would mean I could predict the future, which I can't." He derives pleasure from the fact that the research community is like a hive mind: you never know where the next idea will come from. The next big thing could come from a couple of students at a university, and a researcher called Ian Goodfellow famously created the first GAN by playing around at home after a brainstorming session over a couple of beers.


Eyenuk Raises $26M for AI-Powered Eye Screening & Predictive Biomarkers

What You Should Know:

Eyenuk, Inc., a global artificial intelligence (AI) digital health company and the leader in real-world applications for AI Eye Screening and AI Predictive Biomarkers, today announced it has secured $26 million in a Series A financing round, bringing the Company's total funding to over $43 million.

The capital raise was led by AXA IM Alts and was joined by new and existing investors including T&W Medical A/S, A&C Foelsgaard Alternativer ApS, Kendall Capital Partners, and KOFA Healthcare.

Accelerating Global Access to AI-Powered Eye-Screening Technology

Eyenuk, Inc. is a global artificial intelligence (AI) digital health company and the leader in real-world AI Eye Screening for autonomous disease detection and AI Predictive Biomarkers for risk assessment and disease surveillance. Eyenuk is on a mission to screen every eye in the world to ensure timely diagnosis of life- and vision-threatening diseases, including diabetic retinopathy, glaucoma, age-related macular degeneration, stroke risk, cardiovascular risk, and Alzheimer's disease.

Eyenuk will use the capital to expand its AI product platform with additional disease indications and advanced care coordination and to accelerate the platform's global commercialization and adoption.

"We are thrilled that AXA IM Alts, T&W Medical A/S, A&C Foelsgaard Alternativer ApS, Kendall Capital Partners, and our other new and existing investors have joined us in furthering our mission of using AI to screen every eye in the world to help eliminate preventable vision loss and transition the world to predictive and preventative healthcare," said Eyenuk CEO and Founder Kaushal Solanki, Ph.D. "Our Series A fundraise validates the strong market performance of the EyeArt system and provides us with critical resources as we expand our platform capabilities this year to include solutions for detecting additional diseases."

Today's announcement follows the Sept. 29, 2022 publication of a major peer-reviewed study in Ophthalmology Science, a publication of the American Academy of Ophthalmology. The study found that the EyeArt AI system is far more sensitive in identifying referable diabetic retinopathy than dilated eye exams by ophthalmologists and retina specialists.

Eyenuk is leading the way in harnessing the power of AI to eliminate preventable blindness globally, through its versatile digital health platform that enables automated AI diagnosis and coordination of care. Eyenuk's flagship EyeArt AI system has been more broadly adopted worldwide than any other autonomous AI technology for ophthalmology. Since its FDA clearance in 2020, the EyeArt system has been used in over 200 locations in 18 countries, including 14 U.S. states, to screen over 60,000 patients and counting. It is the first and only technology to be cleared by the FDA for autonomous detection of both referable and vision-threatening diabetic retinopathy without any eye care specialist involvement.

The EyeArt system is reimbursed by Medicare in the US, and has regulatory approvals globally, including CE Marking, Health Canada license, and approvals in multiple markets in Latin America and the Middle East.


Samsung Electronics Explores Future of AI Research

Under the themes "Shaping the Future with AI and Semiconductor" and "Scaling AI for the Real World", renowned experts will share the latest AI research achievements

Samsung Electronics today announced that it will host the Samsung AI Forum 2022 from November 8 to 9.

The Samsung AI Forum, now in its sixth year, is a place for exchanging technological advances with world-renowned AI scholars and experts, sharing the latest AI research achievements and exploring future research direction.

This year's forum will be held in-person for the first time in three years and will also be live-streamed on Samsung Electronics' YouTube channel.

Those who are interested in the event can register to participate in the forum from October 18 to the day of the event on the Samsung AI Forum website. Registered participants will be able to receive a detailed program agenda and submit questions online.

Day one will be hosted by Samsung Advanced Institute of Technology (SAIT) under the theme "Shaping the Future with AI and Semiconductor". Participants will discuss the current status of, and research directions for, AI that will lead the future of innovations in other fields, including semiconductors and materials.

Jong-Hee (JH) Han, Vice Chairman, CEO and Head of Device eXperience (DX) Division at Samsung Electronics, will start the forum by giving the opening remarks, followed by a keynote speech from Professor Yoshua Bengio of the University of Montreal, Canada. Afterward, technology sessions such as "AI for R&D Innovation," "Recent Advances of AI Algorithms" and "Large Scale Computing for AI and HPC" will be held.

During each technology session, renowned AI experts and the AI research leaders at SAIT will be on stage to share their findings. Minjoon Seo, Professor at KAIST, and Hyunoh Song, Professor at Seoul National University, will introduce the latest research achievements on AI algorithms, and former IBM and Intel Fellow Alan Gara, one of the leading researchers on supercomputers, will make a presentation on the evolution of computing and the future of AI. AI research leaders at SAIT, including Changkyu Choi, Executive Vice President and Head of SAIT's AI Research Center, will share the status and vision of Samsung's research on AI.

"This year's AI forum will be a place to discuss the direction of AI research to create a better future by applying AI technology to various fields, especially semiconductors," said Gyo-Young Jin, President and Head of SAIT and Co-chair of the Samsung AI Forum.

The Samsung AI Researcher of the Year awards, established to recognize outstanding rising researchers in the field of AI, will also be presented during the forum. In addition, various programs will be held to accelerate active research in AI, including poster presentations of excellent research papers, an introduction to SAIT, an exhibition of its research projects and a networking event for researchers and students in the field of AI.

Day two of the forum will be hosted by Samsung Research under the theme "Scaling AI for the Real World". Participants will share the direction of future AI technological advancement that will have an important impact on our lives, such as hyperscale AI, digital humans and robotics, which are among the latest trending topics.

Sebastian Seung, President and Head of Samsung Research, will open with welcoming remarks and a keynote speech on "Evolutionary approach to brain-inspired learning algorithms."

Daniel Lee, Executive Vice President and Head of Samsung Research's Global AI Center, will give a presentation on the current status of Samsung Research's AI research, which will be followed by invited talks by AI experts, including the heads of Global Research Institutes.

Terrence Sejnowski, Professor at the University of California San Diego and founder of NeurIPS (the Conference and Workshop on Neural Information Processing Systems), one of the most prestigious international conferences on AI, will speak on whether large language models are intelligent, and Dr. Johannes Gehrke, Head of Microsoft Research Lab, will explain the core technology of hyperscale AI and the research directions of Microsoft's next-generation AI.

Afterwards, Dieter Fox, Senior Director of Robotics Research at NVIDIA, will give a presentation on robot technology that controls objects without an explicit model, and Seungwon Hwang, Professor at Seoul National University, will share work on robust natural language processing technology.

Furthermore, Daniel Lee will moderate a panel discussion on the latest AI trends and the future outlook with fellow speakers. Time will also be allotted for presentations and demonstrations of the latest research by researchers at Samsung Research's AI Research Center.

"This year's Samsung AI Forum will be a place for participants to better understand the various AI research efforts currently underway in terms of 'Scaling AI for the Real World' to increase the value of our lives," said Dr. Sebastian Seung, President and Head of Samsung Research. "We hope many people who are interested in the field of AI will participate in this year's forum, which will be held both online and in person."


Nouriel Roubini: Why AI poses a threat to millions of workers – Yahoo Finance

Business sectors ranging from agriculture and manufacturing to automotive and financial services are increasingly turning to artificial intelligence as a means to automate large swaths of their organizationsand, along the way, save enormous sums through improved efficiencies.

But, says "Megathreats" author and NYU Stern School of Business professor Nouriel Roubini, the rise of AI will also have a massively negative impact on workers throughout the economy.

AI has helped revolutionize everything from the smartphones in our pockets to our grocery stores, which use the technology to better predict which items customers want to see on shelves. However, Roubini, whose prediction of the 2008 financial crisis earned him the moniker "Dr. Doom," says AI poses a threat to millions of workers.

"The downside is that while AI, machine learning, robotics, automation increases the economic pie, potentially, it also leads to losses of jobs and labor income," Roubini said during an interview at Yahoo Finance's All Markets Summit.

Take autonomous cars. While they could dramatically reduce the number of car accidents, significantly cutting down on the deaths and injuries caused on the nation's roadways, they'll also put millions out of work. "You have, what, 5 million Uber and Lyft drivers, 5 million truckers and teamsters, and they're going to be gone for good," Roubini said. "And which jobs are they going to get?"


Fully autonomous vehicles are still years away from hitting the roads. The majority of the technology that's currently available is meant to assist drivers rather than actually control vehicles themselves. But automakers have made it clear that they are intent on developing the technology to the point where there's no need for a driver at all.

But according to Roubini, it's not just drivers and truckers who might be at risk of losing their jobs. As AI becomes more powerful, it could be used to replace workers in creative fields, including the arts.


"Increasingly, even cognitive jobs that can be divided into a number of tasks are also being automated," Roubini said. "Even creative jobs; there are now AIs that will create a script or a movie, or make a poem, or write... or paint, or even [write] a piece of music that soon enough is going to be top 10 in the Billboard magazine chart."

While it might be some time before AI is winning any major awards or art prizes, if ever, it is being used to create digital art. Take DALL-E, which allows users to type in a series of words and get an image generated by a model trained on millions of images pulled from the internet.

While artists are unlikely to disappear anytime soon, the fact that AI is racing into once unimaginable sectors of the economy could eventually mean Roubini's prognostications, like some of his others, will prove true.


See the original post:

Nouriel Roubini: Why AI poses a threat to millions of workers - Yahoo Finance

Microsoft’s GitHub Copilot AI is making rapid progress. Here’s how its human leader thinks about it – CNBC

Earlier this year, LinkedIn co-founder and venture capitalist Reid Hoffman issued a warning mixed with amazement about AI. "There is literally magic happening," said Hoffman, speaking to technology executives across sectors of the economy.

Some of that magic is becoming more apparent in creative spaces, like the visual arts, and the idea of "generative technology" has captured the attention of Silicon Valley. AI has even recently won awards at art exhibitions.

But Hoffman's message was squarely aimed at executives.

"AI will transform all industries," Hoffman told the members of the CNBC Technology Executive Council. "So everyone has to be thinking about it, not just in data science."

The rapid advances being made by Copilot AI, the automated code-writing tool from GitHub, the open source subsidiary of Microsoft, were an example Hoffman, who is on the Microsoft board, cited directly as a signal that all firms had better be prepared for AI in their world. Even if not making big investments in AI today, business leaders must understand the pace of improvement in artificial intelligence and the applications that are coming, or they will be "sacrificing the future," he said.

"100,000 developers took 35% of the coding suggestions from Copilot," Hoffman said. "That's a 35% increase in productivity, and off last year's model. ... Across everything we are doing, we will have amplifying tools, it will get there over the next three to 10 years, a baseline for everything we are doing," he added.

Copilot has already added another 5% to the 35% cited by Hoffman. GitHub CEO Thomas Dohmke recently told us that Copilot is now handling up to 40% of coding among programmers using the AI in the beta testing period over the past year. Put another way, for every 100 lines of code, 40 are being written by the AI, with total project time cut by up to 55%.

Copilot, trained on massive amounts of open source code, monitors the code being written by a developer and works as an assistant, taking the developer's input and suggesting the next line of code. The suggestions are often multi-line, and often the "boilerplate" code that is needed but is a waste of a human's time to recreate. We all have some experience with this form of AI now, in places like email, with both Microsoft's and Google's mail programs suggesting the next few words we might want to type.
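Purely as an illustration of the flow described above (a hypothetical sketch, not actual Copilot output), a developer types a signature and docstring, and the assistant proposes the routine body, which the developer can accept, edit, or reject:

```python
# The developer types this much (the "prompt" the assistant sees):
def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to Fahrenheit."""
    # A Copilot-style assistant would typically propose the line below;
    # the developer accepts it as-is, edits it, or rejects it.
    return celsius * 9 / 5 + 32


print(celsius_to_fahrenheit(100))  # prints 212.0
```

The conversion formula itself is exactly the kind of already-solved boilerplate Dohmke describes: code the world has written many times, which the assistant reproduces so the developer does not have to.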

AI can be logical about what may come next in a string of text. But Dohmke said, "It can't do more, it can't capture the meaning of what you want to say."

Whether a company is a supermarket working on checkout technology or a banking company working on the customer experience in an app, they are all effectively becoming software companies, all building software. And once a C-suite has developers, it needs to be looking at developer productivity and how to continuously improve it.

That's where the 40 lines of code come in. "After a year of Copilot, about 40% of code was written by the AI where Copilot was enabled," Dohmke said. "And if you show that number to executives, it's mind-blowing to them. ... doing the math on how much they are spending on developers."

With projects being completed in less than half the time, a logical conclusion is that there will be less work for humans to do. But Dohmke says another way of looking at the software developer's job is that it involves many more high-value tasks than rewriting code that already exists in the world. "The definition of 'higher value' work is to take away the boilerplate, menial work of writing things already done over and over again," he said.

The goal of Copilot is to help developers "stay in the flow" when they are on the task of coding. That's because some of the time spent writing code is really spent looking for existing code to plug in from browsers, "snippets from someone else," Dohmke said. And that can lead coders to get distracted. "Eventually they are back in editor mode and copy and paste a solution, but have to remember what they were working on," he said. "It's like a surfer on a wave in the water and they need to find the next wave. Copilot is keeping them in the editing environment, in the creative environment and suggesting ideas," Dohmke said. "And if the idea doesn't work, you can reject it, or find the closest one and can always edit," he added.

The GitHub CEO expects more of those Copilot code suggestions to be taken in the next five years, up to 80%. Unlike a lot going on in the computer field, Dohmke said of that forecast, "It's not an exact science ... but we think it will tremendously grow."

After being in the market for a year, he said new models are getting better fast. As developers reject some code suggestions from Copilot, the AI learns. And as more developers adopt Copilot it gets smarter by interacting with developers similar to a new coworker, learning from what is accepted or rejected. New models of the AI don't come out every day, but every time a new model is available, "we might have a leap," he said.

But the AI is still far short of replacing humans. "Copilot today can't do 100% of the task," Dohmke said. "It's not sentient. It can't create itself without user input."

With Copilot still in private beta testing among individual developers (400,000 developers signed up to use the AI in the first months it was available, and hundreds of thousands more since), GitHub has not announced any enterprise clients, but it expects to begin naming business customers before the end of the year. No enterprise pricing has been disclosed yet, but in the beta test Copilot pricing has been set at a flat rate per developer: $10 per individual per month, or $100 annually, often expensed by developers on company cards. "And you can imagine what they earn per month, so it's a marginal cost," Dohmke said. "If you look at the 40% and think of the productivity improvement, and take 40% of opex spend on developers, the $10 is not a relevant cost. ... I have 1,000 developers and it's way more money than 1,000 x 10," he said.
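Dohmke's cost argument can be sketched numerically. The $10 per-developer price and the 40% figure come from the article; the fully loaded monthly developer cost below is a hypothetical assumption added only for illustration:

```python
developers = 1_000
copilot_per_dev_month = 10     # USD, flat beta price cited in the article
dev_cost_per_month = 12_000    # USD, hypothetical fully loaded cost per developer
ai_written_share = 0.40        # share of code Copilot writes where enabled (article figure)

tool_spend = developers * copilot_per_dev_month       # monthly Copilot bill
dev_spend = developers * dev_cost_per_month           # monthly developer opex
ai_equivalent_spend = dev_spend * ai_written_share    # opex matched by AI-written code

print(f"Tool: ${tool_spend:,}/mo vs AI-equivalent opex: ${ai_equivalent_spend:,.0f}/mo")
```

Under these assumed numbers, a $10,000 monthly tool bill sits against millions of dollars of developer spend, which is the comparison behind "the $10 is not a relevant cost."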

The GitHub CEO sees what is taking place now with AI as the next logical phase of the productivity advances in a coding world he has been a part of since the late 1980s. That was a time when coding was emerging out of the punch card phase, and there was no internet, and coders like Dohmke had to buy books and magazines, and join computer clubs to gain information. "I had to wait to meet someone to ask questions," he recalled.

That was the first phase of developer productivity, and then came the internet, and now open source, allowing developers to find other developers on the internet who had already "developed the wheel," he said.

Now, whether the coding task is related to payment processing or a social media login, most companies, whether startups or established enterprises, pull in open source code. "There is a huge dependency tree of open source that already exists," Dohmke said.

It's not uncommon for up to 90% of the code in mobile phone apps to be pulled from the internet and open source platforms like GitHub. In a coding era of reusing "whatever else is already available," that is not what will differentiate a developer or an app.

"AI is just the third wave of this," Dohmke said. "From punch cards, to building everything ourselves, to open source, to now, within a lot of code, AI writing more," he said. "With 40%, soon enough, if AI spreads across industries, the innovation on the phone will be created with the help of AI and the developer."

Today, and into the foreseeable future, Copilot remains a technology that is trained on code and makes proposals by looking things up in a library of code. It is not inventing any new algorithms, but at the current pace of progress, eventually, "it is entirely possible that with the help of a developer it will create new ideas of source code," Dohmke said.

But even that still requires a human touch. "Copilot is getting closer, but it will always need developers to create innovation," he said.

Continue reading here:

Microsoft's GitHub Copilot AI is making rapid progress. Here's how its human leader thinks about it - CNBC

What the White House’s ‘AI Bill of Rights’ blueprint could mean for HR tech – HR Dive

Over the last decade, the use of artificial intelligence in areas like hiring, recruiting and workplace surveillance has shifted from a topic of speculation to a tangible reality for many workplaces. Now, those technologies have the attention of the highest office in the land.

On Oct. 4, the White House's Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights, a 73-page document outlining guidance on addressing bias and discrimination in automated technologies so that protections are embedded from the beginning, marginalized communities have a voice in the development process, and designers work hard to ensure the benefits of technology reach all people.

The blueprint focuses on five areas of protections for U.S. citizens in relation to AI: system safety and effectiveness; algorithmic discrimination; data privacy; notice and explanation when an automated system is used; and access to human alternatives when appropriate. It also follows the publication in May of two cautionary documents by the U.S. Equal Employment Opportunity Commission and the U.S. Department of Justice specifically addressing the use of algorithmic decision-making tools in hiring and other employment actions.

Employment is listed in the blueprint as one of several sensitive domains deserving of enhanced data and privacy protections. Individuals handling sensitive employment information should ensure it is used only for functions strictly necessary for that domain, while any non-necessary functions should remain optional and consent-based.

Additionally, the blueprint states that continuous surveillance and monitoring systems should not be used in physical or digital workplaces, regardless of a person's employment status. Surveillance is particularly sensitive in the union context; the blueprint notes that federal law requires employers, and any consultants they may retain, to report the costs of surveilling employees in the context of a labor dispute, providing a transparency mechanism to help protect worker organizing.

The prevalence of employment-focused AI and automation may depend on the size and type of organization studied, though research suggests a sizable portion of employers have adopted the tech.

For example, a February survey by the Society for Human Resource Management found that nearly one-quarter of employers used such tools, including 42% of employers with more than 5,000 employees. Of all respondents utilizing AI or automation, 79% said they were using this technology for recruitment and hiring, the most common such application cited, SHRM said.

Similarly, a 2020 Mercer study found that 79% of employers were either already using, or planned to start using that year, algorithms to identify top candidates based on publicly available information. But AI has applications extending beyond recruiting and hiring. Mercer found that most respondents said they were also using the tech to handle employee self-service processes, conduct performance management and onboard workers, among other needs.

Employers should note that the blueprint is not legally binding, does not constitute official U.S. government policy and is not necessarily indicative of future policy, said Niloy Ray, shareholder at management-side firm Littler Mendelson. Though the principles contained in the document may be appropriate for AI and automation systems to follow, the blueprint is not prescriptive, he added.

"It helps add to the scholarship and thought leadership in the area, certainly," Ray said. "But it does not rise to the level of some law or regulation."

Employers may benefit from a single federal standard for AI technologies, Ray said, particularly given that this is an active legislative area for a handful of jurisdictions. A New York City law restricting the use of AI in hiring will take effect next year. Meanwhile, a similar law has been proposed in Washington, D.C., and California's Fair Employment and Housing Council has proposed regulations on the use of automated decision systems.

Then there is the international regulatory landscape, which can pose even more challenges, Ray said. Because of the complexity involved, Ray added that employers might want to see more discussion around a unified federal standard, and the Biden administration's blueprint may be a way of jump-starting that discussion.

"Let's not have to jump through 55 sets of hoops," Ray said of the potential for a federal standard. "Let's have one set of hoops to jump through."

The blueprint's inclusion of standards around data privacy and other areas may be important for employers to consider, as AI and automation platforms used for hiring often take into account publicly available data that job candidates do not realize is being used for screening purposes, said Julia Stoyanovich, co-founder and director at New York University's Center for Responsible AI.

Stoyanovich is co-author on an August paper in which a group of NYU researchers detailed their analysis of two personality tests used by two automated hiring vendors, Humantic AI and Crystal. The analysis found that the platforms exhibited "substantial instability on key facets of measurement" and concluded that they "cannot be considered valid personality assessment instruments."

Even before AI is introduced into the equation, the idea that a personality profile of a candidate could be a predictor of job performance is a controversial one, Stoyanovich said. Laws like New York City's could help to provide more transparency on how automated hiring platforms work, she added, and could give HR teams a better idea of whether tools truly serve their intended purposes.

"The fact that we are starting to regulate this space is really good news for employers," Stoyanovich said. "We know that there are tools that are proliferating that don't work, and it doesn't benefit anyone except for the companies that are making money selling these tools."

See the original post here:

What the White House's 'AI Bill of Rights' blueprint could mean for HR tech - HR Dive

Red Hat and IBM Research Advance IT Automation with AI-Powered Capabilities for Ansible – Business Wire

CHICAGO ANSIBLEFEST--(BUSINESS WIRE)--Red Hat, Inc., the world's leading provider of open source solutions, and IBM Research today announced Project Wisdom, the first community project to create an intelligent, natural language processing capability for Ansible and the IT automation industry. Using an artificial intelligence (AI) model, the project aims to boost the productivity of IT automation developers and make IT automation more achievable and understandable for diverse IT professionals with varied skills and backgrounds.

According to a 2021 IDC prediction[1], by 2026, 85% of enterprises will combine human expertise with AI, ML, NLP, and pattern recognition to augment foresight across the organization, making workers 25% more productive and effective. Technologies such as machine learning, deep learning, natural language processing, pattern recognition, and knowledge graphs are producing increasingly accurate and context-aware insights, predictions, and recommendations.

Project Wisdom, underpinned by AI foundation models derived from IBM's AI for Code efforts, works by enabling a user to input a command as a straightforward English sentence. It then parses the sentence and builds the requested automation workflow, delivered as an Ansible Playbook, which can be used to automate any number of IT tasks. Unlike other AI-driven coding tools, Project Wisdom does not focus on application development; instead, the project centers on addressing the rise of complexity in enterprise IT as hybrid cloud adoption grows.
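The press release includes no sample output, but the described flow (an English request in, an Ansible Playbook out) might look something like the following hypothetical sketch; the request wording, host group, and task names are invented for illustration:

```yaml
# Hypothetical request: "Install nginx on the web servers and make sure it is running"
- name: Deploy and start nginx
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Generating syntactically correct YAML of this shape, with valid module names and parameters, is the gap between plain English and Ansible content that the project aims to bridge.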

From human readable to human interactive

Becoming an automation expert demands significant effort and resources over time, with a learning curve to navigate varying domains. Project Wisdom intends to bridge the gap between Ansible YAML code and human language, so users can use plain English to generate syntactically correct and functional automation content.

It could enable a system administrator who typically delivers on-premises services to reach across domains to build, configure, and operate in other environments, using natural language to generate playbook instructions. A developer who knows how to build an application, but who lacks the skill set to provision it on a new cloud platform, could use Project Wisdom to expand proficiencies in these new areas to help transform the business. Novices across departments could generate content right away while still building foundational knowledge, without the dependencies of traditional teaching models.

Driving open source innovation with collaboration

While the power of AI in enterprise IT cannot be denied, community collaboration, along with insights from Red Hat and IBM, will be key in delivering an AI/ML model that aligns to the key tenets of open source technology. Red Hat has more than two decades of experience in collaborating on community projects and protecting open source licenses in defense of free software. Project Wisdom, and its underlying AI model, are an extension of this commitment to keeping all aspects of the code base open and transparent to the community.

As hybrid cloud operations at scale become a key focus for organizations, Red Hat is committed to building the next wave of innovation on open source technology. As IBM Research and Ansible specialists at Red Hat work to fine tune the AI model, the Ansible community will play a crucial role as subject matter experts and beta testers to push the boundaries of what can be achieved together. While community participation is still being worked through, those interested can stay up to date on progress here.

Supporting Quotes

Chris Wright, CTO and SVP of Global Engineering, Red Hat: "This project exemplifies how artificial intelligence has the power to fundamentally shift how businesses innovate, expanding capabilities that typically reside within operations teams to other corners of the business. With intelligent solutions, enterprises can decrease the barrier to entry, address burgeoning skills gaps, and break down organization-wide silos to reimagine work in the enterprise world."

Ruchir Puri, chief scientist, IBM Research; IBM Fellow; vice president, IBM Technical Community: "Project Wisdom is proof of the significant opportunities that can be achieved across technology and the enterprise when we combine the latest in artificial intelligence and software. It's truly an exciting time as we continue advancing how today's AI and hybrid cloud technologies are building the computers and systems of tomorrow."

[1] IDC FutureScape: Worldwide Artificial Intelligence and Automation 2022 Predictions, Doc # US48298421, Oct 2021

Additional Resources

Connect with Red Hat

About Red Hat, Inc.

Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

About IBM

IBM is a leading global hybrid cloud, AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 4,000 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity and service. For more information, visit https://research.ibm.com.

Forward-Looking Statements

Except for the historical information and discussions contained herein, statements contained in this press release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are based on the company's current assumptions regarding future business and financial performance. These statements involve a number of risks, uncertainties and other factors that could cause actual results to differ materially. Any forward-looking statement in this press release speaks only as of the date on which it is made. Except as required by law, the company assumes no obligation to update or revise any forward-looking statements.

Red Hat, the Red Hat logo and Ansible are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

The rest is here:

Red Hat and IBM Research Advance IT Automation with AI-Powered Capabilities for Ansible - Business Wire

AI-powered government finances: making the most of data and machines – Global Government Forum


Governments are paying growing attention to the potential of artificial intelligence (the simulation of human intelligence processes by machines) to enhance what they do.

To explore how public authorities are approaching the use of AI for tasks related to public finances, Global Government Fintech, the sister title of Global Government Forum, convened an international panel on 4 October 2022 for a webinar titled "How can AI help public authorities save money and deliver better outcomes?".

The discussion, organised in partnership with SAS and Intel, highlighted how AI is already helping departments to deliver results, but also that AI remains very much an emerging and, to many, rather nebulous field, with many hurdles to clear before widespread use. "Discussions of artificial intelligence often bring up connotations of an Orwellian nature, dystopian futures, Frankenstein," said Peter Kerstens, advisor, technological innovation & cyber security, at the European Commission's Financial Services Department. "That is really a challenge for positive adoption and fair use of artificial intelligence, because people are apprehensive about it."

Like most technology-based areas, it is a field that is also moving very quickly. "If the last class you took in data science was three years ago, it's already dated," cautioned Steve Keller, acting director of data strategy at the US Treasury's Bureau of the Fiscal Service, in his own opening remarks.

Kerstens began by describing the very name "artificial intelligence" as a big problem, asserting that AI is neither artificial nor particularly intelligent, at least not in the way that humans are intelligent.

"A better way to think about artificial intelligence and machine learning is self-learning, high-capacity data processing and data analytics, and the application of mathematical and statistical methodologies to data," he explained. "That is, of course, not a very appealing name, but that is what it is. But the self-learning or self-empowering element is very important in AI, because you have to look at it in comparison to traditional data processing."

Continuing this theme of caution, he further explained: "Like all technology, AI enhances human and organisational capability for the better, but potentially also for the worse. So, it really depends on what use you make of that tool. You can make very positive use of it. But you can also make very negative uses of it. And that's why governance of your artificial intelligence and machine learning, and potentially rules and ethics, are important."

For financial regulators, AI is proving useful in helping to process the vast amounts of data and reports that companies must submit. "It goes beyond human capability, or you have to put lots and lots of people onto it to process just the incoming information," he said.

Read more: Biden sets out AI Bill of Rights to protect citizens from threats from automated systems

Kerstens then mentioned AI's potential for law enforcement. Monitoring the vast volumes of money moving through the financial system for fraud, sanctions and money laundering requires very powerful systems. "But this is also risky because it comes very close to mass surveillance," he said. "So, if you apply artificial intelligence or machine learning engines onto all of these flows, you really get into this dystopian future of Big Brother."

Kerstens also touched on AI's use in understanding macroeconomic developments. "Typically, macroeconomic policy assessment is very politically driven, and this blurs the objectivity of the assessment. AI assessment is much more independent, because it just looks at the data without any preconceived notions and draws conclusions, including conclusions that may not necessarily be very desirable," he said.

The US Treasury's Keller described the ultimate aim of AI as being to improve decision accuracy, forecasting and speed: "trying to use data to make scientific decisions." This includes, he continued, "testing and verifying our assumptions with data to help make sure that we don't break things, but also help us ask important questions."

He provided four AI use areas for the Bureau of the Fiscal Service: Treasury warrants (authorisations that a payment be made); fraud detection; monitoring; and entity resolution.

In the first area, he said the focus was turning bills into "literally a dataset": the bureau has experimented with using natural language processing to turn written legislation into coherent, machine-readable data that has account codes and budgeted dollars for those account codes. In the second area, he said the focus was checking that people are who they say they are, and how to detect that at scale; in the third area, uses include monitoring whether people are using services correctly.
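The bureau has not published its pipeline, but the "bills into a dataset" idea can be sketched in miniature. The sentence and the account-code format below are invented for illustration; a real system would rely on NLP models rather than a single regular expression:

```python
import re

# Invented snippet of legislative text (not a real bill).
text = ("For necessary expenses of the Bureau, account 020-0520, "
        "$5,000,000 is appropriated; for account 020-0530, $250,000.")

# Pair each account code with the first dollar amount that follows it,
# producing machine-readable account-to-budget data.
pattern = re.compile(r"account (\d{3}-\d{4})\D*?\$([\d,]+)")
appropriations = {code: int(amount.replace(",", ""))
                  for code, amount in pattern.findall(text)}

print(appropriations)  # {'020-0520': 5000000, '020-0530': 250000}
```

Even this toy version shows the payoff Keller describes: once amounts are keyed to account codes, the legislation can be queried and joined like any other dataset.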

"We're collecting data from so many elements, and often in large public-sector areas, the left hand doesn't talk to the right hand," he said, in the context of entity resolution. "We often need to find a way to connect these two up in such a way that we are looking at the same entity so that we can share data in the long run. So, data can be brought together and utilised by data scientists, or eventually to create AI that would help these other three things to happen."

Read more: Artificial intelligence in the public sector: an engine for innovation in government if we get it right

Keller also raised ethical, upskilling and cultural considerations. "If people start buying IT products that are going to have AI organically within them, or they're building them, [questions should arise such as]: are we doing it ethically? Do we have analytics standards? How are we testing? Are we actually getting value from the product? Or is it a total risk?"

He concluded his opening remarks by outlining how the bureau was building an internal data ecosystem, including a data governance council, data analytics lab, high-value use case compendium and data university.

The Centre for Data Ethics & Innovation (CDEI), which is part of the UK Department for Digital, Culture, Media and Sport, was established three years ago to drive responsible innovation across the public sector.

"A huge focus is around supporting teams to think about governance approaches," the centre's deputy director, Sam Cannicott, explained. "How do they develop and deploy technology in a responsible way? How do they have mechanisms for identifying and then addressing some of the ethical questions that these technologies raise?"

The CDEI has worked with a varied cross-section of the public sector, including the Ministry of Defence (to explore responsible AI use in defence); police forces; and the Department for Education and local authorities (to explore the use of data analytics in children's social care). "These are all really sensitive, often controversial, areas, but also where data can help inform decision-making," he said.

Read more: Canada to create top official to police artificial intelligence under new data law

The CDEI does not prescribe what should be done. Instead it helps different teams to think through these questions themselves.

"Ultimately, the questions are complex," Cannicott said. "While lots of teams might seek an easy answer, [to] be told 'what you're doing is fine', it's often more complicated, particularly when we look at how you develop a system, then deploy it, and continue to monitor and evaluate. So, we support teams to think about the whole lifecycle process."

The CDEI's current work programme is focused on three areas: building an effective AI assurance ecosystem (including exploring standards and impact assessments, as well as risk assessments that might be undertaken before a technology is deployed); responsible data access, including a focus on privacy-enhancing technologies; and transparency (the CDEI has been working with the Central Digital and Data Office to develop the UK's first public sector algorithmic transparency standard).

This is underpinned by a public attitudes function to ensure citizens' views inform the CDEI's work, which is important when it comes to the critical challenge of trust.

Dr Joseph Castle, adviser on strategic relationships and open source technologies at SAS, described how public authorities around the globe are using AI across a diverse set of fields, ranging from areas such as infrastructure and transport through to healthcare.

In government finance, he said, authorities are using analytics and AI to assess policy, risk, fraud and improper payments.

Castle, who previously worked for more than 20 years in various US federal government roles, provided two examples of SAS work in the public sector: with Italys Ministry of Economics and Finance (MEF), and with Belgiums Federal Public Service Finance.

In the Italian example, he said MEF used analytics to calculate risk on financial guarantees, providing up-to-date reporting for improved systematic liquidity and risk management during COVID-19; work with the Belgian ministry, meanwhile, has been on using analytics and AI to predict the impact of new tax rules.

"The most recent focus for public entities has been on AI research and governance, leading to a better understanding of AI technology itself and responsible innovation," he said. "Public sector AI maturation allows for improved service, reduced costs and trusted outcomes."

Australias National Artificial Intelligence Centre launched in December 2021. It aims to accelerate positive AI adoption and innovation to benefit businesses and communities.

Stela Solar, who is the centre's director, described AI's ability to scale as "incredibly powerful". But, she said, it is "incredibly important" that organisations exploring and using AI tools do so responsibly and inclusively.

In opening remarks reflecting the centres focus, she proposed three factors that would be important to help maximise AIs impact beyond government.

The first, she said, is that more should be done to connect businesses with research- and innovation-based organisations. A national listening tour organised by the centre had found, she said, low awareness of AI's capabilities. "Unless we empower every business to be connected to those opportunities, we won't really succeed," she warned.

Her second point focused on small- and medium-sized businesses. "Much of the guidance that exists is really targeted at large enterprises to experience, create and adopt AI," she said. "But small and medium business is really struggling in this area, which is ironic as AI really presents as a great equaliser opportunity because it can deal with scale and take action at scale. It can really uplift the impact that small and medium businesses can have."

Her third point focused on community understanding, which she described as a critical factor in accelerating the uptake of AI technologies. This includes achieving engagement from "diverse perspectives in how AI is shaped, created [and] implemented".

Topics including trust in AI systems, the risk of bias and overcoming scepticism were addressed further during the webinar's Q&A.

In terms of trust, what goes into any AI tool affects what comes out. "How reliable they are [AI systems] depends on how good and how unbiased the dataset was," Kerstens said. "Does it have known biases or something that is a proxy for biases? For example, sometimes people use addresses. People's addresses, especially in countries where you have very diverse populations, and where different population groups and different racial or religious groups live in particular areas, can be a proxy for religious affiliation, or for race. If you're not careful, your artificial intelligence engine is going to build in these biases, and therefore it's going to be biased."
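
Kerstens' warning about proxy variables is easy to demonstrate on synthetic data: even with the protected attribute removed from a dataset, a correlated feature such as neighbourhood can reproduce it almost exactly. All of the data below is invented and deliberately exaggerated for clarity.

```python
# In a (synthetic) highly segregated city, neighbourhood alone predicts
# group membership, so "neighbourhood" acts as a proxy for "group".
from collections import Counter

records = ([("north", "group_x")] * 90 + [("north", "group_y")] * 10
           + [("south", "group_x")] * 10 + [("south", "group_y")] * 90)

def group_share(neighbourhood: str, group: str) -> float:
    # Fraction of residents of a neighbourhood who belong to a group.
    in_hood = [g for n, g in records if n == neighbourhood]
    return Counter(in_hood)[group] / len(in_hood)

# Knowing only the neighbourhood identifies the group 90% of the time,
# so a model trained on this feature has effectively learned the group.
print(group_share("north", "group_x"))  # 0.9
print(group_share("south", "group_y"))  # 0.9
```

This is why simply deleting a sensitive column is not enough: any feature this strongly associated with it carries the same signal into the model.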

"It's not just about bias within AI, it's bias in the data," said Castle, emphasising the importance of responsible innovation across the analytics lifecycle.

Read more: Brazil's national AI strategy is unachievable, government study finds

Solar provided a further dimension, adding that organisations can often find themselves working with substantial gaps in data (which she referred to as "data deserts"). "It's actually been impressive to see some of the grassroots efforts across communities to gather datasets to increase representation and diversity in data," she said, giving examples from Queensland and New South Wales where, respectively, communities had provided data to help shape and steer investments and fill gaps in elderly health data.

On this theme she said that co-design of AI systems with the communities who the technology serves or affects "will go a long way to address some of the biases and also will go a long way into the question of what should be done and what shouldn't be done".

Scepticism about the use of AI from policymakers, particularly those who are not technologists, was discussed as a common challenge.

"Sometimes there's a push to use these technologies because they can be seen as a way to save money," observed Cannicott. There is also nervousness because some have seen where things have gone wrong, and they don't want to take the blame.

He emphasised the importance of experimentation, governance ("having really clear accountability and decision-making frameworks to walk through the ethical challenges that might come up and how you might address them") and public engagement.

"Some polling we did fairly recently suggested that around half of people don't think the data that government collects from them is used for their benefit," he said. "There's quite a bit of a trust gap there, [so] decision makers [have] to start demonstrating that they are able to use data in a way that benefits people's lives."

Keller emphasised the importance of incorporating recourse into AI systems. "If I build a system that detects fraud, and flags somebody as a villain and they're not, we need to give them an easy route to appeal that process," he said.

AI is often a purely technical conversation. But, when it comes to government use of AI, policy and politics inevitably get entwined.

"To develop artificial intelligence, you need vast amounts of data. Europeans tend to look at personal data protection in a different way than people in the US do," pointed out Kerstens.

Organisational leaders driven by doctrines could struggle to accept a role for AI. "If you run an organisation or a governmental entity based on politics, artificial intelligence isn't something you're going to like very much because it is the data speaking to you," he continued. "They do like artificial intelligence and data when the data confirms a doctrinal or political view. But if the data does not support [their] view, they'll dismiss it."

Public sector agencies also need to be savvy about AI solutions they are buying. "Increasingly, public-sector organisations are being sold off-the-shelf tools. And actually, that's quite a dangerous space to be in," said Cannicott. "Because, for example, if you [look at] children's social care, different geographies, different populations, there's all sorts of different factors in that data. If you're not clear on where the data is coming from to build those tools initially, then you probably shouldn't be using that technology. That's also where testing and experimentation is very important."

There is clearly momentum building behind AI. But an overriding theme from the webinar was the extent to which many remain in the dark or deeply sceptical.

"Often I've seen AI be implemented by someone who's very passionate, and it stays as this hobby experiment and project," said Solar, emphasising the importance of developing a base-level understanding of AI across all levels of an organisation. "For it really to get the momentum across the organisation and to be rolled out into full production, with all the benefits that it can bring, you really need to bring along the policy decision-makers, the leaders, the entire organisational chain," she said.

Kerstens concluded by emphasising that the story of AI's growing deployment across the public sector (and beyond) remains in the early chapters. "AI is very powerful. It's just very early days," he said. "But what people are most afraid of is that they don't understand how the artificial intelligence engine thinks. We should focus on productive, useful applications and not the nefarious ones."

AI's advocates will be hoping that fewer people, over time, come to compare it to the tale of Frankenstein.

The Global Government Fintech webinar "How can AI help public authorities save money and deliver better outcomes?" was held on 4 October 2022, with the support of knowledge partners SAS and Intel. You can watch the 75-minute webinar via our dedicated event page.

Read more: AI intelligence: equipping public and civil service leaders with the skills to embrace emerging technologies

Go here to see the original:

AI-powered government finances: making the most of data and machines - Global Government Forum

DigestAI's 19-year-old founder wants to make education addictive – TechCrunch

When Quddus Pativada was 14, he wished that he had an app that could summarize his textbooks for him. Just five years later, Pativada has been there and done that: earlier this year, he launched the AI-based app Kado, which turns photos, documents or PDFs into flash cards. Now, as the 19-year-old founder takes the stage for Startup Battlefield, he's looking to take his company, DigestAI, beyond flashcards to create an AI dialogue assistant that we can all carry around on our phones.

"If we make learning truly easy and accessible, it's something you could do as soon as you open your phone," Pativada told TechCrunch. "We want to put a teacher in every single person's phone for every topic in the world."

Quddus Pativada, founder of DigestAI, pitches as part of TechCrunch Startup Battlefield at TechCrunch Disrupt in San Francisco on October 18, 2022. Image Credits: Haje Kamps / TechCrunch

The company's AI is trained on data from the internet, but the algorithm is fine-tuned to recall specific use cases to make sure that its responses are accurate and not too thrown off by online chaos.

"We train it on everything, but the actual use cases are called within silos. We're calling it federated learning, where it's sort of siloed in and language models are operating on a use case basis," Pativada said. "This is good because it avoids malicious use."

Pativada said that this kind of product would be different from smart assistants like Apple's Siri or Amazon's Alexa because the information it provides would be more personalized and detailed. So, for certain use cases, like asking for sources to use in an essay, the AI will pull from academic journals to make sure that the information is accurate and appropriate for a classroom.

Despite running an educational AI startup, Pativada isn't currently in school. He took a gap year before going to college to work on his startup, but as DigestAI took off, he decided to keep building instead of going back to school. Growing up, he taught himself to code because he loved video games and wanted to make his own; by age 10, he had published a Flappy Bird clone on the App Store. Naturally, his technological ambitions matured a bit over time. Before founding DigestAI, Pativada built a COVID-19 contact tracing platform. At first, he just made the app as a tool for his classmates, but his work ended up being honored by the United Arab Emirates government.

Image Credits: DigestAI

So far, the outlook is good for the Dubai-based company. Pativada, who says he feels skittish about the "CEO" label and prefers to think of himself as just a founder, has raised $600,000 so far from angel investors like Mark Cuban and Shaan Patel, who struck a deal on Shark Tank for his SAT prep company, Prep Expert.

How does a 19-year-old in Dubai capture the attention of one of the most well-known startup investors? A cold email. Mark, we apologize if this admission makes your inbox even more nightmarish.

"I was watching a GQ video of Mark Cuban's daily routine," Pativada said. "He said he reads his emails every morning at 9 AM, and I looked at the time in Dallas, and it was about 9 AM. So I was like, maybe I should just shoot him an email and see what happens." While he was at it, he reached out to Patel, whose educational startup has done over $20 million in sales. Patel hopped on a video call with the teenage founder, and by the next week, he and Cuban both offered to invest in DigestAI.

"We raised our entire round through cold emails and Zoom," Pativada told TechCrunch. "It sort of helped because no one can see how young I look in person."

Before he decided to eschew college altogether, Pativada applied to Stanford and interviewed with an alumnus, as is standard in the admissions process. He didn't end up getting into the competitive Palo Alto university, but his interviewer, who works at Stanford, did end up investing in his company. Go figure.

"Our goal is to work with universities like Stanford," Pativada said. The company is also targeting enterprise clients. Currently, DigestAI works with some U.S.-based universities, Bocconi University in Italy, a European law firm and other clients. At the law firm, DigestAI is testing a tool that allows associates to text a WhatsApp number to quickly brush up on legal terms.

In the long term, DigestAI wants to create an SMS system where people can text the AI asking for help learning something; he wants information to be so accessible that it's addictive.

"That is what AI is: it's almost the best version of a human being," Pativada said.


Joe Rogan and Steve Jobs Have a 20-Minute Chat in AI-Powered Podcast – HYPEBEAST

Artificial intelligence has allowed us to simulate all kinds of situations through computer systems. Some of its main applications are language processing and speech recognition, and now, through play.ht and podcast.ai, we're actually able to see how far the technology has come by experiencing a conversation with someone who is not even on Earth anymore.

In an entirely AI-generated podcast, podcast.ai has created a full interview between Joe Rogan and Steve Jobs. While the first bit of the podcast is clunky with weird pauses and awkward laughing, it does start to move into real conversation touching on faith, tech companies and drugs, and at one point the AI-generated Jobs compares Adobe's services to a car where you have to buy all four wheels separately.

The crazy thing is some parts begin to sound believable, and actually keep you listening as you start to make a connection to what they are saying. This could be reinforced by the prevalence of Joe Rogan in the current podcast sphere, and the general curiosity of witnessing what Steve Jobs would have said if the two ever did meet. Have a listen below to experience the AI podcast for yourself.

In other tech news, an unopened first-generation Apple iPhone from 2007 auctioned for $39,000 USD.


Google TV is getting parent-controlled watchlists and AI-powered suggestions for kids – TechCrunch

Google is bringing a set of new kids-focused features such as parent-controlled watchlists and AI-powered suggestions to Google TV, the latest in a series of efforts from the Android-maker as it attempts to broaden the offerings of its TV operating system for family consumption.

The company said it is adding these features to the kids' profiles to improve content recommendation and exploration. "Parents can directly push titles to the 'must watch' lists for kids from their profiles (by just tapping the watchlist button on the titles they came across and pressing 'add')," the company explained in a blog post.

Image Credits: Google

The company is also introducing AI-powered recommendations for kids because Google loves AI. Children can now look at popular shows and movies on their Google TV home screen based on installed apps and parent-defined ratings levels. If they dont like a title that has been recommended to them and dont wish to see it again, they can press and hold the select button and then tap hide to remove the suggestion from the list.

Image Credits: Google

The new additions are part of Google's ongoing efforts to make its services more appropriate for kids. Google introduced supervised accounts for YouTube last year that help children migrate from YouTube Kids to the main YouTube app in a safe manner.

Parents can additionally create restrictions on content exploration. It allows guardians to define three levels of access: "Explore" for content suitable for viewers 9 and above; "Explore more" for viewers 13 and above; and "Most of YouTube" to enable access to all videos sans the age-restricted content.

Image Credits: YouTube

The search giant said it is also bringing this supervised experience to Google TV so kids can access the main YouTube app with appropriate content restrictions. Notably, when parents set up these supervised accounts, they provide consent for the collection and use of data from kids' profiles for compliance with COPPA, a U.S. privacy law that defines limits for websites providing services to children.

The company first introduced kids' profiles on Google TV last year, which allow parents to set limits on app access and screen time.

Google said these features are rolling out starting today on the Chromecast with Google TV (both 4K and HD variants) and other Google TV devices from manufacturers like Hisense, Philips, Sony and TCL.


Misinformation research relies on AI and lots of scrolling – NPR

Atilgan Ozdil/Anadolu Agency/Getty Images


What sorts of lies and falsehoods are circulating on the internet? Taylor Agajanian used her summer job to help answer this question, one post at a time. It often gets squishy.

She reviewed a social media post where someone had shared a news story about vaccines with the comment "Hmmm, that's interesting." Was the person actually saying that the news story was interesting, or insinuating that the story isn't true?

Agajanian read around and between the lines often while working at University of Washington's Center for an Informed Public, where she reviewed social media posts and recorded misleading claims about COVID-19 vaccines.

As the midterm election approaches, researchers and private sector firms are racing to track false claims about everything from ballot harvesting to voting machine conspiracies. But the field is still in its infancy even as the threats to the democratic process posed by viral lies loom. Getting a sense of which falsehoods people online talk about might sound like a straightforward exercise, but it isn't.

"The broader question is, can anyone ever know what everybody is saying?" says Welton Chang, CEO of Pyrra, a startup that tracks smaller social media platforms. (NPR has used Pyrra's data in several stories.)

Automating some of the steps the University of Washington team uses humans for, Pyrra uses artificial intelligence to extract names, places and topics from social media posts. Using the same technologies that in recent years have enabled AI to write remarkably like humans, the platform generates summaries of trending topics. An analyst reviews the summaries, weeds out irrelevant items like advertising campaigns, gives them a light edit and shares them with clients.
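
The first step Pyrra automates here, pulling candidate names and places out of post text, is a named-entity recognition task. The naive capitalization heuristic below is only a stand-in to show the shape of that task; production systems use trained NER models, and the example post is invented.

```python
# Naive named-entity candidate extraction: grab runs of capitalized
# words, skipping words capitalized only because they start a sentence.
import re

def candidate_entities(text: str) -> list[str]:
    ents = []
    for m in re.finditer(r"\b([A-Z][a-z]+(?:\s+[A-Z][a-z]+)*)\b", text):
        if m.start() == 0 or text[m.start() - 2] in ".!?":
            continue  # likely sentence-case, not a proper noun
        ents.append(m.group(1))
    return ents

post = "Officials in Ohio claim the Energy Department hid outage data."
print(candidate_entities(post))  # ['Ohio', 'Energy Department']
```

A real pipeline would feed candidates like these into entity linking and topic clustering before any summary is generated; this heuristic exists only to make the intermediate representation concrete.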

A recent digest of such summaries includes the unsubstantiated claim "Energy infrastructure under globalist attack."

The University of Washington's and Pyrra's approaches sit at the more extreme ends in terms of automation: few teams have so many staff (around 15) just to monitor social media, or rely so heavily on algorithms to synthesize material and output.

All methods carry caveats. Manually monitoring and coding content could miss out on developments; and while capable of processing huge amounts of data, artificial intelligence struggles to handle the nuances of distinguishing satire from sarcasm.

Although incomplete, having a sense of what's circulating in the online discourse allows society to respond. Research into voting-related misinformation in 2020 has helped inform election officials and voting rights groups about what messages to emphasize this year.

For responses to be proportionate, society also needs to evaluate the impact of false narratives. Journalists have covered misinformation spreaders who seem to have very high total engagement numbers but limited impact, which risks "spreading further hysteria over the state of online operations," wrote Ben Nimmo, who now investigates global threats at Meta, Facebook's parent company.

While language can be ambiguous, it's more straightforward to track who's been following and retweeting whom. Other researchers analyze networks of actors as well as narratives.

The plethora of approaches is typical of a field that's just forming, says Jevin West, who studies the origins of academic disciplines at University of Washington's Information School. Researchers come from different fields and bring methods they're comfortable with to start, he says.

West corralled research papers from the academic database Semantic Scholar mentioning 'misinformation' or 'disinformation' in their title or abstract, and found that many papers are from medicine, computer science and psychology, but there are also some from geology, mathematics and art.

"If we're a qualitative researcher, we'll go...and literally code everything that we see," West says. More quantitative researchers do large scale analysis like mapping topics on Twitter.

Projects often use a mix of methods. "If [different methods] start converging on similar kinds of...conclusions, then I think we'll feel a little bit better about it," West says.

One of the very first steps of misinformation research - before someone like Agajanian starts tagging posts - is identifying relevant content under a topic. Many researchers start their search with expressions they think people talking about the topic could use, see what other phrases and hashtags appear in the search results, add that to the query, and repeat the process.

It's possible to miss out on keywords and hashtags, not to mention that they change over time.

"You have to use some sort of keyword analysis," West says. "Of course, that's very rudimentary, but you have to start somewhere."
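
The iterative process described above (start from seed phrases, search, harvest co-occurring hashtags, fold them back into the query, repeat) can be sketched as a snowball loop. The mini "corpus" of posts and the seed phrase below are invented for illustration.

```python
# Snowball keyword expansion: grow a query vocabulary by harvesting
# hashtags that co-occur with already-known terms.
import re

corpus = [
    "ballot harvesting is out of control #electionfraud",
    "more on #electionfraud and the machines #riggedmachines",
    "great turnout at the county fair today",
    "#riggedmachines story keeps growing #auditnow",
]

def expand_keywords(seeds: set[str], posts: list[str], rounds: int = 3) -> set[str]:
    terms = set(seeds)
    for _ in range(rounds):
        # Posts matching any current term...
        hits = [p for p in posts if any(t in p for t in terms)]
        # ...contribute their hashtags as new candidate terms.
        new = {h for p in hits for h in re.findall(r"#\w+", p)}
        if new <= terms:
            break  # converged: no new hashtags found
        terms |= new
    return terms

print(sorted(expand_keywords({"ballot harvesting"}, corpus)))
```

Note the caveat from the article applies directly: the loop only finds terms reachable from the seeds, so keywords that never co-occur with them are missed, and the vocabulary drifts as the conversation changes.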

Some teams build algorithmic tools to help. A team at Michigan State University manually sorted over 10,000 tweets into pro-vaccine, anti-vaccine, neutral and irrelevant buckets as training data. The team then used the training data to build a tool that sorted over 120 million tweets into these buckets.

For the automatic sorting to remain relatively accurate as the social conversation evolves, humans have to keep annotating new tweets and feeding them into the training set, Pang-Ning Tan, a co-author of the project, told NPR in an email.
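
The shape of that pipeline, hand-labeled tweets training a classifier that then sorts unseen posts, can be illustrated with a toy Naive Bayes model over word counts. The Michigan State team's actual features and model are not described in the article; the tweets, labels and approach below are invented stand-ins showing only the flow from labeled data to automatic sorting.

```python
# Toy Naive Bayes text classifier: fit on hand-labeled examples,
# then predict stance buckets for unseen text.
import math
from collections import Counter, defaultdict

train = [
    ("vaccines save lives get your shot", "pro"),
    ("grateful for the vaccine rollout", "pro"),
    ("the vaccine is a scam do not comply", "anti"),
    ("refuse the shot it is poison", "anti"),
]

def fit(examples):
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    vocab = {w for c in word_counts.values() for w in c}
    best, best_lp = None, -math.inf
    for label, n in label_counts.items():
        total = sum(word_counts[label].values())
        lp = math.log(n / sum(label_counts.values()))
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out the score.
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

wc, lc = fit(train)
print(predict("this shot is a scam", wc, lc))  # anti
```

Retraining, as Tan describes, amounts to appending newly annotated tweets to `train` and refitting, so the model tracks new slang and narratives as they appear.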

If the interplay between machine detection and human review rings familiar, that might be because you've heard large social platforms like Facebook, Twitter and TikTok describe similar processes to moderate content.

Unlike the platforms, another fundamental challenge researchers have to face is data access. Much misinformation research uses Twitter data, in part because Twitter is one of the few social media platforms that easily lets users tap into its data pipeline - known as Application Programming Interface or API. This allows researchers to easily download and analyze large numbers of tweets and user profiles.

The data pipelines of smaller platforms tend to be less well-documented and could change on short notice.

Take the recently-deplatformed Kiwi Farms as an example. The site served as a forum for anti-LGBTQ activists to harass gay and trans people. "When it first went down, we had to wait for it to basically pop back up somewhere, and then for people to talk about where that somewhere is," says Chang.

"And then we can identify, okay, the site is now here - it has this similar structure, the API is the same, it's just been replicated somewhere else. And so we're redirecting the data ingestion and pulling content from there."

Facebook's data service CrowdTangle, while purporting to serve up all publicly available posts, has been found not to have consistently done so. On another occasion, Facebook bungled data sharing with researchers. Most recently, Meta is winding down CrowdTangle, with no alternative announced to be in place.

Other large platforms, like YouTube and TikTok, do not have an accessible API, a data service or collaboration with researchers at all. TikTok has promised more transparency for researchers.

In such a vast, fragmented, and shifting landscape, West says there's no great way at this point to say what's the state of misinformation on a given topic.

"If you were to ask Mark Zuckerberg, what are people saying on Facebook today? I don't think he could tell you," says Chang.
