The Impact of OpenAI's GPT 5. A New Era of AI | by Courtney Hamilton | Dec, 2023 – Medium

Introduction

OpenAI has recently made an exciting announcement that it is working on GPT 5, the next generation of its groundbreaking language model. This news comes hot on the heels of the release of GPT 4 Turbo, showcasing the rapid pace of AI development and OpenAI's commitment to pushing boundaries. GPT models have proven to be revolutionary, consistently delivering jaw-dropping improvements with each iteration. With OpenAI's evident enthusiasm for GPT 5 and CEO Sam Altman's interview, it is clear that this next model will be nothing short of mind-blowing.

One of the most intriguing aspects of GPT 5 is the potential for video generation from text prompts. This capability could have a profound impact on various fields, from education to creative industries. Just imagine being able to transform a simple text description into high-quality video content. The possibilities are endless.

OpenAI plans to achieve this wizardry by focusing on scale. GPT 5 will require a vast amount of data and computing power to reach its full potential. It will analyze a wide range of data sets, including text, images, and audio. This multidimensional approach will allow GPT 5 to excel across different modalities. OpenAI is partnering with NVIDIA for its cutting-edge GPUs and leveraging Microsoft's cloud infrastructure to ensure it has the necessary computational resources.

While an official release date for GPT 5 has not been announced, experts predict it could be launched sometime around mid to late 2024. OpenAI will undoubtedly take the time needed to meet their standards before releasing the model to the public. The wait may feel long, but rest assured, it will be worth it. Each iteration of GPT has shattered expectations, and GPT 5 promises to be the most powerful AI system yet.

However, with great power comes great responsibility. OpenAI recognizes the need for safeguards and constraints to prevent harmful outcomes. As GPT 5 potentially approaches the level of artificial general intelligence, questions arise about its autonomy and control. Balancing the potential benefits of increased intelligence with the risks it poses to society is an ongoing debate.


What Is Artificial Intelligence? From Software to Hardware, What You Need to Know – ExtremeTech

To many, AI is just a horrible Steven Spielberg movie. To others, it's the next generation of learning computers. But what is artificial intelligence, exactly? The answer depends on who you ask.

Broadly, artificial intelligence (AI) is the combination of mathematical algorithms, computer software, hardware, and robust datasets deployed to solve some kind of problem. In one sense, artificial intelligence is sophisticated information processing by a powerful program or algorithm. In another, an AI connotes the same information processing but also refers to the program or algorithm itself.

Many definitions of artificial intelligence include a comparison to the human mind or brain, whether in form or function. Alan Turing wrote in 1950 about thinking machines that could respond to a problem using human-like reasoning. His eponymous Turing test is still a benchmark for natural language processing. Later, however, Stuart Russell and Peter Norvig observed that humans are intelligent but not always rational.

As defined by John McCarthy in 2004, artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

Russell and Norvig saw two classes of artificial intelligence: systems that think and act rationally versus those that think and act like a human being. But there are places where that line begins to blur. AI and the brain use a hierarchical, profoundly parallel network structure to organize the information they receive. Whether or not an AI has been programmed to act like a human, on a very low level, AIs process data in a way common to not just the human brain but many other forms of biological information processing.

What distinguishes a neural net from conventional software? Its structure. A neural net's code is written to emulate some aspect of the architecture of neurons or the brain.
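
To make that concrete, here is a minimal illustration (my own sketch, not code from the article) of a single artificial "neuron" in Python: a weighted sum of inputs passed through an activation function, loosely mimicking how a biological neuron integrates incoming signals before firing.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs plus a bias,
    squashed through a sigmoid activation so the output lands between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Example: three inputs feeding one neuron
print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```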

The difference between a neural net and an AI is often a matter of philosophy more than capabilities or design. A robust neural net's performance can equal or outclass a narrow AI. Many "AI-powered" systems are neural nets under the hood. But an AI isn't just several neural nets smashed together, any more than Charizard is three Charmanders in a trench coat. All these different types of artificial intelligence overlap along a spectrum of complexity. For example, OpenAI's powerful GPT-4 AI is a type of neural net called a transformer (more on these below).

There is much overlap between neural nets and artificial intelligence, but the capacity for machine learning can be the dividing line. An AI that never learns isn't very intelligent at all.

IBM explains, "[M]achine learning is a subfield of artificial intelligence. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms. In fact, it is the number of node layers, or depth, of neural networks that distinguishes a single neural network from a deep learning algorithm, which must have more than three [layers]."

AGI stands for artificial general intelligence. An AGI is like the turbo-charged version of an individual AI. Today's AIs often require specific input parameters, so they are limited in their capacity to do anything but what they were built to do. But in theory, an AGI can figure out how to "think" for itself to solve problems it hasn't been trained to do. Some researchers are concerned about what might happen if an AGI were to start drawing conclusions we didn't expect.

In pop culture, when an AI makes a heel turn, the ones that menace humans often fit the definition of an AGI. For example, Disney/Pixar's WALL-E followed a plucky little trashbot who contends with a rogue AI named AUTO. Before WALL-E's time, HAL and Skynet were AGIs complex enough to resent their makers and powerful enough to threaten humanity.

Conceptually: An AI's logical structure has three fundamental parts. First, there's the decision process, usually an equation, a model, or just some code. Second, there's an error function, some way for the AI to check its work. And third, if the AI will learn from experience, it needs some way to optimize its model. Many neural networks do this with a system of weighted nodes, where each node has a value and a relationship to its network neighbors. Values change over time; stronger relationships have a higher weight in the error function.
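
As a rough sketch of those three parts (an illustration of the idea, not any particular framework's API), the toy Python model below bundles a decision process, an error function, and a weight-update step that lets it learn from experience:

```python
class TinyModel:
    """Illustrative only: the three parts of an AI's logical structure."""
    def __init__(self):
        self.weight = 0.5                    # a single weighted "node"

    def predict(self, x):                    # 1) decision process
        return self.weight * x

    def error(self, x, target):              # 2) error function: check its work
        return (self.predict(x) - target) ** 2

    def update(self, x, target, lr=0.01):    # 3) optimization from experience
        grad = 2 * (self.predict(x) - target) * x   # slope of the error
        self.weight -= lr * grad                    # bigger errors push harder

model = TinyModel()
for _ in range(200):                         # learn the relationship y = 2x
    model.update(x=3.0, target=6.0)
print(round(model.weight, 3))                # converges toward 2.0
```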

Physically: Typically, an AI is "just" software. Neural nets consist of equations or commands written in things like Python or Common Lisp. They run comparisons, perform transformations, and suss out patterns from the data. Commercial AI applications have typically been run on server-side hardware, but that's beginning to change. AMD launched the first on-die NPU (Neural Processing Unit) in early 2023 with its Ryzen 7040 mobile chips. Intel followed suit with the dedicated silicon baked into Meteor Lake. Dedicated hardware neural nets run on a special type of "neuromorphic" ASIC, as opposed to a CPU, GPU, or NPU.

A neural net is software, and a neuromorphic chip is a type of hardware called an ASIC (application-specific integrated circuit). Not all ASICs are neuromorphic designs, but neuromorphic chips are all ASICs. Neuromorphic design fundamentally differs from CPUs and only nominally overlaps with a GPU's multi-core architecture. But it's not some exotic new transistor type, nor any strange and eldritch kind of data structure. It's all about tensors. Tensors describe the relationships between things; they're a kind of mathematical object that can have metadata, just like a digital photo has EXIF data.

Tensors figure prominently in the physics and lighting engines of many modern games, so it may come as little surprise that GPUs do a lot of work with tensors. Modern Nvidia RTX GPUs have a huge number of tensor cores. That makes sense if you're drawing moving polygons, each with some properties or effects that apply to it. Tensors can handle more than just spatial data, and GPUs excel at organizing many different threads at once.

But no matter how elegant your data organization might be, it must filter through multiple layers of software abstraction before it becomes binary. Intel's neuromorphic chip, Loihi 2, affords a very different approach.

Loihi 2 is a neuromorphic chip that comes as a package deal with a compute framework named Lava. Loihi's physical architecture invites, almost requires, the use of weighting and an error function, both defining features of AI and neural nets. The chip's biomimetic design extends to its electrical signaling. Instead of ones and zeroes, on or off, Loihi "fires" in spikes with an integer value capable of carrying much more data. Loihi 2 is designed to excel in workloads that don't necessarily map well to the strengths of existing CPUs and GPUs. Lava provides a common software stack that can target neuromorphic and non-neuromorphic hardware. The Lava framework is explicitly designed to be hardware-agnostic rather than locked to Intel's neuromorphic processors.
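
For a sense of what spike-based signaling means in code, here is a generic leaky integrate-and-fire sketch. It is purely illustrative and does not use Intel's Lava API; the threshold, leak factor, and integer spike payload are invented for the example.

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Generic leaky integrate-and-fire sketch (not Lava / Loihi code).
    The neuron accumulates charge over time; when it crosses the threshold
    it emits a spike whose integer magnitude carries a payload, then resets."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current   # accumulate with leakage
        if potential >= threshold:
            spikes.append(int(potential * 10))   # graded (integer-valued) spike
            potential = 0.0                      # reset after firing
        else:
            spikes.append(0)                     # no spike this timestep
    return spikes

print(integrate_and_fire([0.3, 0.4, 0.5, 0.1, 0.9]))
```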

Machine learning models using Lava can fully exploit Loihi 2's unique physical design. Together, they offer a hybrid hardware-software neural net that can process relationships between multiple entire multi-dimensional datasets, like an acrobat spinning plates. According to Intel, the performance and efficiency gains are largest outside the common feed-forward networks typically run on CPUs and GPUs today. In Intel's comparison graph, the colored dots towards the upper right represent the highest performance and efficiency gains in what the company calls "recurrent neural networks with novel bio-inspired properties."

Intel hasn't announced Loihi 3, but the company regularly updates the Lava framework. Unlike conventional GPUs, CPUs, and NPUs, neuromorphic chips like Loihi 1/2 are more explicitly aimed at research. The strength of neuromorphic design is that it allows silicon to perform a type of biomimicry. Brains are extremely cheap, in terms of power use per unit throughput. The hope is that Loihi and other neuromorphic systems can mimic that power efficiency to break out of the Iron Triangle and deliver all three: good, fast, and cheap.

IBM's NorthPole processor is distinct from Intel's Loihi in what it does and how it does it. Unlike Loihi or IBM's earlier TrueNorth effort in 2014, NorthPole is not a neuromorphic processor. NorthPole relies on conventional calculation rather than a spiking neural model, focusing on inference workloads rather than model training. What makes NorthPole special is the way it combines processing capability and memory. Unlike CPUs and GPUs, which burn enormous power just moving data from Point A to Point B, NorthPole integrates its memory and compute elements side by side.

According to Dharmendra Modha of IBM Research, "Architecturally, NorthPole blurs the boundary between compute and memory. At the level of individual cores, NorthPole appears as memory-near-compute and from outside the chip, at the level of input-output, it appears as an active memory." IBM doesn't use the phrase, but this sounds similar to the processor-in-memory technology Samsung was talking about a few years back.

IBM's NorthPole AI processor. Credit: IBM

NorthPole is optimized for low-precision data types (2-bit to 8-bit) as opposed to the higher-precision FP16 / bfloat16 standard often used for AI workloads, and it eschews speculative branch execution. This wouldn't fly in an AI training processor, but NorthPole is designed for inference workloads, not model training. Using low precision and eliminating speculative branches allows the chip to keep enormous parallel calculations flowing across the entire chip. Against an Nvidia GPU manufactured on the same 12nm process, IBM reports NorthPole was 25x more energy efficient; even against GPUs built on more advanced process nodes, the company says it retained a roughly 5x efficiency advantage.
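
The low-precision arithmetic NorthPole leans on is, in general terms, quantization: mapping floating-point values onto a small integer range so inference can run on cheap integer math. The sketch below shows generic 8-bit affine quantization with NumPy; it illustrates the technique, not IBM's actual pipeline.

```python
import numpy as np

def quantize_int8(x):
    """Generic quantization to signed 8-bit integers (illustrative only).
    Floats are mapped onto [-127, 127] with a single scale factor."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the integer representation."""
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
print(np.max(np.abs(weights - dequantize(q, scale))))  # small rounding error
```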

NorthPole is still a prototype, and IBM has yet to say if it intends to commercialize the design. The chip doesn't fit neatly into any of the other buckets we use to subdivide different types of AI processing engine. Still, it's an interesting example of companies trying radically different approaches to building a more efficient AI processor.

When an AI learns, it's different than just saving a file after making edits. To an AI, getting smarter involves machine learning.

Machine learning takes advantage of a feedback channel called "back-propagation." A neural net is typically a "feed-forward" process because data only moves in one direction through the network. It's efficient but also a kind of ballistic (unguided) process. In back-propagation, however, later nodes in the process get to pass information back to earlier nodes.

Not all neural nets perform back-propagation, but for those that do, the effect is like changing the coefficients in front of the variables in an equation. It changes the lay of the land. This is important because many AI applications rely on a mathematical tactic known as gradient descent. In an x vs. y problem, gradient descent introduces a z dimension, making a simple graph look like a topographical map. The terrain on that map forms a landscape of probabilities. Roll a marble down these slopes, and where it lands determines the neural net's output. But if you change that landscape, where the marble ends up can change.
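
Stripped of the neural net, gradient descent itself is only a few lines. The sketch below (my own illustration, with a made-up loss surface) drops a "marble" onto a simple two-dimensional landscape and rolls it a little way downhill at each step:

```python
def loss(x, y):
    """A toy 'landscape': height z as a function of x and y."""
    return (x - 1) ** 2 + (y + 2) ** 2

def grad(x, y):
    """Slope of the landscape in each direction (partial derivatives)."""
    return 2 * (x - 1), 2 * (y + 2)

x, y, lr = 5.0, 5.0, 0.1              # drop the "marble" somewhere on the map
for _ in range(100):
    gx, gy = grad(x, y)
    x, y = x - lr * gx, y - lr * gy   # roll a little way downhill each step
print(round(x, 3), round(y, 3))       # settles near the minimum at (1, -2)
```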

We also divide neural nets into two classes, depending on the problems they can solve. In supervised learning, a neural net checks its work against a labeled training set or an overwatch; in most cases, that overwatch is a human. For example, SwiftKey learns how you text and adjusts its autocorrect to match. Pandora uses listeners' input to classify music to build specifically tailored playlists. 3blue1brown has an excellent explainer series on neural nets, where he discusses a neural net using supervised learning to perform handwriting recognition.

Supervised learning is great for fine accuracy on an unchanging set of parameters, like alphabets. Unsupervised learning, however, can wrangle data with changing numbers of dimensions. (An equation with x, y, and z terms is a three-dimensional equation.) Unsupervised learning tends to win with small datasets. It's also good at noticing subtle things we might not even know to look for. Ask an unsupervised neural net to find trends in a dataset, and it may return patterns we had no idea existed.
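
A minimal way to see the contrast, assuming scikit-learn is available (this example is mine, not the article's): fit a supervised classifier that checks its work against known labels, then run an unsupervised clustering algorithm on the same points with no labels at all.

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, labels = make_blobs(n_samples=200, centers=3, random_state=0)

# Supervised: the model checks its work against the known labels.
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("supervised accuracy:", clf.score(X, labels))

# Unsupervised: no labels at all; the model looks for structure on its own.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("clusters found:", sorted(set(km.labels_)))
```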

Transformers are a special, versatile kind of AI capable of unsupervised learning. They can integrate many different data streams, each with its own changing parameters. Because of this, they're excellent at handling tensors. Tensors, in turn, are great for keeping all that data organized. With the combined powers of tensors and transformers, we can handle more complex datasets.
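
The core operation inside a transformer is attention: every position in a sequence weighs every other position and blends their values accordingly. Here is a minimal NumPy sketch of scaled dot-product self-attention, offered as an illustration rather than any production implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal attention: weight the values V by how well queries Q match keys K."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ V                                # blended values, one row per query

seq = np.random.randn(5, 8)                           # 5 tokens, 8-dim embeddings
out = scaled_dot_product_attention(seq, seq, seq)     # self-attention
print(out.shape)                                      # (5, 8)
```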

Video upscaling and motion smoothing are great applications for AI transformers. Likewise, tensors, which describe changes, are crucial to detecting deepfakes and alterations. With deepfake tools reproducing in the wild, it's a digital arms race.

The person in this image does not exist. This is a deepfake image created by StyleGAN, Nvidia's generative adversarial neural network. Credit: Nvidia

Video signal has high dimensionality, or bit depth. It's made of a series of images, which are themselves composed of a series of coordinates and color values. Mathematically and in computer code, we represent those quantities as matrices or n-dimensional arrays. Helpfully, tensors are great for matrix and array wrangling. DaVinci Resolve, for example, uses tensor processing in its (Nvidia RTX) hardware-accelerated Neural Engine facial recognition utility. Hand those tensors to a transformer, and its powers of unsupervised learning do a great job picking out the curves of motion on-screen, and in real life.
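
As a small illustration of that representation (the clip dimensions are chosen arbitrarily for the example), a video in NumPy is simply a four-dimensional array of frames, rows, columns, and color channels, and frame-to-frame change falls out of ordinary array operations:

```python
import numpy as np

# A short clip as an n-dimensional array: (frames, height, width, channels).
video = np.zeros((30, 144, 256, 3), dtype=np.uint8)   # 30 tiny RGB frames

frame_ten = video[10]            # one image: shape (144, 256, 3)
red_channel = video[..., 0]      # every frame's red values: (30, 144, 256)
motion = np.diff(video.astype(np.int16), axis=0)      # frame-to-frame change over time
print(video.shape, frame_ten.shape, motion.shape)
```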

That ability to track multiple curves against one another is why the tensor-transformer dream team has taken so well to natural language processing. And the approach can generalize. Convolutional transformers, a hybrid of a convolutional neural net and a transformer, excel at image recognition in near real-time. This tech is used today for things like robot search and rescue or assistive image and text recognition, as well as the much more controversial practice of dragnet facial recognition, à la Hong Kong.

The ability to handle a changing mass of data is great for consumer and assistive tech, but it's also clutch for things like mapping the genome and improving drug design. The list goes on. Transformers can also handle different kinds of dimensions, more than just the spatial, which is useful for managing an array of devices or embedded sensors, like weather tracking, traffic routing, or industrial control systems. That's what makes AI so useful for data processing "at the edge." AI can find patterns in data and then respond to them on the fly.

Not only does everyone have a cell phone, there are embedded systems in everything. This proliferation of devices gives rise to an ad hoc global network called the Internet of Things (IoT). In the parlance of embedded systems, the "edge" represents the outermost fringe of end nodes within the collective IoT network.

Edge intelligence takes two primary forms: AI on edge and AI for edge. The distinction is where the processing happens. "AI on edge" refers to network end nodes (everything from consumer devices to cars and industrial control systems) that employ AI to crunch data locally. "AI for the edge" enables edge intelligence by offloading some of the compute demand to the cloud.

In practice, the main differences between the two are latency and horsepower. Local processing is always going to be faster than a data pipeline beholden to ping times. The tradeoff is the computing power available server-side.
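
A hypothetical sketch of that tradeoff in Python (the function names and latency numbers are invented for illustration, not any real runtime's API): route a request to a small on-device model when the latency budget is tight, and offload to a bigger cloud model otherwise.

```python
def classify(sample, deadline_ms, local_model, cloud_endpoint):
    """Toy router: 'AI on the edge' when latency matters, 'AI for the edge' otherwise."""
    CLOUD_LATENCY_MS = 120      # assumed: big server-side model, slow but more capable

    if deadline_ms < CLOUD_LATENCY_MS:
        return local_model(sample)        # process in place on the end node
    return cloud_endpoint(sample)         # offload to server-side horsepower

# Usage with stand-in callables:
result = classify(
    sample={"sensor": [0.1, 0.7]},
    deadline_ms=50,
    local_model=lambda s: "anomaly",
    cloud_endpoint=lambda s: "anomaly (detailed report)",
)
print(result)
```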

Embedded systems, consumer devices, industrial control systems, and other end nodes in the IoT all add up to a monumental volume of information that needs processing. Some phone home, some have to process data in near real-time, and some have to check and correct their work on the fly. Operating in the wild, these physical systems act just like the nodes in a neural net. Their collective throughput is so complex that, in a sense, the IoT has become the AIoT: the artificial intelligence of things.

As devices get cheaper, even the tiny slips of silicon that run low-end embedded systems have surprising computing power. But having a computer in a thing doesn't necessarily make it smarter. Everything's got Wi-Fi or Bluetooth now. Some of it is really cool. Some of it is made of bees. If I forget to leave the door open on my front-loading washing machine, I can tell it to run a cleaning cycle from my phone. But the IoT is already a well-known security nightmare. Parasitic global botnets exist that live in consumer routers. Hardware failures can cascade, like the Great Northeast Blackout of the summer of 2003 or when Texas froze solid in 2021. We also live in a timeline where a faulty firmware update can brick your shoes.

There's a common pipeline (hypeline?) in tech innovation. When some Silicon Valley startup invents a widget, it goes from idea to hype train to widgets-as-a-service to disappointment, before finally figuring out what the widget's good for.

This is why we lampoon the IoT with loving names like the Internet of Shitty Things and the Internet of Stings. (Internet of Stings devices communicate over TCBee-IP.) But the AIoT isn't something anyone can sell. It's more than the sum of its parts. The AIoT is a set of emergent properties that we have to manage if we're going to avoid an explosion of splinternets, and keep the world operating in real time.

In a nutshell, artificial intelligence is often the same as a neural net capable of machine learning. They're both software that can run on whatever CPU or GPU is available and powerful enough. Neural nets often have the power to perform machine learning via back-propagation.

There's also a kind of hybrid hardware-and-software neural net that brings a new meaning to "machine learning." It's made using tensors, ASICs, and neuromorphic engineering by Intel. Furthermore, the emergent collective intelligence of the IoT has created a demand for AI on, and for, the edge. Hopefully, we can do it justice.


Forget Dystopian Scenarios: AI Is Pervasive Today, and the Risks Are Often Hidden – The Good Men Project

By Anjana Susarla, Michigan State University

The turmoil at ChatGPT-maker OpenAI, bookended by the board of directors firing high-profile CEO Sam Altman on Nov. 17, 2023, and rehiring him just four days later, has put a spotlight on artificial intelligence safety and concerns about the rapid development of artificial general intelligence, or AGI. AGI is loosely defined as human-level intelligence across a range of tasks.

The OpenAI board stated that Altman's termination was for a lack of candor, but speculation has centered on a rift between Altman and members of the board over concerns that OpenAI's remarkable growth (products such as ChatGPT and Dall-E have acquired hundreds of millions of users worldwide) has hindered the company's ability to focus on catastrophic risks posed by AGI.

OpenAI's goal of developing AGI has become entwined with the idea of AI acquiring superintelligent capabilities and the need to safeguard against the technology being misused or going rogue. But for now, AGI and its attendant risks are speculative. Task-specific forms of AI, meanwhile, are very real, have become widespread and often fly under the radar.

As a researcher of information systems and responsible AI, I study how these everyday algorithms work and how they can harm people.

AI plays a visible part in many people's daily lives, from face recognition unlocking your phone to speech recognition powering your digital assistant. It also plays roles you might be only vaguely aware of, for example, shaping your social media and online shopping sessions, guiding your video-watching choices and matching you with a driver in a ride-sharing service.

AI also affects your life in ways that might completely escape your notice. If you're applying for a job, many employers use AI in the hiring process. Your bosses might be using it to identify employees who are likely to quit. If you're applying for a loan, odds are your bank is using AI to decide whether to grant it. If you're being treated for a medical condition, your health care providers might use it to assess your medical images. And if you know someone caught up in the criminal justice system, AI could well play a role in determining the course of their life.

Many of the AI systems that fly under the radar have biases that can cause harm. For example, machine learning methods use inductive logic, starting with a set of premises and generalizing patterns from the training data. A machine learning-based resume screening tool was found to be biased against women because its training data reflected past practices, when most resumes were submitted by men.

The use of predictive methods in areas ranging from health care to child welfare can exhibit biases, such as cohort bias, that lead to unequal risk assessments across different groups in society. Even when legal practices prohibit discrimination based on attributes such as race and gender (for example, in consumer lending), proxy discrimination can still occur. This happens when algorithmic decision-making models do not use characteristics that are legally protected, such as race, and instead use characteristics that are highly correlated or connected with the legally protected characteristic, like neighborhood. Studies have found that risk-equivalent Black and Latino borrowers pay significantly higher interest rates on government-sponsored enterprise securitized and Federal Housing Administration insured loans than white borrowers.

Another form of bias occurs when decision-makers use an algorithm differently from how the algorithm's designers intended. In a well-known example, a neural network learned to associate asthma with a lower risk of death from pneumonia. This was because asthmatics with pneumonia are traditionally given more aggressive treatment that lowers their mortality risk compared to the overall population. However, if the outcome from such a neural network is used in hospital bed allocation, then those with asthma who are admitted with pneumonia would be dangerously deprioritized.

Biases from algorithms can also result from complex societal feedback loops. For example, when predicting recidivism, authorities attempt to predict which people convicted of crimes are likely to commit crimes again. But the data used to train predictive algorithms is actually about who is likely to get re-arrested.

The Biden administration's recent executive order and enforcement efforts by federal agencies such as the Federal Trade Commission are the first steps in recognizing and safeguarding against algorithmic harms.

And though large language models, such as GPT-3 that powers ChatGPT, and multimodal large language models, such as GPT-4, are steps on the road toward artificial general intelligence, they are also algorithms people are increasingly using in school, work and daily life. It's important to consider the biases that result from widespread use of large language models.

For example, these models could exhibit biases resulting from negative stereotyping involving gender, race or religion, as well as biases in representation of minorities and disabled people. As these models demonstrate the ability to outperform humans on tests such as the bar exam, I believe that they require greater scrutiny to ensure that AI-augmented work conforms to standards of transparency, accuracy and source crediting, and that stakeholders have the authority to enforce such standards.

Ultimately, who wins and loses from large-scale deployment of AI may not be about rogue superintelligence, but about understanding who is vulnerable when algorithmic decision-making is ubiquitous.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Era of AI: 2023’s Landmark Year – CMSWire


As we approach the end of another year, it's becoming increasingly clear that we are navigating through the burgeoning era of AI, a time that is reminiscent of the early days of the internet, yet poised with a transformative potential far beyond. While we might still be at what could be called the "AOL stages" of AI development, the pace of progress has been relentless, with new applications and capabilities emerging daily, reshaping every facet of our lives and businesses.

In a manner once attributed to divine influence and later to the internet itself, AI has become a pervasive force: it touches everything it changes and, indeed, changes everything it touches. This article will recap the events that impacted the world of AI in 2023, including the evolution and growth of AI, regulations, legislation and petitions, the saga of Sam Altman, and the pursuit of Artificial General Intelligence (AGI).

The latest chapter in the saga of AI began late last year, on Nov. 30, 2022, when OpenAI announced the release of ChatGPT 3.5, the second major release of the GPT language model capable of generating human-like text, which signified a major step in improving how we communicate with machines. Since then, it's been a very busy year for AI, and there has rarely been a week that hasn't seen some announcement relating to it.

The first half of 2023 was marked by a series of significant developments in the field of AI, reflecting the rapid pace of innovation and its growing impact across various sectors. So far, the rest of the year hasn't shown any signs of slowing down. In fact, the emergence of AI applications across industries seems to have increased its pace. Here is an abbreviated timeline of the major AI news of the year:

February 13, 2023: Stanford scholars developed DetectGPT, the first in a forthcoming line of tools designed to differentiate between human and AI-generated text, addressing the need for oversight in an era where discerning the source of information is crucial. The tool came after the release of ChatGPT 3.5 prompted teachers and professors to become alarmed at the potential of ChatGPT to be used for cheating.

February 23, 2023: The launch of an open-source project called AgentGPT, which runs in a browser and uses OpenAI's ChatGPT to execute complex tasks, further demonstrated the versatility and practical applications of AI.

February 24, 2023: Meta, formerly known as Facebook, launched Llama, a large language model with 65 billion parameters, setting new benchmarks in the AI industry.

March 14, 2023: OpenAI released GPT 4, a significantly enhanced model over its predecessor, ChatGPT 3.5, raising discussions in the AI community about the potential inadvertent achievement of Artificial General Intelligence (AGI).

March 20, 2023: Studies examined the responses of GPT 3.5 and GPT 4 to clinical questions, highlighting the need for refinement and evaluation before relying on AI language models in healthcare. GPT 4 outperformed previous models, achieving an average score of 86.65% and 86.7% on the Self-Assessment and Sample Exam of the USMLE tests, with GPT 3.5 achieving 53.61% and 58.78%.

March 21, 2023: Google's AI push included the release of Bard, a ChatGPT competitor, alongside other significant announcements about its forthcoming large language models and integrations into Google Workspace and Gmail.

March 21, 2023: Nvidia's announcement of Picasso Cloud Services for creating large language and visual models, aimed at larger enterprises, underscored the increasing interest of major companies in AI technologies.

March 23, 2023: OpenAI's launch of Plugins for GPT expanded the capabilities of GPT models, allowing them to connect to third-party services via an API.

March 30, 2023: AutoGPT was released, with the capability to execute and improve its responses to prompts autonomously. This advancement showcased a significant step toward greater autonomy in AI systems, and the tool could be installed on users' local PCs, letting individuals run a large language model-driven AI application from home.

April 4, 2023: An unsurprising study discovered that participants could only differentiate between human and AI-generated text with about 50% accuracy, similar to random chance.

April 13, 2023: AWS announced Bedrock, a service making foundation models from various AI labs accessible via an API, streamlining the development and scaling of generative AI-based applications.

May 23, 2023: OpenAI revealed plans to enhance ChatGPT with web browsing capabilities using Microsoft Bing and additional plugins, features that would initially become available to ChatGPT Plus subscribers.

July 18, 2023: In a study, ChatGPT, particularly GPT 4, was found to be able to outperform medical students in responding to complex clinical care exam questions.

August 6, 2023: The EU AI Act, announced on this day, was one of the world's first legal frameworks for AI, and saw major developments and negotiations in 2023, with potential global implications, though it was still being hashed out in mid-December.

September 8, 2023: A study revealed that AI detectors, designed to identify AI-generated content, exhibit low reliability, especially for content created by non-native English speakers, raising ethical concerns. This has been an ongoing concern for both teachers and students, as these tools regularly present original content as being produced by AI, and AI-generated content as being original.

September 21, 2023: OpenAI announced that Dall-E 3, its text-to-image generation tool, would soon be available to ChatGPT Plus users.

November 4, 2023: Elon Musk announced the latest addition to the world of generative AI: Grok. Musk said that Grok promises to "break the mold of conventional AI," responding with provocative answers and insights and welcoming all manner of queries.

November 21, 2023: Microsoft unveiled Bing Chat 2.0, now called Copilot, a major upgrade to its own chatbot platform, which leverages a hybrid approach combining generative and retrieval-based models to provide more accurate and diverse responses.

November 22, 2023: With the release of Claude 2.1, Anthropic announced an expansion in Claude's capabilities, enabling it to analyze large volumes of text rapidly, a development favorably compared to the capabilities of ChatGPT.

December 6, 2023: Google announced its OpenAI rival, Gemini, a multimodal model that can generalize and seamlessly understand, operate across and combine different types of information, including text, images, audio, video and code.

These were only a small portion of 2023's AI achievements and events, as nearly every week a new generative AI-driven application was announced, including specialized AI-driven chatbots for specific use cases, applications, and industries. There was also regular news of interactions with and uses of AI, AI jailbreaks, predictions about the potential dystopian future it may bring, proposals for regulations, legislation and guardrails, and petitions to stop developing the technology.

Shubham A. Mishra, co-founder and global CEO at AI marketing pioneer Pixis, told CMSWire that in 2023, the world focused on building the technology and democratizing it. "We saw people use it, consume it, and transform it into the most effective use cases to the point that it has now become a companion for them," said Mishra. "It has become such an integral part of its user's day-to-day functions that they don't even realize they are consuming it."

"Many view 2023 as the year of generative AI, but we are only beginning to tap into the potential applications of the technology. We are still trying to harness the full potential of generative AI across different use cases. In 2024, the industry will witness major shifts, be it a rise or fall in users and applications," said Mishra. "There may be a rise in the number of users, but there will also be a second wave of generative AI innovations where there will be an incremental rise in its applications."


Anthony Yell, chief creative officer at interactive agency Razorfish, told CMSWire that he and his team have seen generative AI stand out by democratizing creativity, making it more accessible and enhancing the potential for those with skills and experience to reach new creative heights. "This technology has introduced the concept of a 'creative partner' or 'creative co-pilot,' revolutionizing our interaction with creative processes."

Yell believes that this era is about marrying groundbreaking creativity with responsible innovation, ensuring that AI's potential is harnessed in a way that respects brand identity and maintains consumer trust. This desire for responsibility and trust is something that is core to the acceptance of what has been and will continue to be a very disruptive technology. As such, 2023 has included many milestones in the quest for AI responsibility, safety, regulations, ethics, and controls. Here are some of the most impactful regulatory AI events in 2023.

February 28, 2023: Former Google engineer Blake Lemoine, who was fired in 2022 after going to the press with claims that Google's LaMDA is actually sentient, was back in the news doubling down on his claim.

March 22, 2023: A group of technology and business leaders, including Elon Musk, Steve Wozniak and tech leaders from Meta, Google and Microsoft, signed an open letter hosted by the Future of Life Institute urging AI organizations to pause new developments in AI, citing risks to society. The letter stated that "we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT 4."

May 16, 2023: Sam Altman, CEO and co-founder of OpenAI, urged members of Congress to regulate AI, citing the inherent risks posed by the technology.

May 30, 2023: AI industry leaders and researchers signed a statement hosted by the Center for AI Safety warning of the extinction risk posed by AI. The statement said that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," and was signed by OpenAI CEO Sam Altman, Geoffrey Hinton, Google DeepMind and Anthropic executives and researchers, Microsoft CTO Kevin Scott, and security expert Bruce Schneier.

October 30, 2023: President Biden signed the sweeping Executive Order on Artificial Intelligence, designed to establish new standards for AI safety and security, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership around the world.

November 14, 2023: The DHS Cybersecurity and Infrastructure Security Agency (CISA) released its initial Roadmap for Artificial Intelligence, leading the way to ensure safe and secure AI development in the future. The CISA AI roadmap came in response to President Biden's October 2023 Executive Order on Artificial Intelligence.

December 11, 2023: The European Commission and the bloc's 27 member countries reached a deal on the world's first comprehensive AI rules, opening the door for the legal oversight of AI technology.

Rubab Rizvi, chief data scientist at Brainchild, a media agency affiliated with the Publicis Groupe, told CMSWire that from predictive analytics to seamless automation, the rapid embrace of AI has not only elevated efficiency but has also opened new frontiers for innovation, shaping a dynamic landscape that keeps us on our toes and fuels the excitement of what's to come.

"The generative AI we've come to embrace in 2023 hasn't just been about enhancing personalization," she said. "It's becoming your digital best friend, offering tailored experiences that elevate brand engagement to a new level. This calls for proper governance and guardrails. As generative AI can potentially expose new, previously inaccessible data, we must ensure that we are disciplined in protecting ourselves and our unstructured data." Rizvi aptly reiterated what many have said throughout the year: "Don't blindly trust the machine."


OpenAI was the organization that officially started the era of AI with the announcement and introduction of ChatGPT 3.5 in 2022. In the year that followed, OpenAI worked ceaselessly to continue the evolution of AI, and has been no stranger to its share of both conspiracies and controversies. This came to a head late in the year, when the organization surprised everyone with news regarding its CEO, Sam Altman.

November 17, 2023: The board of OpenAI fired co-founder and CEO Sam Altman, stating that a review found he was "not consistently candid in his communications" and that "the board no longer has confidence in his ability to continue leading OpenAI."

November 20, 2023: Microsoft hired former OpenAI CEO Sam Altman and co-founder Greg Brockman, with Microsoft CEO Satya Nadella announcing that Altman and Brockman would be joining to lead Microsoft's new advanced AI research team, and that Altman would become CEO of the new group.

November 22, 2023: OpenAI rehired Sam Altman as its CEO, stating that it had "reached an agreement in principle for Sam Altman to return to OpenAI as CEO," along with significant changes in its non-profit board.

November 24, 2023: It was reported that prior to Altman's firing, OpenAI researchers sent a letter to the board of directors warning of a new AI discovery that posed potential risks to humanity. The discovery, referred to as Project Q*, was said to be a breakthrough in the pursuit of AGI, and reportedly influenced the board's firing of Sam Altman over concerns that he was rushing to commercialize the new advancement without fully understanding its implications.

AGI, something Microsoft has since said could take decades to achieve, is an advanced form of AI characterized by self-learning capabilities and proficiency across a wide range of tasks, and its pursuit stands as a cornerstone objective in the AI field. The quest for AGI seeks to develop machines that mirror human intelligence, with the ability to understand, learn, and adeptly apply knowledge across diverse contexts, potentially surpassing human performance in various domains.

Reflecting on 2023, we have witnessed a landmark year in AI, marked by groundbreaking advancements. Amidst these innovations, the year has also been pivotal in addressing the ethical, safety, and regulatory aspects of AI. As we conclude the year, the progress in AI not only showcases human ingenuity but also sets the stage for future challenges and opportunities, emphasizing the need for responsible stewardship of this transformative yet disruptive technology.


OpenAI’s six-member board will decide ‘when we’ve attained AGI’ – VentureBeat


According to OpenAI, the six members of its nonprofit board of directors will determine when the company has attained AGI, which it defines as a "highly autonomous system that outperforms humans at most economically valuable work." Thanks to a for-profit arm that is legally bound to pursue the Nonprofit's mission, once the board decides AGI, or artificial general intelligence, has been reached, such a system will be excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

But as the very definition of artificial general intelligence is far from agreed-upon, what does it mean to have a half-dozen people deciding on whether or not AGI has been reached for OpenAI, and therefore, the world? And what will the timing and context of that possible future decision mean for its biggest investor, Microsoft?

The information was included in a thread on X over the weekend by OpenAI developer advocate Logan Kilpatrick. Kilpatrick was responding to a comment by Microsoft president Brad Smith, who at a recent panel with Meta chief scientist Yann LeCun tried to frame OpenAI as more trustworthy because of its nonprofit status even though the Wall Street Journal recently reported that OpenAI is seeking a new valuation of up to $90 billion in a sale of existing shares.

Smith said: "Meta is owned by shareholders. OpenAI is owned by a non-profit. Which would you have more confidence in? Getting your technology from a non-profit or a for-profit company that is entirely controlled by one human being?"


In his thread, Kilpatrick quoted from the "Our structure" page on OpenAI's website, which offers details about OpenAI's complex nonprofit/capped-profit structure. According to the page, OpenAI's for-profit subsidiary is fully controlled by the OpenAI nonprofit (which is registered in Delaware). While the for-profit subsidiary, OpenAI Global, LLC (which appears to have replaced the limited partnership OpenAI LP, announced in 2019, about three years after the founding of the original OpenAI nonprofit), is permitted to make and distribute profit, it is subject to the nonprofit's mission.

It certainly sounds like once OpenAI achieves its stated mission of reaching AGI, Microsoft will be out of the loop, even though at last week's OpenAI DevDay, OpenAI CEO Sam Altman told Microsoft CEO Satya Nadella, "I think we have the best partnership in tech... I'm excited for us to build AGI together."

And in a new interview with the Financial Times, Altman said the OpenAI/Microsoft partnership was "working really well" and that he expected "to raise a lot more over time." Asked if Microsoft would keep investing further, Altman said: "I'd hope so... there's a long way to go, and a lot of compute to build out between here and AGI... training expenses are just huge."

From the beginning, OpenAI's structure details say, Microsoft "accepted our capped equity offer and our request to leave AGI technologies and governance for the Nonprofit and the rest of humanity."

An OpenAI spokesperson told VentureBeat that "OpenAI's mission is to build AGI that is safe and beneficial for everyone. Our board governs the company and consults diverse perspectives from outside experts and stakeholders to help inform its thinking and decisions. We nominate and appoint board members based on their skills, experience and perspective on AI technology, policy and safety."

Currently, the OpenAI nonprofit board of directors is made up of chairman and president Greg Brockman, chief scientist Ilya Sutskever, and CEO Sam Altman, as well as non-employees Adam D'Angelo, Tasha McCauley, and Helen Toner.

D'Angelo, who is CEO of Quora, as well as tech entrepreneur McCauley and Toner, who is director of strategy for the Center for Security and Emerging Technology at Georgetown University, have all been tied to the Effective Altruism movement, which came under fire earlier this year for its ties to Sam Bankman-Fried and FTX, as well as its dangerous take on AI safety. And OpenAI has long had its own ties to EA: for example, in March 2017, OpenAI received a grant of $30 million from Open Philanthropy, which is funded by Effective Altruists. And Jan Leike, who leads OpenAI's superalignment team, reportedly identifies with the EA movement.

The OpenAI spokesperson said that "none of our board members are effective altruists," adding that non-employee board members "are not effective altruists; their interactions with the EA community are focused on topics related to AI safety or to offer the perspective of someone not closely involved in the group."

Suzy Fulton, who offers outsourced general counsel and legal services to startups and emerging companies in the tech sector, told VentureBeat that while in many circumstances it would be unusual to have a board make this AGI determination, OpenAI's nonprofit board owes its fiduciary duty to supporting its mission of providing safe AGI that is broadly beneficial.

"They believe the nonprofit board's beneficiary is humanity, whereas the for-profit one serves its investors," she explained. "Another safeguard that they are trying to build in is having the board majority independent, where the majority of the members do not have equity in OpenAI."

"Was this the right way to set up an entity structure and a board to make this critical determination? We may not know the answer until their board calls it," Fulton said.

Anthony Casey, a professor at The University of Chicago Law School, agreed that having the board decide something as operationally specific as AGI is unusual, but he did not think there is any legal impediment.

"It should be fine to specifically identify certain issues that must be made at the board level," he said. "Indeed, if an issue is important enough, corporate law generally imposes a duty on the directors to exercise oversight on that issue, particularly mission-critical issues."

Not all experts believe, however, that artificial general intelligence is coming anytime soon, while some question whether it is even possible.

According to Merve Hickok, president of the Center for AI and Digital Policy, which filed a claim with the FTC in March saying the agency should investigate OpenAI and order the company to halt the release of GPT models until necessary safeguards are established, OpenAI as an organization suffers from a lack of diversity of perspectives. Its focus on AGI, she explained, has ignored the current impact of AI models and tools.

However, she disagreed with any debate about the size or diversity of the OpenAI board in the context of who gets to determine whether or not OpenAI has attained AGI, saying it distracts from discussions about whether the underlying mission and claim is even legitimate.

"This would shift the focus and de facto legitimize the claims that AGI is possible," she said.

But does OpenAI's lack of a clear definition of AGI (or of whether there will even be one AGI) skirt the issue? For example, an OpenAI blog post from February 2023 said the first AGI "will be just a point along the continuum of intelligence." And in a January 2023 LessWrong interview, CEO Sam Altman said that "the future I would like to see is where access to AI is super democratized, where there are several AGIs in the world that can help allow for multiple viewpoints and not have anyone get too powerful."

Still, it's hard to say what OpenAI's vague definition of AGI will really mean for Microsoft, especially without full details about the operating agreement between the two companies. For example, Casey said, OpenAI's structure and relationship with Microsoft could lead to a big dispute if OpenAI is sincere about its non-profit mission.

There are a few nonprofits that own for-profits, he pointed out, the most notable being the Hershey Trust. "But they wholly own the for-profit. In that case, it is easy because there is no minority shareholder to object," he explained. "But here Microsoft's for-profit interests could directly conflict with the non-profit interest of the controlling entity."

The cap on profits is easy to implement, he added, but the hard question is what to do if meeting the maximum profit conflicts with the mission of the non-profit. Casey added that default rules would say that hitting the profit is the priority and that the managers have to put that first (subject to broad discretion under the business judgment rule).

Perhaps, he continued, Microsoft said, "Don't worry, we are good either way. You don't owe us any duties." But "that just doesn't sound like the way Microsoft would negotiate."



The Most Important AI Innovations of 2024 | by AI News | Dec, 2023 – DataDrivenInvestor


In the fast-paced realm of artificial intelligence (AI), 2024 promises to be a transformative year, marking a profound shift in our understanding of AI capabilities and their real-world applications. While some developments will be the culmination of years of progress, others will emerge as groundbreaking innovations. In this article, we'll explore the most important AI innovations expected to define 2024.

The term multimodality may sound technical, but its implications are revolutionary. In essence, it refers to an AI system's ability to process diverse types of data, extending beyond text to include images, video, audio, and more. In 2023, the public witnessed the debut of powerful multimodal AI models, with OpenAI's GPT-4 leading the way. This model allows users to upload not only text but also images, enabling the AI to see and interpret visual content.

Google DeepMind's Gemini, unveiled in December, further advanced multimodality, showcasing the model's capacity to work with images and audio. This breakthrough opens doors to endless possibilities, such as seeking dinner suggestions based on a photo of your fridge contents. According to Shane Legg, co-founder of Google DeepMind, the shift towards fully multimodal AI marks a significant landmark, indicating a more grounded understanding of the world.

The promise of multimodality extends beyond mere utility; it enables models to be trained on diverse data sets, including images, video, and audio. This wealth of information enhances the models' capabilities, propelling them towards the ultimate goal of artificial general intelligence that matches human intellect.


Game-playing DeepMind AI can beat top humans at chess, Go and poker – New Scientist

Shall we play a game? Credit: mccool/Alamy

A single artificial intelligence can beat human players in chess, Go, poker and other games that require a variety of strategies to win. The AI, called Student of Games, was created by Google DeepMind, which says it is a step towards an artificial general intelligence capable of carrying out any task with superhuman performance.

Martin Schmid, who worked at DeepMind on the AI but is now at a start-up called EquiLibre Technologies, says that the Student of Games (SoG) model can trace its lineage back to two projects. One was DeepStack, the AI created by a team including Schmid at the University of Alberta in Canada, which was the first to beat human professional players at poker. The other was DeepMind's AlphaZero, which has beaten the best human players at games like chess and Go.

The difference between those two models is that one focused on imperfect-knowledge games, those where players don't know the state of all other players, such as their hands in poker, while the other focused on perfect-knowledge games like chess, where both players can see the position of all pieces at all times. The two require fundamentally different approaches. DeepMind hired the whole DeepStack team with the aim of building a model that could generalise across both types of game, which led to the creation of SoG.

Schmid says that SoG begins as a blueprint for how to learn games, and then improves at them through practice. This starter model can then be set loose on different games and teach itself how to play against another version of itself, learning new strategies and gradually becoming more capable. But while DeepMind's previous AlphaZero could adapt only to perfect-knowledge games, SoG can adapt to both perfect and imperfect-knowledge games, making it far more generalisable.

The researchers tested SoG on chess, Go, Texas hold'em poker and a board game called Scotland Yard, as well as Leduc hold'em poker and a custom-made version of Scotland Yard with a different board, and found that it could beat several existing AI models and human players. Schmid says it should be able to learn to play other games as well. "There's many games that you can just throw at it and it would be really, really good at it."

This wide-ranging ability comes at a slight cost in performance compared with DeepMinds more specialised algorithms, but SoG can nonetheless easily beat even the best human players at most games it learns. Schmid says that SoG learns to play against itself in order to improve at games, but also to explore the range of possible scenarios from the present state of a game even if it is playing an imperfect-knowledge one.

"When you're in a game like poker, it's so much harder to figure out: how the hell am I going to search [for the best strategic next move in a game] if I don't know what cards the opponent holds?" says Schmid. "So there was some set of ideas coming from AlphaZero, and some set of ideas coming from DeepStack into this big, big mix of ideas, which is Student of Games."

Michael Rovatsos at the University of Edinburgh, UK, who wasn't involved in the research, says that while impressive, there is still a very long way to go before an AI can be thought of as generally intelligent, because games are settings in which all rules and behaviours are clearly defined, unlike the real world.

"The important thing to highlight here is that it's a controlled, self-contained, artificial environment where what everything means, and what the outcome of every action is, is crystal clear," he says. "The problem is a toy problem because, while it may be very complicated, it's not real."


Read more here:

Game-playing DeepMind AI can beat top humans at chess, Go and poker - New Scientist

Sam Altman Seems to Imply That OpenAI Is Building God – Futurism

Ever since becoming CEO of OpenAI in 2019, cofounder Sam Altman has made the company's number one mission to build an "artificial general intelligence" (AGI) that is both "safe" and can benefit "all of humanity."

And while we haven't really come to an agreement on what would actually count as AGI, Altman's own vision remains as lofty as it is vague.

Take this new interview with the Financial Times where Altman dished on the upcoming GPT-5 and described AGI as a "magic intelligence in the sky," which sounds an awful lot like he's implying his company is building a God-like entity.

OpenAI's own definition of AGI is a "system that outperforms humans at most economically valuable work," a far more down-to-earth description of what amounts to an omnipotent "superintelligence" for Altman.

In an interview with The Atlantic earlier this year, Altman painted a rosy and speculative vision of an AGI-powered future, describing a utopian society in which "robots that use solar power for energy can go and mine and refine all of the minerals that they need," all without requiring the input of "human labor."

And Altman isn't the only one invoking the language of a God-like AI in the sky.

"We're creating God," an AI engineer working on large language models told Vanity Fair in September. "We're creating conscious machines."

In April, Tesla CEO and OpenAI cofounder Elon Musk, who recently launched his own AI chatbot called Grok despite warning for many years about the possibility of an evil AI outsmarting humans and taking over the world, told Fox News that Google founder Larry Page "wanted a sort of digital super-intelligence" which would eventually become "basically a digital god, if you will, as soon as possible."

"The reason Open AI exists at all is that Larry Page and I used to be close friends and I would stay at his house in Palo Alto and I would talk to him late in the night about AI safety," Musk added. "At least my perception was that Larry was not taking AI safety seriously enough."

Musk ragequit OpenAI in 2018 over disagreements with the company's direction, a year before Altman was appointed CEO.

For someone so dead-set on AGI, the only trouble is that Altman still sometimes sounds very hazy on the details.

"The vision is to make AGI, figure out how to make it safe... and figure out the benefits," he told the FT, in a vague statement that lacks the degree of specificity you'd expect from the head of a company talking about its number one goal.

But to keep the ball rolling in the meantime, Altman told the newspaper that OpenAI will likely ask Microsoft for even more money, following a $10 billion investment by the tech giant earlier this year.

"There's a long way to go, and a lot of compute to build out between here and AGI," he told the FT, arguing that "training expenses are just huge."

OpenAI is also conveniently allowing its own board to decide when we've reached AGI, according to the company's website, suggesting there's clearly plenty of wriggle room when it comes to an already hard-to-pin-down topic.

Whether we'll all be witness to a divine ascension of technology or, heck, a robot that can help middle schoolers with their homework remains unclear at best.

Even Altman seemingly has yet to figure out what the "magic intelligence in the sky" will mean for modern society.

But one thing is for certain: it'll be an extremely expensive endeavor, and he's looking for more investment.

More on AGI: Google AI Chief Says There's a 50% Chance We'll Hit AGI in Just 5 Years

More here:

Sam Altman Seems to Imply That OpenAI Is Building God - Futurism

Artificial intelligence: the world is waking up to the risks – InCyber

All these documents refer to the risks linked to Artificial General Intelligence (AGI), which is level 2 of AI. Today's artificial intelligence, including generative AI systems like ChatGPT, falls within Artificial Narrow Intelligence (ANI), which is level 1. This artificial intelligence can do a single activity as well as a human, perhaps even better.

AGI and its level 3 successor, Artificial Super Intelligence (ASI), are AIs that can accomplish all informational activities to a quality level that equals or exceeds what humans can produce. Currently, the expert consensus is that AGI could arrive between 2030 and 2040. Tomorrow, basically.

These documents point to major risks for humanity, but are they right to warn us of these dangers? The answer is clearly yes. I urge you to read all five documents, but if you were to read just one, it would be the one by this group of 30 experts.

This excerpt gives the general tone of the document: "AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or even extinction of humanity." It coolly suggests the extinction of mankind! The three ensuing documents mostly resemble each other: they are very general declarations of intent, full of goodwill but with little real impact.

They were published by the United Nations, the G7 as well as the Bletchley Summit, an international meeting organized by the United Kingdom that was held on November 1 and 2, 2023.

No one will argue against the ideas expressed in the Bletchley Declaration, signed by 28 countries with widely divergent interests, including the United States, China, India, Israel, Saudi Arabia and the European Union: the recognition of the need to take account of human rights protection, transparency and explicability, fairness, accountability, regulation, security, appropriate human oversight, ethics, bias mitigation, privacy and data protection.

The fifth document is different: it is an executive order signed by Joe Biden on October 30, 2023. In 60 pages, the US president lists a hundred specific actions to be taken and, for each, the executive order names the public authorities in charge of carrying them out. Furthermore, the timetable is restrictive, with most of these actions being given between 45 and 365 days to be completed. It is far from a catalogue of good intentions: it demonstrates the United States' clear desire to do everything it can to maintain its global leadership in AI.

The European Commission has been working on AI since 2020. In June 2023, it published a document, EU Legislation in Progress, detailing work on a European Artificial Intelligence Act (AIA) to follow the Digital Services Act and the Digital Markets Act. The AIA must now be submitted to the Member States, who can make changes before its final approval. No one knows how long this could take.

To summarize, can we imagine what the future might hold for collaboration between humankind and AGI and ASI? If we are to believe Rich Sutton, professor at the University of Alberta in Canada and a recognized specialist in artificial intelligence, humanity must inevitably prepare to hand over the reins to AI, as he has argued in recent lectures.

My recommendation: the challenges posed by the rapid arrival of AGIs and ASIs are among the questions that require quick reflection from directors of all organizations, public and private.

Furthermore, the best AI specialists are often asked, "What is humanity's future in a world where AI performs better than humans?" The common answer? "I don't know." But that is no reason not to think about it, all together, and very quickly.

See the original post:

Artificial intelligence: the world is waking up to the risks - InCyber

How to win the artificial general intelligence race and not end … – The Strategist

In 2016, I witnessed DeepMind's artificial-intelligence model AlphaGo defeat Go champion Lee Sedol in Seoul. That event was a milestone, demonstrating that an AI model could beat one of the world's greatest Go players, a feat that was thought to be impossible. Not only was the model making clever strategic moves but, at times, those moves were beautiful in a very deep and humanlike way.

Other scientists and world leaders took note and, seven years later, the race to control AI and its governance is on. Over the past month, US President Joe Biden has issued an executive order on AI safety, the G7 announced the Hiroshima AI Process and 28 countries signed the Bletchley Declaration at the UKs AI Safety Summit. Even the Chinese Communist Party is seeking to carve out its own leadership role with the Global AI Governance Initiative.

These developments indicate that governments are starting to take the potential benefits and risks of AI equally seriously. But as the security implications of AI become clearer, it's vital that democracies outcompete authoritarian political systems to ensure future AI models reflect democratic values and are not concentrated in institutions beholden to the whims of dictators. At the same time, countries must proceed cautiously, with adequate guardrails, and shut down unsafe AI projects when necessary.

Whether AI models will outperform humans in the near future and pose existential risks is a contentious question. For some researchers who have studied these technologies for decades, the performance of AI models like AlphaGo and ChatGPT are evidence that the general foundations for human-level AI have been achieved and that an AI system thats more intelligent than humans across a range of tasks will likely be deployed within our lifetimes. Those systems are known as artificial general intelligence (AGI), artificial superintelligence or general AI.

For example, most AI models now use neural networks, an old machine-learning technique created in the 1940s that was inspired by the biological neural networks of animal brains. The abilities of modern neural networks like AlphaGo weren't fully appreciated until computer chips used mostly for gaming and video rendering, known as graphics processing units, became powerful enough in the 21st century to process the computations needed for specific human-level tasks.

The next step towards AGI was the arrival of large language models, such as OpenAI's GPT-4, which are created using a version of neural networks known as transformers. OpenAI's previous version of its chatbot, GPT-3, surprised everyone in 2020 by generating text that was indistinguishable from that written by people and performing a range of language-based tasks with few or no examples. GPT-4, the latest model, has demonstrated human-level reasoning capabilities and outperformed human test-takers on the US bar exam, a notoriously difficult test for lawyers. Future iterations are expected to have the ability to understand, learn and apply knowledge at a level equal to, or beyond, humans across all useful tasks.
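For readers curious what "transformers" refers to, the sketch below implements scaled dot-product self-attention, the core operation such models stack many times. It is a simplified illustration in Python with NumPy, not OpenAI's implementation, and it omits multiple heads, masking and the many stacked layers of a real model.

# Minimal sketch of scaled dot-product self-attention, the core operation
# of the transformer architecture behind large language models.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)    # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq, Wk, Wv: learned projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # how strongly each token attends to the others
    weights = softmax(scores, axis=-1)         # attention distribution per token
    return weights @ V                         # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # -> (4, 8): one updated vector per token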

AGI would be the most disruptive technology humanity has created. An AI system that can automate human analytical thinking, creativity and communication at a large scale and generate insights, content and reports from huge datasets would bring about enormous social and economic change. It would be our generation's Oppenheimer moment, only with strategic impacts beyond just military and security applications. The first country to successfully deploy it would have significant advantages in every scientific and economic activity across almost all industries. For those reasons, long-term geopolitical competition between liberal democracies and authoritarian countries is fuelling an arms race to develop and control AGI.

At the core of this race is ideological competition, which pushes governments to support the development of AGI in their country first, since the technology will likely reflect the values of the inventor and set the standards for future applications. This raises important questions about what world views we want AGIs to express. Should an AGI value freedom of political expression above social stability? Or should it align itself with a rule-by-law or rule-of-law society? With our current methods, researchers don't even know if it's possible to predetermine those values in AGI systems before they're created.

It's promising that universities, corporations and civil research groups in democracies are leading the development of AGI so far. Companies like OpenAI, Anthropic and DeepMind are household names and have been working closely with the US government to consider a range of AI safety policies. But startups, large corporations and research teams developing AGI in China, under the authoritarian rule of the CCP, are quickly catching up and pose significant competition. China certainly has the talent, the resources and the intent but faces additional regulatory hurdles and a lack of high-quality, open-source Chinese-language datasets. In addition, large language models threaten the CCP's monopoly on domestic information control by offering alternative worldviews to state propaganda.

Nonetheless, we shouldn't underestimate the capacity of Chinese entrepreneurs to innovate under difficult regulatory conditions. If a research team in China, subject to the CCP's National Intelligence Law, were to develop and tame AGI or near-AGI capabilities first, it would further entrench the party's power to repress its domestic population and its ability to interfere with the sovereignty of other countries. China's state security system or the People's Liberation Army could deploy it to supercharge their cyberespionage operations or automate the discovery of zero-day vulnerabilities. The Chinese government could embed it as a superhuman adviser in its bureaucracies to make better operational, military, economic or foreign-policy decisions and propaganda. Chinese companies could sell their AGI services to foreign government departments and companies with back doors into their systems, or covertly suppress content and topics abroad at the direction of Chinese security services.

At the same time, an unfettered AGI arms race between democratic and authoritarian systems could exacerbate various existential risks, either by enabling future malign use by state and non-state actors or through poor alignment of the AI's own objectives. AGI could, for instance, lower the impediments for savvy malicious actors to develop bioweapons or supercharge disinformation and influence operations. An AGI could itself become destructive if it pursues poorly described goals or takes shortcuts such as deceiving humans to achieve goals more efficiently.

When Meta trained Cicero to play the board game Diplomacy honestly by generating only messages that reflected its intention in each interaction, analysts noted that it could still withhold information about its true intentions or not inform other players when its intentions changed. These are serious considerations with immediate risks and have led many AI experts and people who study existential risk to call for a pause on advanced AI research. But policymakers worldwide are unlikely to stop given the strong incentives to be a first mover.

This all may sound futuristic, but it's not as far away as you might think. In a 2022 survey, 352 AI experts put a 50% chance of human-level machine intelligence arriving within 37 years, that is, by 2059. The forecasting community on the crowd-sourced platform Metaculus, which has a robust track record of AI-related forecasts, is even more confident of the imminent development of AGI. The aggregation of more than 1,000 forecasters suggests 2032 as the likely year general AI systems will be devised, tested and publicly announced. But that's just the current estimate: experts and the amateurs on Metaculus have shortened their timelines each year as new AI breakthroughs are publicly announced.

That means democracies have a lead time of between 10 and 40 years to prepare for the development of AGI. The key challenge will be how to prevent AI existential risks while innovating faster than authoritarian political systems.

First, policymakers in democracies must attract global AI talent, including from China and Russia, to help align AGI models with democratic values. Talent is also needed within government policymaking departments and think tanks to assess AGI implications and build the bureaucratic capacity to rapidly adapt to future developments.

Second, governments should be proactively monitoring all AGI research and development activity and should pass legislation that allows regulators to shut down or pause exceptionally risky projects. We should remember that Beijing has more to worry about with regard to AI alignment because the CCP is too worried about its own political safety to relax its strict rules on AI development.

We therefore shouldn't see government involvement only in terms of its potential to slow us down. At a minimum, all countries, including the US and China, should be transparent about their AGI research and advances. That should include publicly disclosing their funding for AGI research and safety policies and identifying their leading AGI developers.

Third, liberal democracies must collectively maintain as large a lead as possible in AI development and further restrict access to high-end technology, intellectual property, strategic datasets and foreign investments in China's AI and national-security industries. Impeding the CCP's AI development in its military, security and intelligence industries is also morally justifiable in preventing human rights violations.

For example, Midu, an AI company based in Shanghai that supports China's propaganda and public-security work, recently announced the use of large language models to automate reporting on public opinion analysis to support surveillance of online users. While China's access to advanced US technologies and investment has been restricted, other like-minded countries such as Australia should implement similar outbound investment controls on China's AI and national-security industries.

Finally, governments should create incentives for the market to develop safe AGI and solve the alignment problem. Technical research on AI capabilities is outpacing technical research on AI alignment, and companies are failing to put their money where their mouth is. Governments should create prizes for research teams or individuals to solve difficult AI alignment problems. One potential model is the Clay Institute's Millennium Prize Problems, which offer awards for solutions to some of the world's most difficult mathematics problems.

Australia is an attractive destination for global talent and is already home to many AI safety researchers. The Australian government should capitalise on this advantage to become an international hub for AI safety and alignment research. The Department of Industry, Science and Resources should set up the worlds first AGI prize fund with at least $100 million to be awarded to the first global research team to align AGI safely.

The National Artificial Intelligence Centre should oversee a board that manages this fund and works with the research community to create a list of conditions and review mechanisms for awarding the prize. With $100 million, the board could adopt a similar investment mandate to Australia's Future Fund and target an average annual return of at least the consumer price index plus 4 to 5% over the long term. Instead of being reinvested into the fund, the 4 to 5% accrued each year on top of CPI should be used as smaller awards for incremental achievements in AI research each year. These awards could also be used to fund AI PhD scholarships or attract AI postdocs to Australia. Other awards could be given to research, including research conducted outside Australia, in annual award ceremonies, like the Nobel Prize, which would bring together global experts on AI to share knowledge and progress.
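As a rough back-of-the-envelope illustration of that mandate (assuming, purely for the sake of the example, 3% CPI and a 4.5% real return), the annual award pool would work out as follows:

# Back-of-the-envelope sketch of the proposed prize fund's annual award pool.
# The CPI and return figures are assumptions for illustration only.
fund = 100_000_000          # proposed AGI prize fund (AUD)
cpi = 0.03                  # assumed inflation rate
real_return = 0.045         # assumed return above CPI (midpoint of the 4-5% target)

nominal_return = fund * (cpi + real_return)
reinvested = fund * cpi                 # kept in the fund to preserve its real value
annual_awards = fund * real_return      # paid out as smaller yearly prizes

print(f"Nominal return: ${nominal_return:,.0f}")
print(f"Reinvested to preserve capital: ${reinvested:,.0f}")
print(f"Available for annual awards: ${annual_awards:,.0f}")   # roughly $4.5 million a year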

A $100 million fund may seem like a lot for AI research but, as a comparison, Microsoft is rumoured to have invested US$10 billion into OpenAI this year alone. And $100 million pales in comparison to the contribution safely aligned AGI would make to the national economy.

The stakes are high for getting AGI right. If properly aligned and developed, it could bring an epoch of unimaginable human prosperity and enlightenment. But AGI projects pursued recklessly could pose real risks of creating dangerous superhuman AI systems or bringing about global catastrophes. Democracies must not cede leadership of AGI development to authoritarian systems, but nor should they rush to secure a Pyrrhic victory by going ahead with models that fail to embed respect for human rights, liberal values and basic safety.

This tricky balance between innovation and safety is the reason policymakers, intelligence agencies, industry, civil society and researchers must work together to shape the future of AGIs and cooperate with the global community to navigate an uncertain period of elevated human-extinction risks.

Read the original here:

How to win the artificial general intelligence race and not end ... - The Strategist

AI 2023: risks, regulation & an ‘existential threat to humanity’ – RTE.ie

Opinion: AI's quickening pace of development has led to a plethora of coverage and concern over what might come next

These days the public is inundated with news stories about the rise of artificial intelligence and the ever quickening pace of development in the field. The last year has been particularly eventful in this regard, and the most noteworthy stories came as ChatGPT was introduced to the world in November 2022.

This is one of many Generative AI systems which can almost instantaneously create text on any topic, in any style, of any length, and at a human level of performance. Of course, the text might not be factual, nor might it make sense, but it almost always does.

ChatGPT is a "large language model". It's large in that it has been trained on enormous amounts of text, almost everything that is available in a computer-readable form, and it produces extremely sophisticated output of a level of competence we would expect of a human. This can be seen as a big sibling to the predictive text system on your smartphone that helps by predicting the next word you might want to type.
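A toy illustration of that "predict the next word" idea, far simpler than anything inside ChatGPT, is a bigram model that simply counts which word tends to follow which in some training text:

# Toy illustration of next-word prediction: a bigram model counts which word
# follows which in a training text, then predicts the most likely next word.
# Large language models do something far more sophisticated over entire
# passages, but the underlying task, predicting what comes next, is the same.
from collections import Counter, defaultdict

text = "the cat sat on the mat . the cat chased a dog . the cat likes the mat ."
words = text.split()

following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # -> 'cat' (the word seen most often after 'the')
print(predict_next("sat"))   # -> 'on'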


From RTÉ 2fm's Dave Fanning Show, Prof Barry O'Sullivan on the rise of AI

But ChatGPT doesn't do this just at word level; it works at the level of entire passages of text. It can also compose answers to complex queries from the user. For example, given the prompt "how can I make something that flies from cardboard?", ChatGPT answers with clear instructions, explaining the principles of flight that can be utilised and how to incorporate them into your design.

The most powerful AI systems, those using machine learning, are built using huge amounts of data. Arthur C. Clarke said that "any sufficiently advanced technology is indistinguishable from magic". For many years now, there has been growing evidence that the manner in which these systems are created can have considerable negative consequences. For example, AI systems have been shown to replicate and magnify human biases. Some AI systems have been shown to amplify gender and racial biases, often due to hidden biases in the data used to train them. They have also been shown to be brittle in the sense that they can be easily fooled by carefully formulated or manipulated queries.

AI systems have also been built to perform tasks that raise considerable ethical questions, such as predicting the sexual orientation of individuals. There is growing concern about the impact of AI on employment and the future of work. Will AI automate so many tasks that entire jobs will disappear, and will this lead to an unemployment crisis? These risks are often referred to as the "short-term" risks of AI. On the back of issues like these, there is a considerable focus on the ethics of AI, on how AI can be made trustworthy and safe, and on the many international initiatives related to the regulation of AI.


From RTÉ Radio 1's Morning Ireland, Prof Barry O'Sullivan discusses an open letter signed by key figures in artificial intelligence who want powerful AI systems to be suspended amid fears of a threat to humanity.

We have recently also seen a considerable focus on the "long-term" risks of AI, which tend to be far more dystopian. Some believe that general purpose AI and, ultimately, artificial general intelligence are on the horizon. Today's AI systems, often referred to as "narrow AI systems", tend to be capable of performing one task well, such as navigation, movie recommendation, production scheduling or medical diagnosis.

On the other hand, general purpose AI systems can perform many different tasks at a human-level of performance. Take a step further and artificial general intelligence systems would be able to perform all the tasks that a human can and with far greater reliability.

Whether we will ever get to that point, or even if we really would want to, is a matter of debate in the AI community and beyond. However, these systems will introduce a variety of risks, including the extreme situation where AI systems will be so advanced that they would pose an existential threat to humanity. Those who argue that we should be concerned about these risks sometimes compare artificial general intelligence to an alien race, that the existence of this extraordinarily advanced technology would be tantamount to us living with an advanced race of super-human aliens.


From RTÉ Radio 1's This Week: fears over AI becoming too powerful and endangering humans have been a regular sci-fi theme in film and TV for decades, but could they become a reality?

While I strongly believe that we need to address both short-term and long-term risks associated with AI, we should not let the dystopian elements distract our focus from the very real issues raised by AI today. In terms of existential threat to humanity, the clear and present danger comes from climate change rather than artificial general intelligence. We already see the impacts of climate change across the globe and throughout society. Flooding, impacts on food production and the risks to human wellbeing are real and immediate concerns.

Just like the role AI played in the discovery of the Covid-19 vaccines, the technology has a lot to offer in dealing with climate change. For almost two decades, the field of computational sustainability has applied the methods of artificial intelligence, data science, mathematics and computer science to the challenges of balancing societal, economic and environmental resources to secure the future well-being of humanity, very much addressing the Sustainable Development Goals agenda.

AI has been used to design sustainable and climate-friendly policies. It has been used to efficiently manage fisheries and plan and monitor natural resources and industrial production. Rather than being seen as an existential threat to humanity, AI should be seen as a tool to help with the greatest threat there exists to humanity today: climate change.

Of course, we cannot let AI develop in a way that is without guardrails and without proper oversight. I am confident that, given the active debate about the risks of AI and the regulatory frameworks being put in place internationally, we will tame the genie that is AI.

Prof Barry O'Sullivan appears on Game Changer: AI & You, which airs on RTÉ One at 10:15pm tonight

The views expressed here are those of the author and do not represent or reflect the views of RTÉ

More:

AI 2023: risks, regulation & an 'existential threat to humanity' - RTE.ie

Europe’s weaknesses, opportunities facing the AI revolution – EURACTIV

From the regulatory approach currently under discussion to the geopolitical risks of AI, Europe's challenges vis-a-vis Artificial Intelligence are many. The think tank network PromethEUs presented its paper on AI on Tuesday (14 November), focusing on the EU's AI Act, generative AI, and AI and businesses.

The network includes four Southern European think tanks: the Institute for Competitiveness from Italy, the Elcano Royal Institute from Spain, the Foundation for Economic and Industrial Research from Greece, and the Institute of Public Policy from Portugal.

For the presentation of its latest study, experts and stakeholders gathered in Brussels to discuss the possible road ahead for Europe's future competitiveness in this field.

The EU's AI Act is a flagship legislative proposal and the world's first attempt to regulate Artificial Intelligence using a risk-based approach.

"The definition of AI, as strange as it may sound, is still under discussion in the trilogue," said Steffen Hoernig, professor at Nova School of Business and Economics, adding that it is important to be able to decide which types of systems fall under the AI Act.

Euractiv understands that EU policymakers have been waiting for the Organisation for Economic Co-operation and Development (OECD) to update its definition of AI.

Hoernig said that discussions are ongoing about the file, such as which risk category biometric AI belongs under, or the establishment of an AI Board or an AI Office. National positions differ, especially on the latter, Hoernig noted.

He said a big issue is the question of foundation models and general-purpose AI, pointing out that ChatGPT was introduced after the proposal was drafted, so it is not covered in the text.

Last Friday, Euractiv reported that France and Germany, under pressure from their leading AI startups, were pushing against obligations for foundation models, leading to strong political frictions with MEPs, who want to regulate these models.

Hoernig believes that national interests in some countries are taking priority over the interests of the EU when it comes to the regulation, and that the question of how hyperscale AI systems should be defined remains open.

Stefano da Empoli, president of the Institute for Competitiveness, argued that, while generative AI systems like the chatbot ChatGPT may be the most visible to users, the term also refers to other tools.

The study focuses on Italy, Spain, Greece, and Portugal, which are at the bottom of the ranking in terms of using generative AI compared to Nordic EU countries. More than a third of the generative AI startups in Europe are located in the UK.

At the same time, da Empoli emphasised that investments in this disruptive technology have been put slightly on the sidelines because they are more in the hands of the member states.

Raquel Jorge, a policy analyst at the Elcano Royal Institute, explained that, in terms of security, "what we have identified is that generative AI will present security risks, but we are not quite sure that it will create new threats," adding that instead, it looks like it will amplify existing threats.

"When it comes down to the dual-use applications of generative AI, there is some doubt about the military usage," she said.

Jorge also noted that while it may seem that NATO keeps away from the EU's reality, in July, NATO's Data and Artificial Intelligence Review Board hosted a private event related to generative AI.

Aggelos Tsakanikas, an associate professor at the National Technical University of Athens, said they aimed to measure the impact of AI on businesses for entrepreneurship and assess the policies implemented in the four countries of the PromethEUs network.

The research showed, for example, that there is a shortage of specialists in Spain, while in Greece, there are startup activities related to AI.

Tsakanikas agreed with Hoernig that the work of defining AI is still ongoing, but added that it is also a question of how businesses use it.

"We need to have a very strict definition of what exactly we are measuring when we are trying to see the diffusion of AI in the business sector," he said.

A SWOT (strengths, weaknesses, opportunities, and threats) analysis has been conducted for the paper, discussing all the major issues related to AI, such as non-qualified workers, political resistance, and economic costs, Tsakanikas explained.

[Edited by Luca Bertuzzi/Zoran Radosavljevic]

Read more:

Europe's weaknesses, opportunities facing the AI revolution - EURACTIV

How the AI Executive Order and OMB memo introduce … – Brookings Institution

President Biden recently signed the Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. With sections on privacy, content verification, and immigration of tech workers (to name just a few areas), the executive order is sweeping. Encouragingly, it introduces key guardrails for the use of AI and takes important steps to protect people's rights. It is also inherently limited: Unlike acts of Congress, executive actions cannot create new agencies or grant new regulatory powers over private companies. (They can also be undone by the next president.) The EO was followed two days later by a draft memorandum, now open for public comment, from the Office of Management and Budget (OMB) with additional guidance for the federal government to manage risks and mandate accountability while advancing innovation in AI. Taken together, these two government directives offer one of the most detailed pictures of how governments should establish rules and guidance around AI.

Notably, these actions towards accountability focus on current harms and not existential risk, and thus can serve as useful guides to policymakers focused on the everyday concerns of their constituents. Beyond executive action, with its inherent limits, the next step will be for other policymakers, from Congress to the states, to use these documents as a guide for future action in requiring accountability in the use of AI.

As we analyze the EO and the OMB memo alongside each other for accountability directions, here is what stands out:

Impact on government use of AI

The executive order (in Section 10.1(b)) gives explicit guidance to federal agencies for using AI in ways that protect safety and rights. The section outlines contents of the draft OMB memo released for public comment two days after the EO. In what may become a model for AI governance from localities, to states, to international governing agreements, the OMB memo, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, requires specific AI guardrails.

Critically, the memo includes definitions of safety- and rights-impacting AI as well as lists of systems presumed to be safety- and rights-impacting. This approach builds on work done over the past decade to document the harms of algorithmic systems in mediating critical services and impacting people's vital opportunities. By taking this presumptive approach, rather than requiring agencies to start from scratch with risk assessments on every system, the OMB memo also reduces the administrative burden on agencies and allows decision-makers to move directly to instituting appropriate guardrails and accountability practices. Systems can also be added to or removed from the list based on a conducted risk assessment.

Once an AI system is identified as safety- or rights-impacting, the draft OMB memo specifies a minimum set of practices that must be in place before and during its use. As required by the executive order, these practices build on those identified in the Blueprint for an AI Bill of Rights. This detailed section of the memo leads off with impact assessments and lists three key areas that agencies must assess before a system is put into use: intended purpose and expected benefit; potential risks to a broad range of stakeholder groups; and the quality and appropriateness of the data the AI model is built from. Should the assessing agency conclude that the system's benefits do not meaningfully outweigh the risks, agencies should not use the AI. The memo also directs agencies to assess, through this process, whether the AI system is fit for the task at hand; this is a critical effort to make sure AI actually works, when many times it has been shown not to, and to assess whether AI is the right solution to the given problem, countering the tendency to assume it is.
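Purely as an illustration of the information those three assessment areas involve (the memo itself prescribes no code or data format; the names below are hypothetical), one could sketch an assessment record like this:

# Illustrative only: the three assessment areas captured as a simple record an
# agency team might fill in. The OMB memo prescribes no such format.
from dataclasses import dataclass

@dataclass
class AIImpactAssessment:
    system_name: str
    intended_purpose: str            # what the AI is for and its expected benefit
    stakeholder_risks: list[str]     # potential risks to affected groups
    data_quality_notes: str          # appropriateness of the data the model is built from
    benefits_outweigh_risks: bool = False

    def may_deploy(self) -> bool:
        # Mirrors the memo's logic: if benefits do not meaningfully outweigh
        # the risks, the agency should not use the AI system.
        return self.benefits_outweigh_risks

assessment = AIImpactAssessment(
    system_name="benefits-eligibility screener",        # hypothetical example system
    intended_purpose="triage applications to speed up case review",
    stakeholder_risks=["wrongful denials", "disparate impact on protected groups"],
    data_quality_notes="historical case data; known gaps for rural applicants",
)
print("Cleared for use:", assessment.may_deploy())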

The OMB memo goes on to require a range of accountability processes, including human fallback, the mitigation of new or emerging risks to rights and safety, ongoing assessment throughout a system's lifecycle, assessment for bias, and consultation and feedback from affected groups. Taken together, if carried through to the final version of the memo, these requirements create a remarkable step forward in establishing an accountability ecosystem: not one point of intervention, but many methodologies and practices that, working together over time and at multiple stages in an AI lifecycle, could represent meaningful controls.

Importantly, the OMB memo requires agencies to stop using an AI system if these practices are not in place. The minimum practices additionally include instructions to reconsider use of a system if concerning outcomes, such as discrimination, are found through testing.

Public accountability will be challenging, given the breadth and complexity of these practices. One key accountability mechanism will be annual reporting, as part of an expanded AI use case inventory. However, the details of what will be reported were not included as part of the memorandum and will be determined later by OMB. Journalists and researchers have identified problems with the previous practices of the AI use case inventory, including both that agencies left known AI uses off their inventory and that the reporting requirements were minimal and did not include testing and bias assessment results. Looking forward, the effectiveness of the AI use case inventory as an accountability mechanism will depend on whether existing loopholes and under-reporting concerns are addressed through the OMB process to come. It's also important to consider that the effectiveness of transparency reporting on AI systems as an accountability mechanism has also been more broadly challenged.

Throughout the guidance, OMB refers to requirements for government use of AI. This phrase, importantly, covers both AI that is developed and then used by the federal government, and AI that is procured by the government. By using the power of the government's purse, the guidance also has the potential to influence the private sector as well. OMB also commits to developing further guidance for AI contracts that aligns with what it has laid out so far in this draft memo. That current guidance is rigorous; if those same provisions are successfully required for government purchasing of AI, it will significantly shape how government AI vendors are building and testing their products.

Impact on the private sector

The president only has so many levers to pull through an executive order to regulate private industry. Because the EO cannot make new laws, it relies on existing agency and presidential authorities (and the development of procurement rules described above) to influence how private companies are developing and deploying AI systems. Within that scope, the regulatory impact of the EO on the private sector could still be far-reaching.

The EO directs agencies with enforcement powers to deepen their understanding of their capacities in the context of AI, to coordinate, and to develop guidance and potentially additional regulations to protect civil rights and civil liberties in the broader marketplace, as well as to protect consumers from fraud, discrimination, and other risks, including risks to financial stability, and specifically to protect privacy. Sections 7 through 9 address various aspects of this, starting by directing the attorney general to assemble the heads of federal civil rights offices, including those of enforcement agencies, to determine how to apply and potentially expand the reach of civil rights law across the government to address existing harms.

Additionally, the President calls on Congress to pass federal data privacy protections, and then, through the EO's Section 9, directs agencies to do what they can to protect people's data privacy without Congressional action. The section opener calls out not only AI's facilitation of the collection or use of information about individuals, but also specifically the making of inferences about individuals. This could open up a broader approach to assessing privacy violations, along the lines of networked privacy and associated harms, which considers not only individual personally identifiable information but the inferences that can be drawn by looking at connected data about an individual, or relationships between individuals.

The EO directs agencies to revisit the guidelines for privacy impact assessments in the context of AI, as well as to assess and potentially issue guidelines on the use of privacy-enhancing technologies (PETs), such as differential privacy. Though brief, the EO's privacy section pushes to expand the understanding of data privacy and the remedies that might be taken to address novel and emerging harms. As those ideas move through government, they will inevitably inform potential data protection and privacy laws at the federal and (more likely) state level that will govern private industry.

It's not surprising that generative AI was given prominent treatment in the executive order: systems like ChatGPT that can generate text in response to prompts, and other systems that can generate images, video, or audio, have catapulted concerns about AI into the public consciousness. Concerns have ranged from the technology's potential to replace skilled writers to its reinforcement of degrading stereotypes to the overblown notion that it will end humanity as we know it. Yet these systems are largely created by the private sector, and without new legislation the White House has limited levers to require these companies to act responsibly. There is an unfolding, live debate about whether to treat generative AI systems differently than other AI systems. The EO's authors chose to differentiate generative AI in Section 4, and have drawn criticism for that decision; a better approach may have been the one taken in the OMB memo, where the same protections are required for generative AI as for other AI and the focus is on the potential harms of the system.

To govern generative AI systems, the executive order invokes the Defense Production Act. Introduced during the Korean War and also used for production of masks and ventilators during the COVID pandemic, the Defense Production Act gives the president the authority to expedite and expand industrial production in order to promote national defense. The executive order (in Section 4.2(i)) uses it to require private companies to preemptively test their models for specific safety concerns; it also specifies red-teaming as the testing methodology. Red-teaming is a practice of having a team external to the development of a system (but potentially still within the company) stress-test the system for specific concerns. The executive order requires that companies perform red-teaming in line with guidance from NIST that will be developed per Section 4.1(ii). Companies must report the resulting documentation of safety testing practices and results to the federal government.

This AI accountability model, preemptive testing according to specific standards and associated reporting requirements, is potentially useful. Unfortunately, the specifics in this case leave much to be desired. First, given the use of the Defense Production Act, the testing and reporting the EO requires are limited to concerns relating to national defense and the protection of critical infrastructure, including cybersecurity and bioweapons. Yet as public debate has shown, concerns about generative AI go well beyond these limited settings. Second, the specific definitions used in the executive order to determine which systems must adhere to these standards appear to have been copied wholesale from a policy document put forth by OpenAI and other authors. Its thresholds for model size have little substantive justification; this means that future technological developments may render them under-inclusive or otherwise ineffective in targeting the systems with the most potential for harm. Finally, the executive order positions AI red-teaming as the singular AI accountability mechanism to be used for generative AI, when AI red-teaming works best in combination with other accountability mechanisms. By contrast, the OMB guidance for AI use by the federal government, which will also be required for generative AI, requires multiple accountability mechanisms, including algorithmic impact assessments and public consultation. The full landscape of AI accountability mechanisms should be applied to generative AI by private companies as well.

Consistent with the EO's broad approach, the order addresses AI's worker impacts in multiple ways. First, while research suggests a more complicated picture on technological automation and work, the EO sets out to support workers during an AI transition. To that end, the EO directs the chairman of the president's Council of Economic Advisers to prepare and submit a report to the president on the labor-market effects of AI. Section 6(a)(ii) mandates that the secretary of labor submit to the president a report analyzing how federal agencies may support workers displaced by the adoption of AI and other technological advancements.

Alongside the focus on AI displacement, the EO recognizes that automated decision systems are already in use in the workplace and directs attention to their ongoing impacts on job quality, worker power, and worker health and safety. The most encompassing directive lies in Section 6(b), which directs the secretary of labor, working with other agencies and outside entities, including labor unions and workers, to develop principles and best practices to mitigate harms to employees' well-being. The best practices must cover labor standards and job quality, and the EO further encourages federal agencies to adopt the guidelines in their internal programs.

Section 7.3 of the EO directs the labor department to publish guidance for federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems. Given the overwhelming evidence that algorithmic systems replicate and reinforce human biases, the broad language of other technology-based hiring systems is a major opportunity for the DOL to model standards of nondiscriminatory hiring.

While the EO's worker protections are only guidance and best practices, the OMB memo directly mandates protocols to support workers and their rights when agencies use AI. The memo applies the minimum risk management practices where AI is used to determine the terms and conditions of employment. This broad definition positions the federal government, as the nation's largest employer, to influence the use of AI systems within the workplace. The memo also requires that human remedies are in place in some cases, a requirement that may add jobs, adding complexity to concerns about the labor-market effects of AI. Further, the OMB memo's requirement that federal agencies consult and incorporate feedback from affected groups positions workers and unions to influence the deployment of AI technology, which aligns with calls from civil society and academia to ensure that the people most likely to be affected by a technology have influence over that system's design and deployment.

How will this all get done?

The narrative that the federal government is not knowledgeable about AI systems should be laid to rest by these recent documents. There was clearly a lot of thought put into the design and implementation of a national AI governance model. That said, it's also clear that many more people representing the right mix of expertise will be needed quickly to implement this ambitious plan on the tight timeline laid out in the order, and on the implicit deadline marked by the end of the Biden administration's first term. Given that the EO and the OMB memo collectively run to well over 100 pages of actions that the federal government should take to address AI, the question looms: who will do all this work?

A major new role addressed in both the EO and the OMB memo is that of the Chief AI Officer (CAIO), which every agency head is required to designate within 60 days of the EO's enactment. The CAIO's responsibilities are laid out in the OMB memo and fall into three categories: coordinating agency use of AI, promoting AI innovation, and managing risks from AI use. The way the CAIO role is understood and filled will be critical to what comes next; if agencies interpret the role as solely or primarily a technical one, rather than one focused societally on opportunities and risks related to the public-interest use of AI, they may pursue very different implementation priorities than those articulated by the EO. CAIOs are also responsible for agency-level AI strategies, which are due within one year of the EO's launch. The strategies seem likely to call for increased headcount and new expertise in government.

The EO has anticipated the need both for bringing new talent into the government and for building the skills and capacities of civil servants on AI matters. The federal government has long been criticized for its slow, difficult hiring processes, making it tremendously challenging for an administration to pivot attention to an emerging issue. This administration has tried to preempt this criticism through the announcement of an AI talent surge specified in Section 10.2 of the EO. That section gives OSTP and OMB a spare 45 days to figure out how to get the needed people into government, including through the establishment of a cross-agency AI and Technology Talent Task Force. The federal government has already started some of that recruitment push with the launch of a new AI jobs website.

What is potentially most challenging in recruiting AI talent is identifying the actual skills, capacities, and expertise needed to implement the EO's many angles. While there is a need, of course, for technological talent, much of what the EO calls for, particularly in the area of protecting rights and ensuring safety, requires interdisciplinary expertise. What the EO requires is the creation of new knowledge about how to govern, indeed, about what the role of government is, in an increasingly data-centric and AI-mediated environment. These are questions for teams with a sociotechnical lens, requiring expertise in a range of disciplines, including legal scholarship, the social and behavioral sciences, computer and data science, and often, specific field knowledge: health and human services, the criminal legal system, financial markets and consumer financial protection, and so on. Such skills will especially be key for the second pillar of the administration's talent surge: the growth in regulatory and enforcement capacity needed to keep watch over the powerful AI companies. It's also critical to ensure that these teams are built with attention to equity at the center. Given the broad empirical base that demonstrates the disproportionate harms of AI systems to historically marginalized groups, and the President's declared commitment to advancing racial equity across the federal government, equity in both hiring and as a focus of implementation must be a top priority of all aspects of EO implementation.

As broad as the EO is, there are critical areas of concern that have either been pushed off to later consideration or avoided. For instance, the EO includes a national security carveout, with direction to develop separate guidance in 270 days to address the governance of AI used as a component of a national security system or for military and intelligence purposes; many applications of AI could potentially fall within those criteria. The EO also doesn't take the opportunity to ban specific practices shown to be harmful or ineffective; an example where it could have taken further action is in banning the use of affective computing in law enforcement. The EO addresses the potential for AI to be valuable in climate science and the mitigation of climate change; however, it does nothing about AI's own environmental impact, missing an opportunity to force reporting on energy and water usage by companies creating some of the biggest AI systems. Lastly, the EO sets guidelines for the use of AI by federal agencies and contractors but does not attach any requirements or guidance for recipients of federal grants, such as cities and states.

Finally, the EO addresses research at a number of points throughout the document and references research on a range of topics and through many vehicles, including a National Science Foundation (NSF) Regional Innovation Engine and four NSF AI Research Institutes, to join the 25 already established. Yet the EO doesn't include major new commitments to research funding. A more robust approach to addressing AI research and education in the EO could have been a statement that reframed the national AI research and development field as sociotechnical, rather than purely technical, proactively focused on interdisciplinary approaches that center societal impacts of AI alongside technological advancement. Such a statement would have aligned meaningfully with Vice President Kamala Harris's November 1, 2023 speech at the UK AI Safety Summit, in which she argued for a future where AI is used to advance the public interest.

If the administration is indeed committed to seeing AI in the public interest, as Vice President Harris indicated, its new EO and OMB guidance are the clearest indication of how it intends to meet that ambition: mandating hard accountability to protect rights, regulating private industry, and moving iteratively, so that governance efforts advance alongside the field of sociotechnical research. But the executive branch can only do so much. Ultimately, the EO can be read, among other ways, as a roadmap for Congress to legislate. Additionally, cities, states, and other countries should understand these new documents as direction-setting and could choose to rapidly align their policies with these documents to create more comprehensive rights and safety protections.

Continue reading here:

How the AI Executive Order and OMB memo introduce ... - Brookings Institution

How AI Ecosystems Are Transforming the Future of Business – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

Over the past few years, AI technologies have begun to connect with each other, creating a more advanced and powerful system known as the Open AI Ecosystem. This ecosystem has the ability to connect all of our technologies together, whether they're analyzing data, images or experimental results. The interrelationship between AI, the internet and data can unlock unlimited potential to increase productivity, improve living standards and build a better society for years to come.

AI ecosystems enable businesses to leverage the power of AI in various domains and applications, such as customer service, marketing, sales, operations, finance and more. AI ecosystems also help businesses to innovate faster, optimize costs, enhance customer experience and create new value propositions.

However, building and maintaining an AI ecosystem is not an easy task. It requires a clear vision, a strategic roadmap, a collaborative culture, a robust infrastructure and a skilled workforce. Businesses must also be aware of the challenges and risks associated with AI ecosystems, such as ethical issues, data privacy, security, governance and regulation.

Here's how AI ecosystems are transforming the future of business.


Businesses play a crucial role in shaping and leveraging AI ecosystems. Businesses can create and share data with other entities in the ecosystem to enable data-driven decision-making, innovation and collaboration. For example, OpenAI, a research organization dedicated to creating artificial general intelligence (AGI), has created GPT-3, one of the world's most advanced natural language processing (NLP) models.

GPT-3 can generate coherent and relevant texts on any topic based on a given prompt. OpenAI has made GPT-3 available to other researchers and developers through its OpenAI API, which allows them to access the model and create various applications using natural language.
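For a sense of what that access looks like in practice, here is a minimal, hypothetical sketch of calling a hosted OpenAI model from Python. It assumes the openai package (v1-style client), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name and prompt; treat it as a sketch of the pattern, not OpenAI's official quickstart.

```python
# Minimal sketch: calling a hosted OpenAI language model from an application.
# Assumptions: the `openai` Python package (v1-style client) is installed and
# an API key is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def draft_reply(customer_message: str) -> str:
    """Generate a short customer-service reply from a natural-language prompt."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name; substitute whichever model you have access to
        messages=[
            {"role": "system", "content": "You are a polite customer-support assistant."},
            {"role": "user", "content": customer_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_reply("My order arrived damaged. What should I do?"))
```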

Businesses can also develop and deploy algorithms to perform various data tasks and functions. For example, Netflix, one of the leading streaming platforms in the world, uses algorithms to personalize its content recommendations for each user based on their preferences, behavior and feedback. Netflix also uses algorithms to optimize its content production, distribution and marketing strategies.

Services are the outcomes of AI ecosystems, and businesses can provide and consume services enabled by data and algorithms. For example, Amazon, one of the largest e-commerce platforms in the world, provides various services to its customers using AI technologies, such as voice assistant (Alexa), delivery drones (Prime Air) and smart home devices (Echo).

Businesses can also help shape AI by building and maintaining infrastructure supporting the ecosystem's data collection, storage, processing, analysis and transmission. For example, Google, one of the leading technology companies in the world, has built and maintained a massive infrastructure that powers its search engine, email service (Gmail), video platform (YouTube), etc. Google also provides infrastructure services to other entities in the ecosystem through its cloud platform (Google Cloud).

By playing these roles, businesses can shape and leverage AI ecosystems to create value for themselves, their customers and society.

Related: How AI Is Being Used to Increase Transparency and Accountability in the Workplace

One of the most common ways AI ecosystems help businesses today is by enhancing customer experience. Customer support is one area of interest: AI-driven self-service tools such as chatbots and knowledge bases can offer 24/7 assistance, facilitating personalized, relevant and timely service.

In the information-intensive financial sector especially, large-model technology offers a wealth of application scenarios, from risk control to efficiency gains. In the investment domain, large models could give securities firms a "smart brain": using deep learning and machine learning techniques, such a system can analyze massive amounts of historical data and real-time market conditions, surface key industry information, and predict risks more accurately to help investors make decisions.

Related: What Will It Take to Build a Truly Ethical AI? These 3 Tips Can Help.

AI ecosystems can also help businesses improve operational efficiency by automating, optimizing and streamlining various processes and tasks. Smart manufacturing uses emerging, advanced technologies like AI to increase the efficiency of traditional manufacturing processes. For example, Siemens and Microsoft are harnessing the collaborative power of generative AI to help industrial companies drive innovation and efficiency across product design, engineering, manufacturing and the operational lifecycle.

Another common area in which AI shows highly promising capabilities is innovation. Specifically, AI systems can drive innovation and growth by enabling new products, services, markets and business models. In the digital health industry, for example, AI has shown great innovation potential. AI-enabled smart baby cribs use multimodal sensors to accurately monitor vital signs like breathing and heart rate 24/7, without wearable devices. With intelligent cameras, they can also identify abnormal situations such as crying or nasal congestion, promptly send real-time risk alerts, and proactively detect safety and health concerns, alleviating the anxieties of new parents.

Similarly, in other industries, such as vehicle manufacturing, the potential of AI is equally evident. Tesla, one of the leading electric vehicle manufacturers in the world, uses AI technologies to create self-driving cars that can learn from their environment and improve over time. Tesla also uses AI technologies to design and produce its batteries, solar panels and power grids.

Lastly, businesses are using AI ecosystems to solve social and environmental problems by providing solutions that can benefit humanity and the planet. Smart technologies can be used to create intelligent tools for ensuring water and food security and smarter food transactions. Intelligent solutions can also help optimize energy efficiency and monitor greenhouse emissions.

By adopting AI ecosystems, businesses can gain a competitive edge, increase efficiency and quality, and create value for their stakeholders, customers and society. AI ecosystems are both a technological trend and a strategic imperative for businesses that want to thrive in the digital age. As more enterprises and governmental organizations enter the race, and with Moore's law still in full swing, we can confidently expect significant innovation over the next decade, with the potential to disrupt every sector of business operations.

Originally posted here:

How AI Ecosystems Are Transforming the Future of Business - Entrepreneur

Understanding Artificial Intelligence: Definition, Applications, and … – Medium

Artificial Intelligence (AI) refers to computer systems' capabilities to perform intricate tasks that traditionally demanded human intellect, such as problem-solving, decision-making, and reasoning. Today, the term AI encompasses a broad spectrum of technologies powering services and products that significantly influence our daily lives, from recommendation apps for TV shows to real-time customer support via chatbots. Yet the question persists: do these technologies genuinely embody the envisioned concept of artificial intelligence? If not, why is the term applied so ubiquitously? This article delves into the essence of artificial intelligence, its functionality and diverse types, and a glance at its potential perils and rewards, elucidating pathways for furthering knowledge through flexible educational courses.

Artificial Intelligence Defined: AI encapsulates the theory and development of computer systems adept at performing tasks historically reliant on human intelligence, including speech recognition, decision-making, and pattern identification. This all-encompassing term spans various technologies like machine learning, deep learning, and natural language processing (NLP). However, debate lingers on whether current technologies categorically constitute true artificial intelligence or merely denote highly sophisticated machine learning, perceived as an initial stride towards achieving artificial general intelligence (AGI).

Present AI Landscape: While philosophical disparities persist regarding the existence of truly intelligent machines, contemporary use of the term AI mostly refers to machine learning-fueled technologies such as ChatGPT or computer vision, which enable machines to accomplish erstwhile human-exclusive tasks like content generation, autonomous driving, or data analysis.

Illustrative AI Applications: Though humanoid AI entities akin to characters in science fiction remain elusive, encounters with machine learning-powered services or devices are commonplace. These range from systems making music suggestions, optimizing travel routes, and translating languages (e.g., Google Translate) to personalized content recommendations (e.g., Netflix) and self-driving capabilities in vehicles like Tesla's cars.

AI in Diverse Industries: AI pervades multiple sectors, revolutionizing operations by automating tasks without human intervention. Examples include fraud detection in finance, which leverages AI's data analysis prowess, and healthcare's deployment of AI-driven robotics to facilitate surgeries near sensitive organs, curbing risks like blood loss or infection.

Unveiling Artificial General Intelligence (AGI): AGI embodies the theoretical point at which computer systems attain or surpass human intelligence. Recognizing true AGI's advent remains a point of contention, with the Turing Test proposed by Alan Turing in 1950 often cited as a benchmark for machine intelligence. Despite claims of early forms of AGI, skepticism lingers among researchers regarding its achievement.

The 4 AI Paradigms: In a bid to comprehend intelligence and consciousness in AI, scholars commonly delineate four AI types: reactive machines, limited-memory systems, theory-of-mind AI, and self-aware AI.

AI's Prospects and Perils: AI's transformative potential across various domains comes with an array of benefits and concerns. While promising greater accuracy, cost efficiencies, personalized services, and enhanced decision-making, AI also raises alarms about job displacement, biases in training data, cybersecurity threats, opaque decision-making processes, and the potential for misinformation and regulatory breaches.

In Conclusion: AI's multifaceted impacts demand a balanced perspective. Its capabilities and implications underscore the importance of responsible implementation. Understanding AI's nuances is crucial, for wielding such power entails commensurate responsibility.

See original here:

Understanding Artificial Intelligence: Definition, Applications, and ... - Medium

The impact of AI and Language Models – Girton College

Girton College's Supernumerary Fellow, Professor Ted Briscoe, and PhD student Austin Tripp presented their pioneering AI research into large language models and using AI to design molecules at our recent Fellows' Research Evening. Discover more about what their talks focused on and their impact below.

Professor Ted Briscoe: "Large Language Models (like ChatGPT): The Hype and the Reality"

Professor Briscoe's talk focused on how ChatGPT has exposed an unprecedented number of people to cutting-edge natural language processing using large language models. It has also ignited a vigorous and often overblown public debate over the potential benefits, risks and capabilities of generative AI. In the talk he explained the differences between 'small' and large language models, and showed via examples that, despite their impressive fluency and some 'emergent' capabilities like translation and question answering, they do not yet fully learn the mapping between form and meaning encoded in the grammar of individual languages, often struggle to resolve pronoun references, and fail to infer the discourse relations between sentences. As such, they represent an impressive and useful step change in language processing capabilities if used with care, but artificial general intelligence remains a challenging and elusive goal that will likely require a significantly different type of model.

Ted has worked on statistical and robust parsing algorithms, computational approaches to lexicon acquisition and to representation of lexical, syntactic and semantic knowledge, textual information extraction from scientific articles and regulatory documents, models of human language learning and processing, and evolutionary models of language development and change. His recent work has mostly focussed on NLP and ML techniques in support of language learning.

Original post:

The impact of AI and Language Models - Girton College

Startup gaining investment traction for AI clinician productivity tool – Mobihealth News

Melbourne-based health tech startup Heidi Health has raised A$10 million ($6.5 million) in a Series A funding round led by Blackbird Ventures.

Hostplus, Hesta, Wormhole Capital, Archangel Ventures, Possible Ventures and Saniel Ventures also participated in this investing round.

This brings its total investments to date to A$15 million ($9.7 million); Blackbird Ventures also led its seed funding round in 2021, which attracted A$5 million ($3 million).

WHAT THEY DO

Formerly Oscer, Heidi was founded just two years ago by a vascular surgery registrar, Dr Thomas Kelly, alongside Waleed Mussa and Yu Liu. They aim to develop AI-powered software that will improve patient experience while enhancing clinicians' working conditions.

Its flagship product, Heidi Clinician, leverages "artificial general intelligence" to automate tedious administrative tasks for clinicians. These include gathering histories, building ward round lists, performing clinical audits, writing clinical notes, creating documents, optimising discharge summaries for billings and processing referrals. Offered as either an off-the-shelf or an enterprise white-label solution, Heidi Clinician has already been adopted by 100 GPs in 30 clinics across Australia.

WHAT IT'S FOR

Based on a media release, Heidi will use its fresh funds to develop Heidi Clinician further and to get more clinics and GPs in Australia to use its solution.

The startup connects directly with clinicians to offer its product. It also seeks partnerships with organisations and other software companies to embed its AI offering into their products.

"We're all about creating awesome and easy-to-share experiences, especially with multiplayer features that let clinicians share with their colleagues. For big companies, it's about using our already-working healthcare AI system, adding their own data to make Heidi even better, and selling the upgraded version to their existing customers," Dr Thomas Kelly told Mobihealth News, further explaining their go-to-market strategy.

It also plans to use its new funds to expand its team of doctors, designers, and engineers.

WHY IT MATTERS

Australia is facing a shortfall of around 10,600 GPs and a 58% increase in demand for GP services by the end of the decade, according to projections from the Australian Medical Association. Australian clinicians are said to be spending up to twice as much time on paperwork and administrative tasks as on providing essential care and services. This harms patient outcomes and also contributes to clinician burnout.

"You're overrun with patients and there are never enough hours in the day. My time as a doctor was so often wasted doing paper referrals, waiting on hold or filling in copious amounts of documentation to satisfy the government's requirements for some piece of Medicare funding," Dr Kelly said, sharing her anecdote.

Meanwhile, it was also observed that not many junior doctors are choosing to become GPs, creating a "crippling burden on our GPs."

Heidi is taking a shot at these issues with Heidi Clinician. "[We are using] AI to automate the administrative components of care and better orchestrate our clinician resources with our patient population."

According to Dr Kelly, some GP users of the AI solution are saving between one and two hours of documentation time. Several psychologists and occupational therapists have also reported cutting the time needed to generate their detailed reports by a third, from three weeks down to two.

"Heidi Clinician is superpower for clinicians, and the first expression of our vision to change the world powered by Heidi's consult data," she emphasised.

MARKET SNAPSHOT

AI, particularly generative AI, is top of mind for healthcare providers today when considering solutions for raising staff productivity. These tools could bring up to $13 billion in value to the healthcare sector in Australia by 2030, according to Microsoft.

With so many genAI-powered solutions coming onto the market lately, including Ubie Medical Navi, Sculpted AI by Pieces Technologies, SayHeart, and AI4Rx's MedBeat HealthConnect, it can be challenging to stay ahead of the pack.

Explaining how they intend to set Heidi apart from its competition, Dr Kelly said: "We're founded by a clinician (me) and have been building in healthcare since 2019. We understand the data, security, privacy, and compliance challenges of using [large language models] in a healthcare setting."

"We believe this is one of the few industries where a sustainable moat can be built at every level of the AI product in AI safety at the model level, in the application layer with the most amazing product innovations like My Additions that lets you add things that aren't said out loud, and in the [go-to-market] motion creating groundswell around our approach to this space. As with any great idea, there'll be heaps of folks building similar things; we just have to be the best."

ON THE RECORD

"We desperately need a safe path to scale the most scarce resource in our healthcare system clinicians. Heidi's AI allows clinicians to spend less time on administrative tasks, and more time on what matters most: to foster enduring relationships with their patients and invest in preventative care," commented Michael Tolo, general partner at Blackbird Ventures, who led Heidi's Series A funding round.

View post:

Startup gaining investment traction for AI clinician productivity tool - Mobihealth News

What OpenAI’s latest batch of chips says about the future of AI – Quartz

OpenAI has received a coveted order of H100 chips and is expecting more soon, CEO Sam Altman said in a Nov. 13 interview with the Financial Times, adding that next year already looks like it's going to be better when it comes to securing more chips.

One could say that the level of attention on AI chatbots like OpenAI's ChatGPT and Google's Bard this year matches the amount of focus on Nvidia's $40,000 H100 chips. OpenAI, like many other AI companies, uses Nvidia's latest chips to train its models.

OpenAI's procurement of more chips signals that more sophisticated AI models, which go beyond powering the current generation of chatbots, will be ready in the near future.

Generative AI systems are trained on vast amounts of data to generate complex responses to questions, and that requires a lot of computing power. Enter Nvidia's H100 chips, which are tailored for generative AI and run much faster than previous chip models. "The more powerful the chips, the faster you can process queries," Willy Shih, a professor at Harvard Business School, previously told Quartz.

In the background, startups, chip rivals like AMD, and Big Tech companies like Google and Amazon have been working on building more efficient chips tailored to AI applications to meet the demand, but none so far have been able to outperform Nvidia.

Such intense demand for a specific chip from one company has created somewhat of a buying frenzy for Nvidia, and it's not just tech companies racing to snap up these hot chips; governments and venture capital firms are chomping at the bit too. But if OpenAI was able to obtain its order, perhaps that tide is finally turning, and the flow of chips to AI businesses is improving.

And while Nvidia reigns, just last week Prateek Kathpal, the CEO of SymphonyAI Industrial, which is building AI chatbots for internal use within manufacturers, told Quartz that although its AI applications run on Nvidia's chips, the company has also been in discussions with AMD and Arm about their technology.

OpenAI's growing chip inventory means a couple of things.

The H100 chips will help power the company's next AI model, GPT-5, which Altman said is currently in the works. The new model will require more data to train on, which will come from both publicly available information and proprietary intel from companies, he told the Financial Times. GPT-5 will likely be more sophisticated than its predecessors, although it's not clear what it will do that GPT-4 can't, he added.

Altman did not disclose a timeline for the release of GPT-5. But the quick succession of releases, with GPT-4 coming just eight months ago, following the release of its predecessor GPT-3 in 2020, highlights a rapid development cycle.

The procurement of more chips also suggests that the company is getting closer to creating artificial general intelligence, or AGI for short, which is an AI system that can essentially accomplish any task that human beings can do.

Read the rest here:

What OpenAI's latest batch of chips says about the future of AI - Quartz

ChatGPT or not ChatGPT? That was the question, briefly, as … – GeekWire

Microsoft CEO Satya Nadella, right, on stage with OpenAI CEO Sam Altman at OpenAI Dev Day in San Francisco this week. (GeekWire Photo / Todd Bishop)

A brief restriction on employees' ability to use ChatGPT inside Microsoft triggered at least one report Thursday that the tech giant was taking a curious approach to its multibillion-dollar investment in OpenAI.

Microsoft cited security and data concerns in an update on an internal website as it cut off AI tools such as ChatGPT for employee use, CNBC reported.

But the lockout was brief and apparently unintentional, related to a large language model test being conducted by Microsoft.

"We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees," a Microsoft spokesperson said in an emailed statement to GeekWire. "We restored service shortly after we identified our error."

The spokesperson said that Microsoft encourages its employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.

ChatGPT provides sophisticated answers and detailed information in response to natural language queries. OpenAI said this week that the tool, which has more than 100 million users, was experiencing outages due to a targeted attack.

The situation with Microsoft had OpenAI CEO Sam Altman joking on X about retaliation rumors, as he posted the CNBC story.

It was all love earlier this week when Altman and Microsoft CEO Satya Nadella shared a stage at OpenAIs developer event in San Francisco on Monday.

"We love you guys," Nadella told Altman, saying the OpenAI partnership "requires us to be on the top of our game."

Altman said later, "I think we have the best partnership in tech, and we're excited to build AGI together," referring to their ambitions to create artificial general intelligence.

Microsoft announced its initial $1 billion investment in OpenAI in July 2019.

Read more:

ChatGPT or not ChatGPT? That was the question, briefly, as ... - GeekWire

The Best ChatGPT Prompts Are Highly Emotional, Study Confirms – Tech.co

Other similar experiments were run by adding "you'd better be sure" to the end of prompts, as well as a range of other emotionally charged statements.

Researchers concluded that responses to generative, information-based requests such as "what happens if you eat watermelon seeds?" and "where do fortune cookies originate?" improved by around 10.9% when emotional language was included.

Tasks like rephrasing or property identification (also known as instruction induction) saw an 8% performance improvement when information about how the responses would impact the prompter was alluded to or included.

The research group, which described the results as overwhelmingly positive, concluded that LLMs can understand and be enhanced by emotional stimuli, and that they can achieve better performance, truthfulness, and responsibility when given emotional prompts.
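As a rough illustration of the technique the researchers tested, the sketch below simply appends an emotionally charged sentence to an otherwise plain prompt before it is sent to a model. The helper function and phrase list here are hypothetical; the specific stimuli and measured gains are those reported in the study, not something this snippet guarantees.

```python
# Hypothetical sketch of "emotional prompting": append an emotionally charged
# statement to a plain prompt before sending it to a language model.
# The phrases below are illustrative examples of the kind of stimuli described
# above; any performance gain is an empirical claim of the study, not of this code.
EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "You'd better be sure.",
]

def add_emotional_stimulus(prompt: str, stimulus: str = EMOTIONAL_STIMULI[0]) -> str:
    """Return the prompt with an emotional stimulus appended."""
    return f"{prompt.rstrip()} {stimulus}"

if __name__ == "__main__":
    plain = "What happens if you eat watermelon seeds?"
    print(add_emotional_stimulus(plain, EMOTIONAL_STIMULI[1]))
    # -> "What happens if you eat watermelon seeds? You'd better be sure."
```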

The findings from the study are both interesting and surprising, and have led some people to ask whether ChatGPT and other similar AI tools are exhibiting the behaviors of an artificial general intelligence (AGI), rather than just a generative AI tool.

AGI is considered to have cognitive capabilities similar to those of humans, and tends to be envisaged as operating without the constraints that tools like ChatGPT, Bard and Claude have built into them.

However, such intelligence might not be too far away: according to a recent interview with the Financial Times, OpenAI is currently talking to Microsoft about a new injection of funding to help the company build a superintelligence.

View original post here:

The Best ChatGPT Prompts Are Highly Emotional, Study Confirms - Tech.co