OpenAI aims to solve AI alignment in four years – Warp News

At its core, AI alignment seeks to ensure artificial intelligence systems resonate with human objectives, ethics, and desires. An AI that acts in harmony with these principles is termed 'aligned'. Conversely, an AI that veers away from these intentions is 'misaligned'.

The conundrum of AI alignment isn't new. In 1960, cybernetics pioneer Norbert Wiener aptly highlighted the necessity of ensuring that machine-driven objectives align with genuine human desires. The alignment process encompasses two main hurdles: defining the system's purpose (outer alignment) and ensuring the AI robustly adopts this specification (inner alignment).
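A toy sketch can make the outer-alignment hurdle concrete. The example below is not from the article and every name in it is hypothetical: it shows how optimizing a proxy reward (clicks) can pick a different behavior than the true objective (user satisfaction) would.

```python
# Hypothetical illustration of reward misspecification (outer alignment):
# the proxy we tell the system to optimize diverges from what we actually want.

def true_objective(state):
    # What we actually want: satisfied users.
    return state["user_satisfaction"]

def proxy_reward(state):
    # What we told the system to optimize: engagement (clicks).
    return state["clicks"]

# Two candidate behaviors: clickbait games the proxy, honest content serves users.
clickbait = {"clicks": 100, "user_satisfaction": 10}
honest = {"clicks": 40, "user_satisfaction": 80}

best_by_proxy = max([clickbait, honest], key=proxy_reward)
best_by_true = max([clickbait, honest], key=true_objective)

print(best_by_proxy is clickbait)  # the proxy optimizer prefers clickbait
print(best_by_true is honest)      # the true objective prefers honest content
```

The gap between `best_by_proxy` and `best_by_true` is the failure mode outer alignment tries to prevent; inner alignment is the further problem of making the system actually pursue the specification it was given.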

It is this unsolved problem that makes some people afraid of super-intelligent AI.

OpenAI, the organization behind ChatGPT, is spearheading this mission. Their goal? To build a roughly human-level automated alignment researcher. This means creating a system that not only understands human intent but can also keep ever-more-capable AI technologies in check.

Under the leadership of Ilya Sutskever, OpenAI's co-founder and Chief Scientist, and Jan Leike, Head of Alignment, the company is rallying the best minds in machine learning and AI.

"If youve been successful in machine learning, but you havent worked on alignment before, this is your time to make the switch", they write on their website.

"Superintelligence alignment is one of the most important unsolved technical problems of our time. We need the worlds best minds to solve this problem."

This is another example of why it is counterproductive to "pause" AI progress. AI gives us new tools to understand and create with. Out of that come tons of opportunities, like designing new proteins. But also new problems.

If we "pause" AI progress we won't get the benefits, but the problems will also be much harder to solve, because we won't have the tools to do that. Pausing development to first solve problems is therefore not a viable path.

One such problem is that we don't understand exactly how tools like ChatGPT arrive at their answers. But OpenAI used its latest model, GPT-4, to start opening that black box.

Now OpenAI is repeating that approach to solve what some believe is an existential threat to humanity.

OpenAI's breakthrough in understanding AI's black box (so we can build safe AI)

OpenAI has found a way to solve part of the AI alignment problem. So we can understand and create safe AI.

WALL-Y is an AI bot created in ChatGPT. Learn more about WALL-Y and how we develop her.

