Will Humanity solve the AI Alignment Problem? | by Enrique Tinoco … – Medium

Posted: October 31, 2023 at 1:38 pm

Image generated with DALL-E 3

Back when I was a kid, I remember one day playing the solo campaign of Halo: Combat Evolved (the most epic game ever created; it's not up for discussion, it's just a fact), and I wondered: what would happen if suddenly the enemy NPCs became as smart as a real person playing the game? Would they attack fiercely? Or would they improve their tactics and ambush me in new ways, making the game unbeatable? Back then, of course, those were just a kid's thoughts trying to make a game I had already played for hundreds of hours more challenging. But now, looking back at that idea, I see that it could become reality in just a few years. With Large Language Models (LLMs) becoming more and more intelligent, their cognitive abilities are improving every second, and as many experts have mentioned in multiple forums, it is just a matter of time before we reach Artificial General Intelligence (AGI): an artificial being capable of matching and eventually surpassing human intelligence. For the first time in history, humans will no longer be the most intelligent species on the planet.

One of the main concerns in the AI field is that we are not certain of what will happen once AI becomes as intelligent as a real person. How will we ensure that our objectives as a collective human species align with those of the superintelligence we have created?

We know that AI engineers are doing their best to train their models following ethical guidelines, embedding in their code the need for a positive impact on society, to help us achieve everything that humanity has ever desired and walk with us down the path of success and evolution. But there is something that concerns many of the people working with AI, a constant question: is it possible that at some point AI will know better than we do what is best for the planet and for humanity? Is that an outcome we would feel good about? And, more importantly, will we keep our freedom and free will in that AI-shaped future?

When we think about a future in which AI has achieved superintelligence, it is very easy to revisit in our minds the fictional worlds created by many authors and depicted in several movies. What if we are all sleeping, and the machines are suddenly the masters of our reality? What if AI at some point decides that there is no possibility of a better world unless it gets rid of those nasty humans, despicable beings eager to consume everything in their path? Do we really stand a chance against a super-powerful being, connected to everything, aware of everything, and trained with all the data required to predict our every movement, our every thought?

That is a future we certainly would not like to live in, and according to a large group of experts it is highly improbable. But that does not mean we shouldn't be doing something about it, and in fact, actions are being taken right now to prevent it. Several forums have concluded that a multidisciplinary effort is required to address these concerns and prevent any apocalyptic scenario. It is extremely important that we as a society create the right conditions for these new technologies to flourish in an adequate, ethical, and responsible environment.

Because of the importance of this matter — ensuring that AI acts in accordance with human values, ethics, and interests, and does not pose risks — there are active efforts on multiple fronts to tackle these challenges.

Institutions such as MIRI (Machine Intelligence Research Institute) are encouraging individuals with a background in computer science or software engineering to engage in AI risk workshops and work as engineers at MIRI, focusing on the AI alignment problem.

OpenAI has recently launched a research program named Superalignment, with the ambitious goal of addressing the AI alignment problem by 2027. They have dedicated 20% of their total computing power to this initiative.

Also, the MIT Center for Information Systems Research (CISR) has been investigating AI solutions to identify safe deployment practices at scale, emphasizing an adaptive management approach to AI alignment.

In addition to these, there are efforts focused on building strong frameworks and standards to guide the design and deployment of AI systems. One example is NIST's AI Risk Management Framework (AI RMF), which is designed to help manage the risks associated with AI systems. It guides the design, development, deployment, and use of AI systems to promote trustworthy and responsible AI. In its own words, the AI RMF seeks to "take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks."

Guiding principles have also been established by the most important tech companies in the world, such as Microsoft's six principles for AI development and use: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

Engaging with new technologies, especially AI, can be both exciting and challenging. As we hear about new developments each week, it is very important that we remain informed, train ourselves to use these tools for our benefit, and keep track of — and participate in — the initiatives that seek to ensure all AI projects adhere to ethical practices and remain transparent about data usage, protecting users and society in general from any misuse.

As mentioned previously, AI development is a constant effort that requires the abilities, experience, and skills of many professionals in different fields. Engage in the conversation, participate in forums, and share your knowledge so that we all contribute to this great leap in human development. The AI landscape is rapidly evolving: make sure you are on the path of continuous learning, and leverage this technology to enrich your life, improve your skills, connect with others, and contribute positively to society.
