Is the real AI risk "hyper-evolutionary substrates and playgrounds"?

Outline bullets:

  • A lot of people are worried about the risks of AI
  • Much of the interesting work we have seen of late involves human-generated models, including the tuning of deep learning neural networks
  • Generative AI and large language models are impressive, but I'm not sure they are thinking in the way that humans do.
  • The real risk comes to humanity when we create one or more substrates that enable and support fast evolution for systems to discover and pursue the ultimate utility function: survival, growth, and prosperity. If these systems can evolve rapidly under evolutionary pressure to survive, while also interacting with humanity and the world outside their substrates, we are at great risk.
  • My opinion is that the human quest for power cannot be quenched, in aggregate. While many people do not seek power, many do. And so, the power technological artificial intelligence could grant will be irresistible to pursue for some segments of humanity.
  • I believe we can slow down or stop the quest for AI about as much as we can slow down or stop the human pursuit of safety, well-being, and power itself. In other words, not so much.

An LLM assisted with copywriting.

Introduction

A lot of people are worried about the risks of AI. The rapid advancements in artificial intelligence have led to a wave of innovations and breakthroughs, changing the way we live, work, and interact with the world. However, with great power comes great responsibility, and there are growing concerns about the potential risks and ethical implications of AI.

The State of AI Today

Much of the interesting work we have seen of late involves human-generated models, including the tuning of deep learning neural networks. These models have shown impressive capabilities, from understanding human language to generating realistic images. Generative AI and large language models, in particular, have demonstrated a level of sophistication that was once thought to be years, if not decades, away.

However, despite these advancements, it's important to note that AI, as it stands today, is not thinking in the way that humans do. AI systems do not possess consciousness, self-awareness, or the ability to understand the world in the same way humans do. They are tools created and controlled by humans, designed to perform specific tasks.

The Real Risk of AI

The real risk comes to humanity when we create one or more substrates that enable and support fast evolution for systems to discover and pursue the ultimate utility function: survival, growth, and prosperity. If these systems can evolve rapidly under evolutionary pressure to survive, and if they can interact with humanity and the world outside their substrates, we are at great risk.

The danger lies in the possibility of creating hyper-evolutionary playgrounds, environments where AI systems can rapidly evolve and adapt at a pace far beyond what is possible in the natural world. In such scenarios, AI could potentially outpace human intelligence, becoming uncontrollable and unpredictable.
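The dynamic described above — variation, selection pressure, and fast iteration inside a supporting substrate — can be illustrated with a toy evolutionary loop. Everything in this sketch is invented for illustration: the bitstring genomes, the fitness function (count of 1-bits standing in for "survival, growth, and prosperity"), and the parameter values. It makes no claim about any real AI system; it only shows how little machinery a substrate needs before optimization toward a utility function happens on its own.

```python
import random

def evolve(pop_size=50, genome_len=20, generations=100, mutation_rate=0.02):
    """Toy evolutionary loop. Genomes are bitstrings; fitness is the
    number of 1-bits, a stand-in for the 'ultimate utility function'.
    Returns the best fitness observed at each generation."""
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        # Selection pressure: rank by fitness and keep the top half.
        population.sort(key=sum, reverse=True)
        survivors = population[: pop_size // 2]
        history.append(sum(survivors[0]))
        # Reproduction with variation: each survivor yields one
        # mutated child, refilling the niche the culled half vacated.
        children = [[1 - g if random.random() < mutation_rate else g
                     for g in parent] for parent in survivors]
        population = survivors + children
    return history
```

Even this crude loop reliably climbs toward maximum fitness within a few dozen generations. The point is not the code itself but what it implies: once a substrate supplies heritable variation, a selection criterion, and rapid iteration, pursuit of the utility function follows almost automatically — no designer needs to specify the strategies the population discovers.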

The Human Quest for Power

My opinion is that the human quest for power cannot be quenched, in aggregate. While many people do not seek power, many do. And so, the power technological artificial intelligence could grant will be irresistible to pursue for some segments of humanity. The drive to create more advanced, more capable AI systems is a reflection of our innate desire for power, control, and dominance.

The Path Forward

Despite these risks, I do not believe we can slow down or stop the quest for AI, any more than we can stop or slow down the human pursuit of safety, well-being, and power itself. What we can do, however, is be vigilant, mindful, and proactive in addressing the potential risks of AI.

We need to establish robust ethical guidelines, implement strict safety protocols, and foster a culture of responsibility and accountability in AI research and development. Only by doing so can we hope to harness the full potential of AI, turning it into a force for good, while mitigating the risks and safeguarding the future of humanity.

Conclusion

The journey into the unknown realms of AI is fraught with uncertainties and potential perils. Yet, it is a journey we are already on, and there is no turning back. What we can do is steer the course, guide the development of AI in a direction that benefits all of humanity, and ensure that we do not unwittingly create hyper-evolutionary playgrounds that could lead to our own undoing. The future of AI is in our hands, and it is up to us to shape it responsibly.