The rapid advancement of AI, especially in making models more compact and efficient, suggests that unaligned, open models capable of running on consumer hardware are inevitable.
In this world, where there is not a single monolithic superintelligent AI but rather a multitude of autonomous agents operating independently, AI safety efforts should focus on strategies that allow humans to coexist with unaligned AI systems rather than on trying to prevent their creation.
It's better to invent technology that deals with the fact that we have more innovation and more freedom than to prevent the innovation in the first place: