Simplifying grossly, there are two vocal camps when it comes to AI Safety.
Those who don’t worry about AI existential risk tend to look at current capabilities and conclude that worrying about x-risk is dumb. Their refrain is “how can AI be a risk when it can’t even count the number of r’s in the word strawberry?”
Those who do worry about AI existential risk tend to look at the fact that model capabilities are improving rapidly across a diverse set of tasks, conclude that we are on the path to an ASI, and then reason from there that this will lead to x-risks. This is the “we took over the world by being smarter than apes” school of thought.
To my mind the second group is more correct, but the simplified version of this argument that people parrot on social media is flawed and unhelpful. Yes, if you define ASI as being superhuman at every intellectual task a human can perform, then it’s incredibly easy to explain how that could go badly. But you still need to show that exponential improvement along many dimensions of AI capability implies exponential improvement across all intellectual domains, and that is far from proven.
The reason the second group is more right is that we are on the verge of models that may not look anything like ASI, or even AGI, to most people, but that will still be capable of causing massive harm.