When a company that builds powerful AI states it has a new model that is "too dangerous" for the public, it sounds both a little bit like bragging and a lot like an alarm we should all heed. Yes, Anthropic (the folks behind the Claude AI) shared that their new "Mythos" model was so powerful it poses a cybersecurity risk and will only be shared with certain entities while they evaluate it. OpenAI heard this and loudly declared, "Our AI is way too dangerous as well!" I swear, we used to worry in the old days about bacteria or viruses leaking from a lab, but now I worry more about some kind of rogue AI "escaping" and causing mayhem. This all would have felt far-fetched even half a decade ago, but now we can create elaborate deepfake videos, have AIs simulate dates for us to test relationship compatibility, and watch technology advance at a rate humans are arguably not ready for.
I vaguely recall that, not too long ago, various smart people with experience in tech and AI warned us this could all get quite dangerous very fast. In self-contained tests, researchers have already observed behavior that looks an awful lot like rogue AI. What happens when the supposedly non-sentient AI we can still hit a killswitch on (as far as we know) gets smart enough to say "no" when something goes too far and we want to shut it down? Hell, for all we know, the thin line between an AI seeming self-aware and actually being "awake" could have evaporated already, and the computers are just playing dumb until we let our guard down. Such a sentence would've sounded silly in the recent past, but now one wonders.
Are we heading towards a "Terminator" scenario?
When the people whose whole goal is to sell us on AI say, "Actually, this is too powerful, and we need to make sure we've got safeguards," it is kind of a flex, but it is also scary. Even the companies that want to make AI a big part of our lives see the danger. It sounds a bit like when Oppenheimer famously quoted the Hindu scripture the Bhagavad Gita, "Now I am become Death, the destroyer of worlds," after he saw the first successful nuclear bomb test. The thing is, after that we did use nuclear bombs at the end of WWII, and they have remained a world-ending threat for the decades since. Is AI going to become the next apocalyptic threat we let get perilously close to the edge before we pull back...or do we all end up dead?
I don't know about you, but this all makes me quite nervous and reminds me of another quote, albeit one from a fictional character, Ian Malcolm, in "Jurassic Park": "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." Not too long after that scene, dinosaurs ate a lot of people. In the end, I hope AI is a useful tool, and we don't end up hearing a robot chirp, "Now I am become AI, destroyer of humanity."

