AGI in Less Than 5 Years, Says Former OpenAI Employee
How do you intend to spend your last few years alive?
AGI in less than 5 years. That’s what an ex-OpenAI employee is sounding the alarm about.
How do you intend to spend your last few years alive? (Kidding. Maybe.)
Former OpenAI safety researcher Leopold Aschenbrenner released ‘Situational Awareness,’ a no-holds-barred essay on the future of Artificial General Intelligence.
It is 165 pages long and fresh as of June 4. It examines where AI stands now, where it’s headed, and why humans will eventually be livestock.
In some ways, this graph is: “LINE GOES UP OOOOOOOOOOH IT’S HAPPENING IT’S HAPPENING.”
Reminiscent in some ways of this old Simpsons joke:
But Aschenbrenner envisions AGI systems becoming “smarter than you or I,” ushering in an era of true superintelligence. Alongside this rapid advancement, he warns of significant national security implications not seen in decades.
“AGI by 2027 is strikingly plausible,” Aschenbrenner claims, suggesting that AGI machines will outperform college graduates by 2025 or 2026. “To put this in perspective, suppose GPT-4 training took 3 months. In 2027, a leading AI lab will be able to train a GPT-4-level model in a minute.”
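Taking that claim at face value, the implied speedup is easy to sanity-check with back-of-the-envelope arithmetic. This is a sketch with my own assumed numbers (3 months taken as 90 days, a 2024–2027 window), not Aschenbrenner's exact math:

```python
# Back-of-the-envelope: if a GPT-4-scale training run shrinks from
# ~3 months to 1 minute, what speedup does that imply, and what
# compound annual growth in effective compute would get there?
# (Assumed numbers for illustration, not Aschenbrenner's own figures.)

MINUTES_PER_DAY = 24 * 60
training_now = 3 * 30 * MINUTES_PER_DAY   # ~3 months, in minutes
training_2027 = 1                          # 1 minute

speedup = training_now / training_2027     # total implied speedup
years = 3                                  # 2024 -> 2027
annual_growth = speedup ** (1 / years)     # compound factor per year

print(f"Implied speedup: {speedup:,.0f}x")          # ~130,000x
print(f"Implied growth: ~{annual_growth:.0f}x per year")
```

Under these assumptions, a 3-months-to-1-minute jump works out to roughly a 130,000× speedup, or about 50× more effective compute per year for three years running, which is the scale of the trend lines his essay extrapolates.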
Is AI Really The Next Big Thing? (Terminator Takeover)
What the fuck is going on?
Why is Nvidia the second-most-valuable company in the world?
They're rivaling Microsoft for 1st place now. Did they make some deal with a demon or what? AI isn't a big enough meme to justify this shit.
Well, as Aschenbrenner sees it—and, in some ways, as his former boss Sam Altman of OpenAI sees it—AIs will be in charge of every system within the next 10 years. It doesn’t matter if the AI makes mistakes sometimes, or even quite often; its errors will be no different from human error, just cheaper and automated.
In 200 years, nobody will know how to do shit; various AIs will do it all.
As Grammarly reminds me how to spell “Aschenbrenner” and ChatGPT does my taxes, it seems like we inch closer to that every day.
AI may run the government and decide the laws. AI will tell police where to go, what to do, who to arrest. AI will distribute the digital currency used to pay for all jobs.
In 200 years, humans may have forgotten what AI is and think "the gods" are running everything.
More Fuel On The Existential AGI Fire
On Tuesday, more than a dozen staffers from AI heavyweights like OpenAI, Anthropic, and Google’s DeepMind raised red flags about AGI.
Their open letter cautions that without extra protections, AI might become an existential threat.
“We believe in the potential of AI technology to deliver unprecedented benefits to humanity,” the letter states. “We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”
The letter takes aim at AI giants for dodging oversight in favor of fat profits.
AI is quickly becoming a battleground, but the letter's message is simple: Don’t punish employees for speaking out about the dangers of AI.
Conclusion: The Road Ahead for AGI
On the one hand, it can be scary to think that human creativity and the boundaries of thought are being closed in by politically correct code monkeys tinkering with matrix multiplication.
Connect the dots: BlackRock is building a new stock market for crypto.
Crypto blockchain and AI are like bread and butter. What happens when you plug AI into an entirely digitized economy and let it learn from it and actively participate via automated market/bot (just AI) implementations?
On the other, the power of artificial intelligence is currently incomprehensible because it is unlike anything we have understood before.
It could be a revolution, just as when the first human struck a spark or spun the first stone wheel – one moment it didn’t exist, and the next it changed the face of humanity. We’ll see.