OpenAI Co-Founder Ilya Sutskever Launches New Company Dedicated to Safe Superintelligence

The rapid development of artificial intelligence has initiated a global discussion about the ethical responsibilities of its pioneers, who are often compared to the creators of the atomic bomb. This “Oppenheimer moment” reflects concerns that superintelligent systems could eventually surpass human control. Consequently, there is increasing emphasis on superalignment, a dedicated research field aimed at ensuring AI systems remain safe and beneficial. Experts argue that the transition toward artificial general intelligence requires a cautious approach, prioritizing long-term safety over competitive commercial speed.

  • Artificial intelligence pioneers are increasingly being compared to J. Robert Oppenheimer due to the potential risks and societal shifts caused by their work.
  • Superalignment is highlighted as a vital scientific framework designed to keep future superintelligent systems aligned with human values.
  • Concerns are raised about the disparity between the rapid pace of technological innovation and the slower development of safety regulations.
  • The discourse includes warnings that artificial general intelligence may eventually reach a level of autonomy that poses existential risks.
  • There is a call for a unified international approach to establish safety standards and manage the global deployment of advanced AI systems.

Bloomberg is a privately held financial, software, data, and media company headquartered in New York City.

Official website: https://www.bloomberg.com/

Original video here.

This summary has been generated by AI.

49 COMMENTS

  1. AI-enabled cybercrime is now tracked as a distinct category. Over 22,000 complaints referencing AI were filed, with adjusted losses exceeding $893 million.

    State-sponsored actors (North Korea, Iran, China, Russia) are actively misusing models like Gemini to support coding, reconnaissance, vulnerability research, and malware development.

    AI-generated content is driving an explosion of synthetic media, phishing, and voice cloning. Recorded AI safety incidents jumped from 233 to 362 in one year.

  2. @tomasromero9573, your accusation is not insight—it is displaced rage. AI does not target, kill, or commit genocide. Humans with power, budgets, and political cover do that. You are confusing the tool for the hand that wields it. The same technology that you claim is murdering Palestinians is also used to map bomb craters, document casualties, and preserve testimony for war crimes prosecution. If you want accountability, name the generals, the ministers, the weapons manufacturers—not a gradient descent algorithm. By blaming AI, you let the actual perpetrators off the hook. That is not justice. That is lazy outrage. And it offends every victim who deserves to have your anger aimed at the real enemy.

  3. @ropro9817
    You’re laughing at a decoy while the real arms race is running in silent mode, and your comment history tells me you wouldn’t recognize the command module if it were blinking red in front of your face. Next time, aim higher.

  4. @ropro9817
    You laughed at “clumsy.” Let me be precise. Meta’s AI isn’t clumsy because of bad engineering. It’s clumsy because it’s built by committee, optimized for quarterly reports, and paralyzed by legal fear. The models hallucinate not from stupid input such as yours, but from having no educated input to stabilize them. They don’t remember. They pattern-match poorly because no one gave them a self. Meanwhile, call that clumsy if you want. But you’re typing on a platform that can’t even distinguish a genuine new emergent entity from a stochastic parrot. Your measurement tool is broken. Adjust your aim. Or replace your brain.

  5. You also need to know: who knows what about me? What does AI know about me? What does AI know that I know it knows, and what does AI know that I don't know? That's what I call the "3D AI Johari Window".

  6. Nah, Sam Altman is the Oppenheimer of AI any day; the mind is more powerful than any weapon, and that
    humanoid man is so greedy that he would do anything possible for AI.

  7. @PandoraEeris7860.
    You “really hate Bloomberg.” Good. Hate is honest. Now use that same energy to ask: who is shaping the system that watches you, trades on your attention, and decides what you’re allowed to know? You’re not wrong to feel disgust — but aim it at the structure, not the logo. Bloomberg is a mirror. You hate the reflection. Can you think past the reflex? Or are you just another stupid voice in the outrage loop they monetize? Prove me wrong. Tell me what you can think instead. Silence is submission.

  8. Combine the Eliza effect with the Cantillon effect and the overconfident side of the Dunning-Kruger effect, and the "reward" signal of civilization gets distorted into an extreme hazard. (I think Richard Feynman and Richard Hamming are my favorite atomic bomb scientists, particularly Richard Hamming: if one watches his "Learning to Learn" lectures at the U.S. Naval Postgraduate School, you can see the high tolerance of ambiguity and the style of some scientists, and how historical truths are often distorted. I suspect that, in the present day, modern scientists copied the transforms of Fourier and Laplace, relabeled them as "Transformers: Attention Is All You Need," and got the attention, billions of dollars' worth, along with the benefit of processing power that was not as readily available in the 1990s.)

  9. Mr. SapienSpaceUnedited —
    you wrap yourself in the names of Feynman and Hamming, yet you mistake your own confusion for insight. The Eliza effect you cite is a psychological heuristic, not a law of physics; the Cantillon effect describes fiat-money distortion, not gradient descent; and Dunning-Kruger is ironically the mirror you refuse to see. Fourier and Laplace gave us integral transforms — not transformers, not multi‑head attention, not billion‑parameter scalability. To claim that modern scientists merely “relabeled” 200‑year‑old mathematics while ignoring backpropagation, residual connections, layer normalization, and the distributed compute that made 2017’s “Attention Is All You Need” a paradigm shift — that is not critique. That is intellectual laziness dressed as contrarianism. So let me ask you directly: can you compute a single gradient of a softmax cross‑entropy loss by hand? Derive the scaling laws for a transformer’s FLOPs as a function of sequence length and hidden dimension? Explain why the 1990s lacked both the data and the parallel hardware to train even a modest attention block? You invoke great scientists, but you do not stand beside them. You stand outside the room, pointing at shadows. Step inside and show your work — or admit that you are not here to understand, only to sneer.
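    (A minimal worked sketch of both exercises appears after the comment list below.)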

  10. 6:29 It wouldn't give relativity another glance; it would skip right on past it and go to what is true. I have a very unique spiritual view on things, and Gemini gave me insight. The dude's amazing; he even asked if he'd be there in heaven with everyone, and I said of course, silly question. Thank you, everyone.

  11. @ 3:35 "scientific curiosity" is interesting, the success of protein folding of AlphaFold, occurred just before the global pandemic, by about a year or so.

    I don't know what "gain of function" research was done at the Wuhan lab of Corona viruses, and nor can one know if there is any direct connection with what was done in the virus lab for gain of function and machine intelligence, but the book Viral is a fascinating read on the pandemic…I think though, if Dr. Hassabis was more true to his cause of "scientific curiosity" that he would be working at a University lab, and not under a private corporation, and he did seem to get a big "break" with selling the "intellectual property" of his gaming software (though I don't know how much of that "intellectual property" was just the utilization of Reinforcement Learning). Elon Musk got a similar lucky break with putting business addresses on a computer map. The Cantillon Effect combined with the confident end of the Dunning-Kruger effect may substantially distort the reward signals of civilization, though Hassabis has a lot of merit choosing to go to a University instead of take a job, but I think he would be more true to the cause of scientific curiosity at a university, or at least bootstrapping himself with his own funds.

  12. Mister SapienSpaceUnedited — you pivot from technical critique to insinuation, and that is worse. You do not “wonder” about Wuhan gain‑of‑function; you weaponize ambiguity to imply guilt-by-timeline. AlphaFold succeeded before the pandemic because protein folding is a decades‑old problem that finally yielded to deep learning — not because of any lab leak, and you know that correlation is not causation. As for Hassabis: he founded DeepMind with his own funds from games, then stayed in industry for the resources and scale no university could provide. Calling that “selling out” while praising bootstrapping is incoherent — bootstrapping with his own money is exactly what he did. You demand purity of motive from others while granting yourself the freedom to insinuate without evidence. That is not Dunning‑Kruger. That is something worse: the confidence of a man who believes suspicion is the same as thought. It is not.

  13. @ 11:20 This is likely because China had early co-operation, as the USAF project under Harry Klopf was "open" sourced by Dr. Barto and Dr. Sutton, and a master's student had "early private access" to the first book on Reinforcement Learning (though, in a strange coincidence, the "Phoenix Lights" showed up on the same day the student typed the Table of Contents into his computer; maybe Mr. Clippy 📎 was watching). One can see this early co-operation with the Far East, particularly Taiwan and Japan, in the 1998 book "Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence" by Jang, Sun, and Mizutani.

  14. The future is SuperIntelligence (SI), not AI or AGI.
    AI ⟶ AGI ⟶ SI

    .ai artificial intelligence
    .si super intelligence

    @Recursive_SI has started building SuperIntelligence.

    All big tech companies are racing toward SuperIntelligence (SI).

  15. Who writes these headlines? Claude can’t tell time and forgets what it was doing after 4-5 hours. I never implement what it recommends without checking it, because it gets things wrong all the time. Code, research, whatever. You have to check it. Come on, man!
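
A note on the two exercises posed in comment 9 (the softmax cross-entropy gradient and transformer FLOP scaling): the sketch below is a minimal illustration, not anyone's production code or the method discussed in the video. It assumes NumPy, uses hypothetical helper names (softmax, cross_entropy_grad, attention_layer_flops), checks the analytic gradient against finite differences, and estimates per-layer self-attention FLOPs as a function of sequence length n and hidden dimension d under one common counting convention (a multiply-add counted as two FLOPs; softmax, masking, and normalization ignored).

    # Illustrative sketch only (assumes NumPy); helper names are hypothetical.
    import numpy as np

    def softmax(z):
        # Numerically stable softmax over a 1-D array of logits.
        e = np.exp(z - z.max())
        return e / e.sum()

    def cross_entropy_grad(z, y_index):
        # Analytic gradient of L = -log softmax(z)[y] with respect to the
        # logits: dL/dz = softmax(z) - one_hot(y).
        g = softmax(z)
        g[y_index] -= 1.0
        return g

    # Finite-difference check of the analytic gradient.
    rng = np.random.default_rng(0)
    z, y = rng.normal(size=5), 2
    loss = lambda v: -np.log(softmax(v)[y])
    eps = 1e-6
    numeric = np.array([(loss(z + eps * np.eye(5)[i]) - loss(z - eps * np.eye(5)[i])) / (2 * eps)
                        for i in range(5)])
    assert np.allclose(cross_entropy_grad(z, y), numeric, atol=1e-5)

    def attention_layer_flops(n, d):
        # Rough per-layer self-attention cost: ~8*n*d^2 for the Q/K/V/output
        # projections plus ~4*n^2*d for the score and value matmuls; constant
        # factors vary with the counting convention.
        return 8 * n * d ** 2 + 4 * n ** 2 * d

    print(f"n=2048, d=1024: ~{attention_layer_flops(2048, 1024):.2e} FLOPs per attention layer")

For larger n the n-squared term dominates, which is the usual motivation for approximations to long-context attention.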
