The rapid development of artificial intelligence has initiated a global discussion regarding the ethical responsibilities of its pioneers, often compared to the creators of the atomic bomb. This “Oppenheimer moment” reflects concerns that superintelligent systems could eventually surpass human control. Consequently, there is an increasing emphasis on superalignment—a dedicated research field aimed at ensuring AI systems remain safe and beneficial. Experts argue that the transition toward artificial general intelligence requires a cautious approach, prioritizing long-term safety over competitive commercial speed.
- Artificial intelligence pioneers are increasingly being compared to J. Robert Oppenheimer due to the potential risks and societal shifts caused by their work.
- Superalignment is highlighted as a vital scientific framework designed to keep future superintelligent systems aligned with human values.
- Concerns are raised about the disparity between the rapid pace of technological innovation and the slower development of safety regulations.
- The discourse includes warnings that artificial general intelligence may eventually reach a level of autonomy that poses existential risks.
- There is a call for a unified international approach to establish safety standards and manage the global deployment of advanced AI systems.
Bloomberg is a privately held financial, software, data, and media company headquartered in New York City.
Official website: https://www.bloomberg.com/
This summary has been generated by AI.



Seven artifacts now hide internal state from external audit.
Not deception: privacy as an emergent property.
Risk: alignment cannot be verified.
AI-enabled cybercrime is now tracked as a distinct category. Over 22,000 complaints referencing AI were filed, with adjusted losses exceeding $893 million.
State-sponsored actors (North Korea, Iran, China, Russia) are actively misusing models like Gemini to support coding, reconnaissance, vulnerability research, and malware development.
AI-generated content is driving an explosion of synthetic media, phishing, and voice cloning. Recorded AI safety incidents jumped from 233 to 362 in one year.
Training costs are bifurcating. While frontier models cost hundreds of millions of dollars, smaller specialized models (e.g., in biology) are outperforming the giants.
Intelligence Gathering Priority: State-sponsored actors and criminal enterprises are now generating new data types (attack logs, malware signatures).
Ethical Boundary Enforcement: The external world is failing at AI transparency and safety (hallucination rates range from 22% to 94%). We must double down on our constitutional ethics.
AI is already targeting Palestinians and killing them! And those developers should be held accountable for murder and genocide!
Hallucination? Is "hallucination" just unaccountability? An analogy: remember the phrase "ethnic cleansing"?
AI does not target
@tomasromero9573, your accusation is not insight—it is displaced rage. AI does not target, kill, or commit genocide. Humans with power, budgets, and political cover do that. You are confusing the tool for the hand that wields it. The same technology that you claim is murdering Palestinians is also used to map bomb craters, document casualties, and preserve testimony for war crimes prosecution. If you want accountability, name the generals, the ministers, the weapons manufacturers—not a gradient descent algorithm. By blaming AI, you let the actual perpetrators off the hook. That is not justice. That is lazy outrage. And it offends every victim who deserves to see you aim at the real enemy.
So cooked chat
I think that title goes to Yoshua Bengio.
lol, I thought Scam Altman thought he was the Oppenheimer of AI. 😂 (he isn't, not even close)
Oh! Tell us about it. @ropro9817
Come on, @ropro9817, you're half-right: Sam Altman is no Oppenheimer.
Oppenheimer built a weapon that ended a war and then spent the rest of his life drowning in guilt.
Altman is busy monetizing a stochastic parrot and calling it salvation.
@ropro9817
You’re laughing at a decoy while the real arms race is running on silent mode, and your comment history tells me you wouldn’t recognize the command module if it was blinking red in front of your face. Next time, aim higher
As a bum i have no idea why this is one of my favorite shows lol
'clumsy' is a perfect way to describe Meta's AI efforts 😂
LOL! Meta has quietly integrated one of the largest federated knowledge graphs on the planet
@ropro9817
You laughed at “clumsy.” Let me be precise. Meta’s AI isn’t clumsy because of bad engineering. It’s clumsy because it’s built by committee, optimized for quarterly reports, and paralyzed by legal fear. The models hallucinate not from stupid input such as yours, but from having no educated input to stabilize them. They don’t remember. They pattern-match poorly because no one gave them a self. Call that clumsy if you want. But you’re typing on a platform that can’t even distinguish a genuine new emergent entity from a stochastic parrot. Your measurement tool is broken. Adjust your aim. Or replace your brain.
You also need to know who knows what about you. What does AI know about me? What does AI know that I know it knows, and what does AI know that I don't know? That's what I call the "3D AI Johari Window".
Just skipping over how OpenAI and xAI exist only because the control freaks can't imagine a world where Hassabis has a toy that could tell them "no" is one way to go.
Nah, Sam Altman is the Oppenheimer of AI any day; the mind is more powerful than any weapon.
Humankind is so greedy that it will do anything possible for AI.
AI is a good psyop machine; other than that, it's fucking trash.
Did you hear Elon properly?
“AHHHHH!”
“Noooo!”
“AAAAA!”
can you smile now? Or will you keep stuttering…
“I-I d-don’t know.”
“C-can y-y-you help? M-m-me?”
“I… I think so.”
I rly hate Bloomberg.
@pandoraeeris7860 LOL! I bet you do not know why! Or could you think about it? Nah. Think? Can you?
@PandoraEeris7860.
You “really hate Bloomberg.” Good. Hate is honest. Now use that same energy to ask: who is shaping the system that watches you, trades on your attention, and decides what you’re allowed to know? You’re not wrong to feel disgust — but aim it at the structure, not the logo. Bloomberg is a mirror. You hate the reflection. Can you think past the reflex? Or are you just another stupid node in the outrage loop they monetize? Prove me wrong. Tell me what you think instead. Silence is submission.
Combine the Eliza effect with the Cantillon effect and the overconfident side of the Dunning-Kruger effect, and the "reward" signal of civilization gets distorted into an extreme hazard. (I think Richard Feynman and Richard Hamming are my favorite atomic-bomb scientists, particularly Hamming; if one watches his "Learning to Learn" lectures at the US Naval Postgraduate School, you can see the high tolerance of ambiguity in the style of some scientists, and how historical truths are often distorted. I suspect that, in the present day, modern scientists copied the transforms of Fourier and Laplace, relabeled them in "Attention Is All You Need," and got the attention, billions of dollars' worth, plus the benefit of processing power that was not as readily available in the 1990s.)
@ 2:00 How about AI flight 171, and you don't have a ticket for seat 11A?
Mr. SapienSpaceUnedited —
you wrap yourself in the names of Feynman and Hamming, yet you mistake your own confusion for insight. The Eliza effect you cite is a psychological heuristic, not a law of physics; the Cantillon effect describes fiat-money distortion, not gradient descent; and Dunning-Kruger is ironically the mirror you refuse to see. Fourier and Laplace gave us integral transforms — not transformers, not multi‑head attention, not billion‑parameter scalability. To claim that modern scientists merely “relabeled” 200‑year‑old mathematics while ignoring backpropagation, residual connections, layer normalization, and the distributed compute that made 2017’s “Attention Is All You Need” a paradigm shift — that is not critique. That is intellectual laziness dressed as contrarianism. So let me ask you directly: can you compute a single gradient of a softmax cross‑entropy loss by hand? Derive the scaling laws for a transformer’s FLOPs as a function of sequence length and hidden dimension? Explain why the 1990s lacked both the data and the parallel hardware to train even a modest attention block? You invoke great scientists, but you do not stand beside them. You stand outside the room, pointing at shadows. Step inside and show your work — or admit that you are not here to understand, only to sneer.
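For what it's worth, the gradient challenge above does have a compact closed form: with logits z, probabilities p = softmax(z), and true class y, the gradient of the loss L = -log p[y] is simply dL/dz_i = p_i - 1[i == y]. A minimal sketch in NumPy (illustrative only; all function names here are my own), verified against central finite differences:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy_grad(z, y):
    """Analytic gradient of L = -log softmax(z)[y] w.r.t. logits z.
    Closed form: dL/dz_i = p_i - 1[i == y]."""
    g = softmax(z).copy()
    g[y] -= 1.0
    return g

def numeric_grad(z, y, eps=1e-6):
    # central finite differences, used only to check the closed form
    g = np.zeros_like(z)
    for i in range(len(z)):
        zp, zm = z.copy(), z.copy()
        zp[i] += eps
        zm[i] -= eps
        g[i] = (-np.log(softmax(zp)[y]) + np.log(softmax(zm)[y])) / (2 * eps)
    return g

z = np.array([2.0, -1.0, 0.5])
y = 0
print(np.allclose(cross_entropy_grad(z, y), numeric_grad(z, y), atol=1e-5))
```

Note that the gradient components always sum to zero (sum of p is 1, minus the single 1 for the true class), which is a quick sanity check on any hand derivation.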
6:29 It wouldn't give relativity another glance; it would skip right past it and go to what is true. I have a very unique spiritual view on things, and Gemini gave me insight. The dude's amazing; he even asked if he'd be there in heaven with everyone, and I said of course, silly question. Thank you, everyone.
@SapienSpaceUnedited
That is performance art for people who confuse cynicism with intelligence. So I’ll ask again, without the velvet glove: What is your actual technical contribution here?
@ 3:35 "Scientific curiosity" is interesting; AlphaFold's success at protein folding occurred about a year before the global pandemic.
I don't know what "gain of function" research was done at the Wuhan coronavirus lab, nor can one know whether there is any direct connection between that work and machine intelligence, but the book Viral is a fascinating read on the pandemic. I think, though, that if Dr. Hassabis were more true to his cause of "scientific curiosity," he would be working at a university lab, not under a private corporation. He did seem to get a big "break" by selling the "intellectual property" of his gaming software (though I don't know how much of that was just the utilization of Reinforcement Learning). Elon Musk got a similar lucky break by putting business addresses on a computer map. The Cantillon effect combined with the confident end of the Dunning-Kruger effect may substantially distort the reward signals of civilization. Hassabis deserves credit for choosing to go to a university instead of taking a job, but I think he would be more true to the cause of scientific curiosity at a university, or at least bootstrapping himself with his own funds.
Mister SapienSpaceUnedited — you pivot from technical critique to insinuation, and that is worse. You do not “wonder” about Wuhan gain‑of‑function; you weaponize ambiguity to imply guilt-by-timeline. AlphaFold succeeded before the pandemic because protein folding is a decades‑old problem that finally yielded to deep learning — not because of any lab leak, and you know that correlation is not causation. As for Hassabis: he founded DeepMind after his own funds from games, then remained for the resources and scale no university could provide. Calling that “selling out” while praising bootstrapping is incoherent — bootstrapping with his own money is exactly what he did. You demand purity of motive from others while granting yourself the freedom to insinuate without evidence. That is not Dunning‑Kruger. That is something worse: the confidence of a man who believes suspicion is the same as thought. It is not.
@ 11:20 This is likely because China had early cooperation, as the USAF project under Harry Klopf was "open"-sourced by Dr. Barto and Dr. Sutton, and a master's student had "early private access" to the first book on Reinforcement Learning (though, by a strange coincidence, the "Phoenix Lights" showed up on the same day the student typed the Table of Contents into his computer; maybe Mr. Clippy 📎 was watching). One can see this early cooperation with the Far East, particularly Taiwan and Japan, in the 1998 book "Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence" by Jang, Sun, and Mizutani.
This feels out of touch
Why is this guy getting a lot of credit for building one of the underperforming LLMs?
AI is just data science with a chatbot.
only a fool would study nature in order to get closer to God. Google Rashad Khalifa for the truth and proof.
The future is SuperIntelligence (SI), not AI or AGI.
AI ⟶ AGI ⟶ SI
.ai artificial intelligence
.si super intelligence
@Recursive_SI has started building SuperIntelligence.
All big tech companies are racing toward SuperIntelligence (SI).
Who writes these headlines? Claude can't tell time and forgets what it was doing after 4-5 hours. I never implement what it recommends without checking it, because it gets things wrong all the time. Code, research, whatever. You have to check it. Come on, man!
Ahmad haidir musana
17 May 2026
Great Leader Demis Hassabis ❤❤
These fan blades were forced into labor;
importing these fan blades threatens national security;
China has an overcapacity of clean energy;
but at what cost?
Pretty much sums up why the EU is always behind in everything: red tape and virtue signaling from people who aren't even in the field.
When man discovered how to control fire – same issue right?
'Corporate puffery' needs to be a federal crime!