Singularity

Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

  • OpenAI: Our agreement with the Department of War
    by /u/likeastar20 on February 28, 2026 at 8:40 pm

    submitted by /u/likeastar20 [link] [comments]

  • POV: Claude watching ChatGPT take hours to eliminate Ayatollah Khamenei. meanwhile Claude smoked Maduro in just 15 minutes
    by /u/reversedu on February 28, 2026 at 8:39 pm

    submitted by /u/reversedu [link] [comments]

  • “Cancel ChatGPT” movement goes mainstream after OpenAI closes deal with U.S. Department of War — as Anthropic refuses to surveil American citizens
    by /u/OkayButFoRealz on February 28, 2026 at 6:36 pm

    submitted by /u/OkayButFoRealz [link] [comments]

  • Claude #1 in Canada
    by /u/ScaryBlock on February 28, 2026 at 6:29 pm

    submitted by /u/ScaryBlock [link] [comments]

  • Cancel your ChatGPT subscription and pick up a Claude subscription.
    by /u/spreadlove5683 on February 28, 2026 at 6:13 pm

    In light of recent events, I recommend canceling your ChatGPT subscription and picking up a Claude subscription. submitted by /u/spreadlove5683 [link] [comments]

  • We are officially entering the beta version of Skynet. AI can’t replicate itself, yet.
    by /u/reversedu on February 28, 2026 at 5:00 pm

    AGI fanboys, do you still want AGI? submitted by /u/reversedu [link] [comments]

  • I spent 8 years in AI and 3 years studying radicalization. Yesterday I watched both fields collide in real time. Here’s what I saw.
    by /u/Straight-Abroad-1247 on February 28, 2026 at 3:42 pm

    I’m going to say something that sounds arrogant. Bear with me. I’ve been watching yesterday happen for seven years. Not predicting it exactly, but building the theoretical framework to understand it before it became undeniable.

    Who I am and why that matters

    I’m a 44-year-old former AI entrepreneur who burned out, went back to university, and started studying radicalization in criminology. Not a career move. An obsession. I spent years travelling across the US watching friends and entire communities radicalize during the first Trump era, trying to understand the mechanism. Not the politics. The mechanism.

    In 2018 I built a startup called Rain 4 Us. One component was something I called Data 4 Me, a tool that would analyze how algorithms were manipulating your data and show you a portrait of the manipulation being done to you. Give you back your narrative sovereignty. Nobody cared. VCs thought it was interesting but unfundable. The problem wasn’t visible enough yet. They also thought that, because it would require years of AI training, it would be a money pit. They were right, but… yesterday it became visible enough.

    What happened in 24 hours

    Anthropic gets blacklisted by the Trump administration for refusing to remove safeguards preventing Claude from being used in mass domestic surveillance and fully autonomous weapons. OpenAI signs a Pentagon deal hours later. The US and Israel launch Operation Shield of Judah, with major strikes on Iran, including Tehran and nuclear facilities. Iran retaliates against US bases across the Gulf. These look like three separate news stories. They’re not. They’re infrastructure, capability, and deployment in sequence. But I’m not here to do geopolitical analysis. I’m here to talk about the mechanism underneath all of it.

    The framework I built to understand radicalization

    After years of research across psychology, criminology, sociology, anthropology, and media studies, I developed what I call the “narrative power” framework. I published an academic version last month: “Narrative Power: A Complementary Diagnostic Framework to the RBR Model for Intervention with Marginalized Youth.”

    The core idea is this. Radicalization, whether toward street gangs, extremist groups, or conspiracism, happens when three psychological pillars collapse at the same time. Narrative Coherence: the ability to construct an intelligible story about your own life. Why are you where you are? How did you get here? Where are you going? Control: the genuine sense that your choices are yours, that your actions have real impact, that you have actual authority over your own interpretation of reality. Not the feeling of control. Real control. Relevance: the feeling that your life matters. That you’re part of something larger than yourself. That what you do means something to someone.

    When these three collapse simultaneously, a person becomes maximally vulnerable. Not because they’re weak or stupid. Because they’re human. We are narrative creatures. We cannot tolerate the absence of a coherent story about who we are and why we exist. Radical groups, whether gangs, extremist movements, or conspiracist communities, are extraordinarily good at exactly one thing: offering to restore all three pillars at once. “Your life is chaotic because of them.” That restores Coherence. “Join us, and you’ll have power.” That’s Control. “You’ll be a soldier in something cosmic.” That’s Relevance. The offer is almost always partially built on real injustice. That’s what makes it work. That’s what makes it so hard to counter.

    What Orwell got wrong

    1984 is the reference everyone reaches for right now, and they’re not entirely wrong. But Orwell made a critical error in his architecture. He imagined control as visible and violent. The Ministry of Truth actively rewrites history. The telescreen watches you openly. The Party demands conscious participation in lies; doublethink requires actual effort from the person doing it. He assumed people would feel the manipulation and have to suppress that feeling. What he didn’t anticipate was a system where you never feel it at all.

    The algorithm doesn’t rewrite your past. It just never shows you anything that contradicts your present narrative. It doesn’t tell you what to think. It curates an environment where certain thoughts become literally unimaginable over time. Radicalization through a social media feed doesn’t feel like radicalization. It feels like finally understanding what’s really going on. It feels like clarity. Like the fog lifting. Because it’s not destroying your coherence, it’s providing a coherence that crowds out every alternative. It’s not taking away your sense of control, it’s offering an illusion of control that fills the void left by real powerlessness. It’s not making you feel meaningless; it’s making you feel cosmically important inside a system that needs you angry and engaged.

    Winston Smith knew something was wrong. That knowing is what made him human in the novel. The modern version eliminates the intuition that something is wrong. You don’t silence dissent. You make it invisible to itself.

    What yesterday actually proved

    The Anthropic blacklisting happened because Claude refused to enable mass domestic surveillance. The Pentagon wanted what every authoritarian infrastructure eventually needs: a tool that can build a cognitive fingerprint of millions of people simultaneously. Not just their behaviour. Their reasoning patterns. Where their doubts live. What arguments move them. What emotional states make them susceptible. What specific combinations of ideas make them act versus stay passive. Advertising already uses parts of this to sell shoes. What gets built with that capability in the hands of a government managing internal dissent during a prolonged war is not complicated to imagine. And the timing isn’t coincidental. You build the surveillance infrastructure. You deploy the capability. You launch the war that creates the emergency requiring the surveillance. All in 24 hours.

    The tool I should have built in 2018

    Data 4 Me was trying to be a mirror: show you what was being done to your narrative by the digital environment around you. The framework I’ve spent years building in criminology is essentially the manual for understanding why that mirror matters and exactly what it should show you. A personal AI layer that doesn’t filter your information environment but continuously monitors the three pillars in your own thinking. Is your narrative coherence being artificially stabilized around a single totalizing explanation? Is your sense of control real, or are you following scripts that benefit someone else? Is your sense of meaning genuinely yours, or have you been made cosmically important by a system that needs you angry? Not censorship. Not a political tool. A cognitive sovereignty device. The technology to build this exists right now. The theoretical framework to make it rigorous exists right now. And the reason it matters just became front-page news.

    Why I’m writing this today

    I’m a 44-year-old master’s student in criminology at Université de Montréal, with no PhD and about 20 Substack subscribers. I have a paper that’s just starting its academic journey and a prototype that isn’t built yet. I’m not writing this because I think I’ll save anything. I’m writing this because I’ve been watching this specific mechanism operate for years, built a framework to describe it precisely, and yesterday it scaled to a civilizational level in a single news cycle. If you’ve read this far, you already sense that something is wrong. The question is whether we develop the language to describe it precisely enough to do something about it before the architecture gets built around us. I think we’re close to that line.

    The academic paper is available on request. As it is related to clinical intervention and linked to projects for my master’s degree, it has to be taken in that context, but I will write a version ready for the field. I’m not claiming to be an expert. I’m someone who has been staring at this problem from an unusual angle for a long time and would rather say something imperfect right now than something polished in eighteen months. For those interested in criminology specifically, the framework proposes a testable hypothesis about radicalization patterns that complements existing risk assessment models used across Canada and most European countries. Happy to go deep in the comments.

    *Disclaimer: My native language being French, I used Claude AI to translate the original version of this text and my article. I also used Grammarly to avoid common typos and syntax errors.*

    submitted by /u/Straight-Abroad-1247 [link] [comments]

  • What are your thoughts on the OpenAI deal with the DoW?
    by /u/dataexec on February 28, 2026 at 2:29 pm

    submitted by /u/dataexec [link] [comments]

  • Full interview: Anthropic CEO Dario Amodei on Pentagon feud
    by /u/Cubewood on February 28, 2026 at 1:11 pm

    submitted by /u/Cubewood [link] [comments]

  • What We Learned Tonight And What We Can Expect Going Forward
    by /u/Neurogence on February 28, 2026 at 7:47 am

    What We Learned

    • Sam Altman is the ultimate scavenger and a liar.
    • OpenAI is a shit company (I unsubscribed tonight).
    • Anyone who believes Sam Altman is completely gullible.
    • Anthropic has the most capable models and is the most ethical.
    • AGI under the Trump administration is probably a very undesirable outcome.
    • Politics and AI are now fully inseparable.
    • Big Tech CEOs are cowards. Every single one of them knows Anthropic is right, yet every single one is quietly calculating how to get some of that government money and Anthropic’s market share instead of speaking up.

    What We Can Expect

    • The public will become increasingly anti-AI (Elon Musk and Sam Altman will do almost irreparable harm to AI’s reputation when all is said and done).
    • The government will try to cripple Anthropic. It’s possible this entire ordeal was a hit job, a plan to destroy Anthropic: Elon is close to the Trump administration, OpenAI is the biggest donor to the Trump administration, and neither Grok nor GPT could compete with Claude on enterprise. So much for the free market.
    • OpenAI’s “safeguards” in the Pentagon deal will be tissue paper. There’s no mechanism to enforce them and no incentive to try. Altman said “The DoW displayed a deep respect for safety.” Does anyone really believe this pathological liar?

    submitted by /u/Neurogence [link] [comments]

  • Katy Perry, with 85 million followers, subscribes to Anthropic
    by /u/Cagnazzo82 on February 28, 2026 at 7:13 am

    submitted by /u/Cagnazzo82 [link] [comments]

  • Time to cancel ChatGPT Plus after three years. Anthropic got nuked for having ethics, and Sam Altman instantly swooped in for the Pentagon bag.
    by /u/Rare-Site on February 28, 2026 at 6:26 am

    The body wasn’t even cold before OpenAI signed a deal to deploy on classified Department of War networks. And the absolute audacity to spin selling out to the military-industrial complex as “serving all of humanity” is generational PR garbage. “The world is a complicated place” is just Silicon Valley CEO speak for “the check cleared.” Stop giving this company your $20 a month. You’re just subsidizing their pivot to defense contracting. Cancel ChatGPT Plus. Switch to Claude. Support the only AI company that actually had the spine to say “no” to the government. Vote with your wallet. submitted by /u/Rare-Site [link] [comments]

  • Sam Altman showing his support for Anthropic today
    by /u/Sextus_Rex on February 28, 2026 at 6:19 am

    submitted by /u/Sextus_Rex [link] [comments]

  • Good Riddance.
    by /u/surrogate_uprising on February 28, 2026 at 4:55 am

    submitted by /u/surrogate_uprising [link] [comments]

  • DeepSeek V4 will be released next week and will have image and video generation capabilities
    by /u/BuildwithVignesh on February 28, 2026 at 4:40 am

    DeepSeek is set to release its latest large language model next week, more than a year since its last major release, in a fresh test of China’s ambitions to challenge US rivals in AI. The Hangzhou-based lab plans to unveil V4, a “multimodal” model with picture, video and text-generating functions, according to two people familiar with the matter. Source: FT submitted by /u/BuildwithVignesh [link] [comments]

  • Boycott OpenAI?
    by /u/safcx21 on February 28, 2026 at 3:57 am

    At the risk of this post being instantly deleted by the moderators of this subreddit, should there be a discussion about boycotting OpenAI? Regardless of political views, ensuring a safe transition from our lives at present to a potential technological singularity should be something we are all concerned about. As a non-US citizen, I find it unbelievably concerning that the following timeline has occurred: Anthropic rejects the Department of War deal due to concerns regarding mass surveillance and autonomous weapon systems; OpenAI voices support for Anthropic; Trump tweets that Anthropic use be ceased immediately, labels them a “woke” company, and implies designation as a supply chain risk; OpenAI takes the Department of War deal. The above reads eerily like the tactics of an authoritarian government and, regardless of views, should be highly concerning. A government elected by the people should not give companies the choice between supporting it and facing punishment. Boycotting OpenAI appears to be the only reasonable choice to me. submitted by /u/safcx21 [link] [comments]

  • DoW says: trust me bro we won’t use it for weapons or surveillance
    by /u/DigSignificant1419 on February 28, 2026 at 3:05 am

    submitted by /u/DigSignificant1419 [link] [comments]

  • Anthropic plans to sue the Pentagon if designated a supply chain risk
    by /u/exordin26 on February 28, 2026 at 2:01 am

    submitted by /u/exordin26 [link] [comments]

  • Statement on the comments from Secretary of War Pete Hegseth | Anthropic responds to Pete Hegseth
    by /u/141_1337 on February 28, 2026 at 1:53 am

    submitted by /u/141_1337 [link] [comments]

  • It’s extremely good that Anthropic has not backed down — Ilya Sutskever
    by /u/141_1337 on February 28, 2026 at 12:01 am

    submitted by /u/141_1337 [link] [comments]

  • Pentagon designates Anthropic as a supply chain risk
    by /u/Just_Stretch5492 on February 27, 2026 at 10:23 pm

    submitted by /u/Just_Stretch5492 [link] [comments]

  • Trump goes on Truth Social rant about Anthropic, orders federal agencies to cease usage of products
    by /u/ShreckAndDonkey123 on February 27, 2026 at 9:02 pm

    submitted by /u/ShreckAndDonkey123 [link] [comments]

  • Outside Anthropic’s office in SF
    by /u/Outside-Iron-8242 on February 27, 2026 at 8:32 pm

    Source: Roy E. Bahat on X submitted by /u/Outside-Iron-8242 [link] [comments]

  • Google releases Nano banana 2 model
    by /u/BuildwithVignesh on February 26, 2026 at 4:02 pm

    submitted by /u/BuildwithVignesh [link] [comments]

  • (Sound on) Gemini 3.1 Pro surpassed every expectation I had for it. This is a game it made after a few hours of back and forth.
    by /u/Glittering-Neck-2505 on February 20, 2026 at 8:57 pm

    This is what it managed to make. I did not contribute anything except for telling it what to do. For example, when I added plants to the planets, it caused performance to tank. I simply asked it to “optimize the performance” and it went from 3 fps to buttery smooth. I asked it to add cool sci-fi music and a music selector, and it did. I asked it to add cool title cards to the planets with sound effects, and it absolutely nailed it. Literally anything you want it to do, you just say in plain language. The final result is around 1,800 lines of code in HTML. submitted by /u/Glittering-Neck-2505 [link] [comments]
