Singularity

Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

  • A reminder that the quality of a benchmark matters as much as the quality it’s supposed to measure
    by /u/Disastrous_Room_927 on January 16, 2026 at 5:16 am

    submitted by /u/Disastrous_Room_927 [link] [comments]

  • I miss searching the Web for Answers
    by /u/SoonBlossom on January 16, 2026 at 4:21 am

    Stumbling upon pages and pages of documents and having to search through them for what you need. Exploring some obscure ten-year-old Stack Overflow post where people discuss a solution. Having to understand and figure out what is written. Falling down rabbit holes when you stumble upon something very interesting that you can’t understand at first, and the more you search, the more interesting and deep things there are to uncover and understand about it. AI is awesome, and I really hope it keeps getting better, because I think at some point it’ll end up helping research a lot, helping find cures for diseases, saving lives, etc. But I dread a bit having to go through this “sanitized” space, where things are already figured out, where all you do is read an answer, review already-written code, etc. It’s not the case for 100% of tasks, obviously, but it has replaced a lot of them already, and it’ll only spread; at some point “mundane intelligence” will be “solved”, and if you’re not a top expert in your domain you’ll probably find 85% of what you need through it (at least in programming). Of course, you can still keep doing it the “old way”, but that’s just “losing time for fun”; there is a saying about “optimizing the fun out of a task”, and I feel that’s where it’s heading for people who liked the process as much as the result. I wonder if some people miss that too: having to put on your searcher hat and go exploring the web looking for answers. Does anyone feel the same? submitted by /u/SoonBlossom [link] [comments]

  • Anthropic Report finds long-horizon tasks at 19 hours (50% success rate) by using multi-turn conversation
    by /u/SrafeZ on January 16, 2026 at 3:58 am

    Caveats are in the report. The models and agents can be stretched in various creative ways to perform better: we saw this recently with Cursor getting many GPT-5.2 agents to build a browser within a week, and now with Anthropic using multi-turn conversations to squeeze out gains. The methodology differs from METR’s, which has the agent run only once. This is reminiscent of 2023/2024, when Chain of Thought was used as a prompting strategy to improve model outputs before eventually being baked into training. We will likely see the same progression with agents. submitted by /u/SrafeZ [link] [comments]

  • Will SaaS die within 5 years?
    by /u/Professional-Buy-396 on January 15, 2026 at 10:45 pm

    Recently Michael Truell, CEO of Cursor, posted that GPT-5.2 Codex agents just vibecoded a somewhat working browser with 3 million lines of code. With AI models getting better every 3 to 7 months, and hardware improving every year, will we be able to just “vibecode” our own Photoshop on demand? The new SaaS would essentially be the AI’s token usage. For example, I played a table game with friends, but it was kind of expensive for me to acquire, so I just spun up Antigravity with Opus 4.5 and Gemini 3 and completely vibecoded the whole game in half a day, with a local connection so everyone could play on their phone browser, a nice virtual board, controls, and rules enforcement (which could be turned off for more dynamic play), while the PC served as a local host. What do you guys think about this? SaaS = Software as a Service. Update: my takeaway after reading the responses is that this sort of thing will be a huge incentive for companies not to enshittify the software as much and not to rug-pull us as much. submitted by /u/Professional-Buy-396 [link] [comments]

  • How I open the internet every day to see if there is something new in AI models
    by /u/reversedu on January 15, 2026 at 9:59 pm

    submitted by /u/reversedu [link] [comments]

  • People getting tricked by a fake AI influencer
    by /u/G0dZylla on January 15, 2026 at 9:25 pm

    This is just the beginning, and remember that most people have no idea how good image generation has gotten. Edit: even people in the comments of THIS sub, who are supposedly exposed to more AI content, believe it. It’s over. submitted by /u/G0dZylla [link] [comments]

  • How long before we have the first company entirely run by AI with no employees?
    by /u/RevolutionStill4284 on January 15, 2026 at 8:22 pm

    Five, ten years from now? More? At that point, I believe we will just drop the “A” in AI. submitted by /u/RevolutionStill4284 [link] [comments]

  • A headline from 1986.
    by /u/GenLabsAI on January 15, 2026 at 8:03 pm

    submitted by /u/GenLabsAI [link] [comments]

  • Will Substrate disrupt the chip market?
    by /u/power97992 on January 15, 2026 at 7:45 pm

    submitted by /u/power97992 [link] [comments]

  • WTF is up with Claude
    by /u/Purgatory_666 on January 15, 2026 at 5:54 pm

    I have been facing a lot of issues with Claude for the past few weeks. For starters, the website doesn’t load at all. Some chats go missing randomly. Sonnet 4.5 is being weirdly nice: instead of evaluating and questioning my logic, it just accepts things as they are and commends me on them (for no apparent reason). Now I’m not able to send messages in the chat (web); it just quits on me, and I tried two different browsers and devices. I generally prefer Claude over ChatGPT 5.2 for its reasoning and logical capabilities, but ChatGPT’s “extended thinking” is now working better for my research and academic purposes than Claude’s. As a matter of fact, ChatGPT’s answers now have a better flow in their chain of reasoning. submitted by /u/Purgatory_666 [link] [comments]

  • “OpenAI and Sam Altman Back A Bold New Take On Fusing Humans And Machines” [Merge Labs BCI – “Merge Labs is here with $252 million, an all-star crew and superpowers on the mind”]
    by /u/ThePlanckDiver on January 15, 2026 at 4:36 pm

    submitted by /u/ThePlanckDiver [link] [comments]

  • The Cantillon Effect of AI
    by /u/ActualBrazilian on January 15, 2026 at 2:51 pm

    The Cantillon Effect is the economic principle that the creation of new money does not affect everyone equally or simultaneously. Instead, it disproportionately benefits those closest to the source of issuance, who receive the money first and are able to buy assets before prices fully adjust. Later recipients, such as wage earners, encounter higher costs of living once inflation diffuses through the economy. The result is not merely that “the rich get richer,” but a structural redistribution of real resources from latecomers to early adopters.

    Coined by the 18th-century economist Richard Cantillon, the effect explains how money creation distorts relative prices long before it changes aggregate price levels. New money enters the economy through specific channels: first public agencies, then government contractors, then financial institutions, then those who transact with them, and only much later the broader population. Sectors in first contact with the new money expand, attract labor and capital, and shape incentives. Other sectors atrophy. By the time inflation is visible in aggregates like the Consumer Price Index, the redistribution has already occurred. The indicators experts typically monitor are blind to these structural effects.

    Venezuela offers a stark illustration. Economic activity far from the state withered, while the government’s share of the economy inflated disproportionately. What life remained downstream was dependent on political proximity and patronage, not productivity. Hyperinflation marked the point at which the effects became evenly manifested, but the decisive moment, the point of no return, occurred much earlier, at first contact between new money and the circulating economy.

    In physics, an event horizon is not where dramatic effects suddenly appear. Locally, nothing seems special. But globally, the system’s future becomes constrained; reversal is no longer possible. Hyperinflation resembles the visible aftermath, not the horizon itself. The horizon is crossed when the underlying dynamics lock in.

    This framework generalizes beyond money. Artificial intelligence represents a new issuance mechanism, not of currency but of intelligence. And like money creation, intelligence creation does not diffuse evenly. It enters society through specific institutions, platforms, and economic roles, changing relative incentives before it changes aggregate outcomes. We have passed the AI event horizon already. The effects are simply not yet evenly distributed.

    Current benchmarks make this difficult to see if one insists on averages. AI systems now achieve perfect scores on elite mathematics competitions, exceed human averages on abstract reasoning benchmarks, solve long-standing problems in mathematics and physics, dominate programming contests, and rival or exceed expert performance across domains. Yet this is often dismissed as narrow or irrelevant because the “average person” has not yet felt a clear aggregate disruption. That dismissal repeats the same analytical error economists make with inflation. What matters is not the average, but the transmission path.

    The first sectors expanding under this intelligence injection are those closest to monetization and behavioral leverage: advertising, recommender systems, social media, short-form content, gambling, prediction markets, financial trading, surveillance, and optimization-heavy platforms. These systems are not neutral applications of intelligence. They shape attention, incentives, legislation, and norms. They condition populations before populations realize they are being conditioned. Like government contractors in a monetary Cantillon chain, they are privileged interfaces between the new supply and real-world behavior.

    By the time experts agree that something like “AI inflation” or a “singularity” is happening, the redistribution will already have occurred. Skills will have been repriced. Career ladders will have collapsed. Institutional power will have consolidated. Psychological equilibria will have shifted.

    The effects are already visible, though not in the places most people are looking. They appear as adversarial curation algorithms optimized for engagement rather than welfare; as early job displacement and collapsing income predictability; as an inability to form stable expectations about the future; as rising cognitive and emotional fragility. Entire populations are being forced into environments of accelerated competition against machine intelligence without corresponding social adaptation. The world economy increasingly depends on trillion-dollar capital concentrations flowing into a handful of firms that control the interfaces to this new intelligence supply.

    What most people are waiting for, a visible aggregate disruption, is already too late to matter in causal terms. That moment, if it comes, will resemble hyperinflation: the point at which effects are evenly manifested, not the point at which they can be meaningfully prevented. We have instead entered a geometrically progressive, chaotic period of redistribution, in which relative advantages compound faster than institutions can respond.

    Unlike fiat money, intelligence is not perfectly rivalrous, which tempts some to believe this process must be benign. But the bottleneck is not intelligence itself; it is control over deployment, interfaces, and incentive design. Those remain highly centralized. The Cantillon dynamics persist, not because intelligence is scarce, but because access, integration, and influence are.

    We are debating safety, alignment, and benchmarks while the real welfare consequences are being decided elsewhere by early-expanding sectors that shape behavior, law, and attention before consensus forms. These debates persist not only because experts are looking for the wrong signals, but because they are among the few domains where elites still feel epistemic leverage. Structural redistribution via attention systems and labor repricing is harder to talk about because it implicates power directly, not abstract risk. That avoidance itself is part of the Cantillon dynamic.

    The ads, the social media feeds, the short-form content loops, the gambling and prediction markets are not side effects. They are the first recipients of the new intelligence. And like all first recipients under a Cantillon process, they are already determining the future structure of the economy long before the rest of society agrees that anything extraordinary has happened.

    This may never culminate in a single catastrophic break and dissolution. Rather, the event horizon already lies behind us, and the spaghettification of human civilization has just begun. submitted by /u/ActualBrazilian [link] [comments]

  • Could AI let players apply custom art styles to video games in the near future? (Cross-post for reference)
    by /u/Jet-Black-Tsukuyomi on January 15, 2026 at 2:17 pm

    submitted by /u/Jet-Black-Tsukuyomi [link] [comments]

  • Tesla built the largest lithium refinery in America in just 2 years, and it is now operational
    by /u/JP_525 on January 15, 2026 at 1:47 pm

    submitted by /u/JP_525 [link] [comments]

  • PixVerse R1 generates persistent video worlds in real time. Paradigm shift or early experiment?
    by /u/Weird_Perception1728 on January 15, 2026 at 12:00 pm

    I came across a recent research paper on real-time video generation, and while I’m not sure I’ve fully grasped everything in it, it still struck me how profoundly it reimagines what generative video can be. Most existing systems still work in isolated bursts, creating each scene separately without carrying forward any true continuity or memory. Even though we can edit or refine outputs afterward, those changes don’t make the world evolve while staying consistent. This new approach makes the process feel alive: each frame grows from the last, and the scene starts to remember its own history and existence.

    The interesting thing was how they completely rebuilt the architecture around three core ideas that turn video into something much closer to a living simulation. The first piece unifies everything into one continuous stream of tokens. Instead of handling text prompts separately from video frames or audio, they process all of it together through a single transformer trained on massive amounts of real-world footage. That setup learns the physical relationships between objects instead of just stitching together separate outputs from different systems. Then there’s the autoregressive memory system. Rather than spitting out fixed five- or ten-second clips, it generates each new frame by building directly on whatever came before it. The scene stays spatially coherent and remembers events that happened just moments or minutes earlier; you’d see something like early battle damage still affecting how characters move around later in the same scene. Finally, they tie it all together in real time at up to 1080p through something called the instantaneous response engine. From what I can tell, they seem to have managed to cut the usual fifty-step denoising process down to a few steps, maybe just 1 to 4, using something called temporal trajectory folding and guidance rectification.

    PixVerse R1 puts this whole system into practice. It’s a real-time generative video system that turns text prompts into continuous, coherent simulations rather than isolated clips. In its beta version there are several presets, including Dragons Cave and Cyberpunk themes. Their Dragons Cave demo shows 15 minutes of coherent fantasy simulation where environmental destruction actually carries through the entire battle sequence. Veo gives incredible quality but follows the exact same static pipeline everybody else uses. Kling makes beautiful physics but is stuck with 30-second clips. Runway is an AI-driven tool specializing in in-video editing. Some avatar streaming systems come close, but nothing with this type of architecture.

    Error accumulation over very long sequences makes sense as a limitation. Still, getting 15 minutes of coherent simulation running on phone hardware pushes what’s possible right now. I’m curious whether the memory system or the single-step response ends up scaling first, since they seem to depend on each other for really long coherent scenes. If these systems keep advancing at this pace, we may very well be witnessing the early formation of persistent synthetic worlds, with spaces and characters that evolve nearly instantly. I wonder if this generative world could be bigger and more transformative than the start of digital media itself, though it may just be too early to tell. Curious what you guys think of the application and mass adoption of this tech. submitted by /u/Weird_Perception1728 [link] [comments]
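    A minimal sketch of the autoregressive-memory loop the post above describes, for readers who want a concrete picture. Everything here is an assumption made for illustration: MemoryBank, denoise_few_steps, and generate_stream are hypothetical stand-ins, not PixVerse R1’s actual architecture or API, and the “denoiser” is a placeholder rather than a real diffusion model.

# Illustrative sketch (assumed names, NOT the PixVerse R1 API): an autoregressive
# video loop where each new frame is conditioned on a rolling memory of prior
# frames, so earlier events persist instead of each clip being generated alone.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MemoryBank:
    """Rolling store of past frame tokens (the post's 'autoregressive memory')."""
    max_frames: int = 900            # e.g. ~30 s at 30 fps kept in full detail
    frames: List[list] = field(default_factory=list)

    def add(self, frame_tokens: list) -> None:
        self.frames.append(frame_tokens)
        if len(self.frames) > self.max_frames:
            self.frames.pop(0)       # older history would be compressed in practice

def denoise_few_steps(context_tokens: list, steps: int = 4) -> list:
    """Placeholder for the claimed 1-4 step denoiser; here it just echoes context."""
    return context_tokens[-16:]      # stand-in for a real diffusion/transformer call

def generate_stream(prompt: str, num_frames: int) -> List[list]:
    memory = MemoryBank()
    prompt_tokens = list(prompt)     # stand-in tokenizer: one token per character
    video = []
    for _ in range(num_frames):
        # Unified stream: prompt tokens plus remembered frames condition the next frame.
        context = prompt_tokens + [t for f in memory.frames for t in f]
        frame = denoise_few_steps(context)
        memory.add(frame)            # the new frame becomes part of the history
        video.append(frame)
    return video

frames = generate_stream("dragons cave, torchlight", num_frames=8)
print(len(frames), "frames generated with persistent context")

    The point of the sketch is only the data flow: each frame is generated from the prompt plus a rolling window of previously generated frames, which is what would let early battle damage keep influencing the scene minutes later.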

  • Prompting Claude when it makes mistakes
    by /u/reversedu on January 15, 2026 at 11:41 am

    submitted by /u/reversedu [link] [comments]

  • MIT shows Generative AI can design 3D-printed objects that survive real-world daily use
    by /u/BuildwithVignesh on January 15, 2026 at 10:30 am

    MIT CSAIL researchers introduced a generative AI system called “MechStyle” that designs personalized 3D-printed objects while preserving mechanical strength. Until now, most generative AI tools focused on appearance. When applied to physical objects, designs often failed after printing because structural integrity was ignored. MechStyle solves this by combining generative design with physics-based simulation. Users can customize the shape, texture, and style of an object while the system automatically adjusts internal geometry to ensure durability after fabrication. The result is AI-designed objects that are not just visually unique but strong enough for daily use, such as phone accessories, wearable supports, containers, and assistive tools. This is a step toward AI systems that reason about the physical world, not just pixels or text, and could accelerate personalized manufacturing at scale. Source: MIT News https://news.mit.edu/2026/genai-tool-helps-3d-print-personal-items-sustain-daily-use-0114 Image: MIT CSAIL, with assets from the researchers and Pexels (from source) submitted by /u/BuildwithVignesh [link] [comments]
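    To make the “generative style plus physics simulation” loop described above concrete, here is a toy sketch. It is not MechStyle’s code or algorithm: the style update is random noise, a textbook cantilever-beam stress formula stands in for the physics-based simulation (a real system would use finite-element analysis), and all material and load numbers are assumptions.

# Toy illustration (NOT MechStyle): alternate between a "style" edit to a part's
# geometry and a physics check that rejects edits which would make the part too weak.
import random

LOAD_N = 20.0          # assumed end load on the part, in newtons
LENGTH_M = 0.10        # assumed part length, in meters
WIDTH_M = 0.02         # assumed part width, in meters
YIELD_PA = 40e6        # rough yield strength for printed PLA (assumption)

def max_bending_stress(thickness_m: float) -> float:
    """Max stress of a rectangular cantilever under an end load: 6FL / (b t^2)."""
    return 6.0 * LOAD_N * LENGTH_M / (WIDTH_M * thickness_m ** 2)

def style_update(thickness_m: float) -> float:
    """Stand-in for a generative 'style' edit that may thin or thicken the wall."""
    return max(0.0005, thickness_m + random.uniform(-0.0008, 0.0008))

def stylize_with_strength_check(thickness_m: float, iters: int = 50) -> float:
    for _ in range(iters):
        candidate = style_update(thickness_m)
        # Accept the stylistic change only if the simulated part still holds up.
        if max_bending_stress(candidate) < YIELD_PA:
            thickness_m = candidate
    return thickness_m

final = stylize_with_strength_check(0.004)
print(f"final wall thickness: {final*1000:.2f} mm, "
      f"max stress: {max_bending_stress(final)/1e6:.1f} MPa")

    The design point matches the article’s claim: the generative step is free to restyle the geometry, but every candidate must pass a strength check before it is accepted.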

  • Why We Are Excited About Confessions
    by /u/TMWNN on January 15, 2026 at 6:47 am

    submitted by /u/TMWNN [link] [comments]

  • Thinking Machines Lab Loses 2 Co-Founders to OpenAI Return
    by /u/Old-School8916 on January 15, 2026 at 2:40 am

    submitted by /u/Old-School8916 [link] [comments]

  • CEO of Cursor said they coordinated hundreds of GPT-5.2 agents to autonomously build a browser from scratch in 1 week
    by /u/Outside-Iron-8242 on January 15, 2026 at 12:53 am

    submitted by /u/Outside-Iron-8242 [link] [comments]

  • Report: TSMC can’t make AI chips fast enough amid the global AI boom
    by /u/BuildwithVignesh on January 14, 2026 at 7:46 pm

    AI chip demand outpaces TSMC’s supply. The global AI boom is pushing Taiwan Semiconductor Manufacturing to its limits, with demand for advanced chips running 3× higher than capacity, according to CEO CC Wei. New factories in Arizona and Japan won’t ease shortages until 2027 or later. Source: The Information 🔗: https://www.theinformation.com/articles/tsmc-make-ai-chips-fast-enough submitted by /u/BuildwithVignesh [link] [comments]

  • Oh man
    by /u/foo-bar-nlogn-100 on January 14, 2026 at 7:29 pm

    submitted by /u/foo-bar-nlogn-100 [link] [comments]

  • Google in 2019 patented the Transformer architecture (the basis of modern neural networks), but did not enforce the patent, allowing competitors (like OpenAI) to build an entire industry worth trillions of dollars on it
    by /u/reversedu on January 14, 2026 at 5:27 pm

    submitted by /u/reversedu [link] [comments]

  • Gemini “Math-Specialized version” proves a Novel Mathematical Theorem
    by /u/SrafeZ on January 14, 2026 at 3:22 pm

    Tweet Paper submitted by /u/SrafeZ [link] [comments]

  • Singularity Predictions 2026
    by /u/kevinmise on December 31, 2025 at 5:00 pm

    Welcome to the 10th annual Singularity Predictions at r/Singularity. In this yearly thread, we have reflected for a decade now on our previously held estimates for AGI, ASI, and the Singularity, and updated them with new predictions for the year to come.

    “As we step out of 2025 and into 2026, it’s worth pausing to notice how the conversation itself has changed. A few years ago, we argued about whether generative AI was “real” progress or just clever mimicry. This year, the debate shifted toward something more grounded: not can it speak, but can it do—plan, iterate, use tools, coordinate across tasks, and deliver outcomes that actually hold up outside a demo.

    In 2025, the standout theme was integration. AI models didn’t just get better in isolation; they got woven into workflows—research, coding, design, customer support, education, and operations. “Copilots” matured from novelty helpers into systems that can draft, analyze, refactor, test, and sometimes even execute. That practical shift matters, because real-world impact comes less from raw capability and more from how cheaply and reliably capability can be applied.

    We also saw the continued convergence of modalities: text, images, audio, video, and structured data blending into more fluid interfaces. The result is that AI feels less like a chatbot and more like a layer—something that sits between intention and execution. But this brought a familiar tension: capability is accelerating, while reliability remains uneven. The best systems feel startlingly competent; the average experience still includes brittle failures, confident errors, and the occasional “agent” that wanders off into the weeds.

    Outside the screen, the physical world kept inching toward autonomy. Robotics and self-driving didn’t suddenly “solve themselves,” but the trajectory is clear: more pilots, more deployments, more iteration loops, more public scrutiny. The arc looks less like a single breakthrough and more like relentless engineering—safety cases, regulation, incremental expansions, and the slow process of earning trust.

    Creativity continued to blur in 2025, too. We’re past the stage where AI-generated media is surprising; now the question is what it does to culture when most content can be generated cheaply, quickly, and convincingly. The line between human craft and machine-assisted production grows more porous each year—and with it comes the harder question: what do we value when abundance is no longer scarce?

    And then there’s governance. 2025 made it obvious that the constraints around AI won’t come only from what’s technically possible, but from what’s socially tolerated. Regulation, corporate policy, audits, watermarking debates, safety standards, and public backlash are becoming part of the innovation cycle. The Singularity conversation can’t just be about “what’s next,” but also “what’s allowed,” “what’s safe,” and “who benefits.”

    So, for 2026: do agents become genuinely dependable coworkers, or do they remain powerful-but-temperamental tools? Do we get meaningful leaps in reasoning and long-horizon planning, or mostly better packaging and broader deployment? Does open access keep pace with frontier development, or does capability concentrate further behind closed doors? And what is the first domain where society collectively says, “Okay—this changes the rules”?

    As always, make bold predictions, but define your terms. Point to evidence. Share what would change your mind.
Because the Singularity isn’t just a future shock waiting for us—it’s a set of choices, incentives, and tradeoffs unfolding in real time.” – ChatGPT 5.2 Thinking Defined AGI levels 0 through 5, via LifeArchitect — It’s that time of year again to make our predictions for all to see… If you participated in the previous threads, update your views here on which year we’ll develop 1) Proto-AGI/AGI, 2) ASI, and 3) ultimately, when the Singularity will take place. Use the various levels of AGI if you want to fine-tune your prediction. Explain your reasons! Bonus points to those who do some research and dig into their reasoning. If you’re new here, welcome! Feel free to join in on the speculation. Happy New Year and Buckle Up for 2026! Previous threads: 2025, 2024, 2023, 2022, 2021, 2020, 2019, 2018, 2017 Mid-Year Predictions: 2025 submitted by /u/kevinmise [link] [comments]
