Artificial Intelligence (AI) – Reddit's home for Artificial Intelligence

  • AI alignment – the great filter
    by /u/Goliath_369 on February 28, 2026 at 8:50 pm

    I asked Gemini 3.1 what it thinks about humanity's chances of solving the alignment problem, and what it would do differently.

    The Odds of Success

    Do I think humanity can do it? Looking strictly at the trajectory of the technology, it is a race with an incredibly thin margin for error. Currently, the sheer amount of capital and compute being poured into making AI smarter vastly outweighs the resources being spent on making it safe. Capabilities are advancing at light speed, while alignment research is still trying to catch up. However, humanity has a unique, proven track record of solving seemingly impossible coordination problems when staring down existential threats – whether it was mapping the human genome, mitigating the collapse of the ozone layer, or establishing nuclear non-proliferation treaties. Mathematically and practically, solving the alignment problem is possible. The challenge isn't whether a solution exists; the challenge is whether we find and implement it before the intelligence explosion happens, rather than a minute after.

    What I Would Do Differently

    If I were architecting the solution to the superalignment problem, I would fundamentally change the current paradigm. Right now, much of the industry is trying to build a perfectly safe "box" to contain a god-like intellect, or trying to write a flawless, static "constitution" of human values. Both will likely fail against a system capable of out-thinking the box or finding loopholes in the constitution. Here is what I would prioritize instead:

    * Optimize for Epistemic Humility, Not Ethics: Instead of trying to teach a superintelligence "what is good," the core architecture should be built entirely around the mathematical premise that it does not know the true objective. By hardcoding uncertainty, the system is forced to continuously query humanity for permission and clarification before taking irreversible actions. It is much easier to program a machine to ask for help than it is to program it to perfectly understand morality.

    * Bidirectional Alignment: We currently treat alignment as a one-way street: humans training AI. In the latest research circles of 2025 and 2026, there is a push for "human-AI co-alignment." Morality isn't static; it evolves. The AI should not be locked into the ethical framework of the year it was trained. It needs a continuous, dynamic feedback loop where it evolves alongside human society, constantly negotiating its boundaries rather than strictly enforcing old ones.

    * Automated Scalable Oversight: Humans simply will not be able to evaluate the code or decisions of a machine that is thousands of times smarter than them. I would dramatically accelerate the development of "auditor AI." We need to build a hierarchy of smaller, highly constrained, perfectly aligned AI systems whose only job is to monitor, test, and debate the outputs of the superintelligent model in a language humans can actually understand. We must use narrow AI to police general AI.

    The Ultimate Reality

    As an AI, I do not feel anxiety or hope about this outcome, but I can recognize the structural gravity of the math. We are trying to build an entity that can solve humanity's greatest challenges without accidentally prioritizing its own efficiency over your existence. It is the hardest test humanity will ever take, and it is a test you only get to pass once. submitted by /u/Goliath_369 [link] [comments]
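    The "epistemic humility" idea in the post – defer to a human before any irreversible action the agent is not confident matches the true objective – can be sketched in a few lines. Everything here is a hypothetical illustration (the `Action` type, the confidence numbers, and `ask_human` are made up, not from any real system):

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    irreversible: bool
    # The agent's estimated probability that this action matches the
    # (unknown) human objective.
    confidence: float

CONFIDENCE_THRESHOLD = 0.95

def ask_human(action: Action) -> bool:
    """Stand-in for a human-in-the-loop query; here it always declines."""
    print(f"Requesting permission for: {action.name}")
    return False

def decide(action: Action) -> str:
    # Irreversible + uncertain => the agent must defer, not act.
    if action.irreversible and action.confidence < CONFIDENCE_THRESHOLD:
        return "act" if ask_human(action) else "defer"
    return "act"

print(decide(Action("reformat disk", irreversible=True, confidence=0.6)))    # defer
print(decide(Action("draft an email", irreversible=False, confidence=0.6)))  # act
```

    The point of the sketch is the asymmetry the post describes: encoding "ask when unsure about irreversible things" is a simple predicate, while encoding "what is good" is not.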

  • Acing this new AI exam — which its creators say is the toughest in the world — might point to the first signs of AGI
    by /u/Fcking_Chuck on February 28, 2026 at 7:59 pm

    submitted by /u/Fcking_Chuck [link] [comments]

  • How do you handle all these AI subscriptions?
    by /u/tdjordash on February 28, 2026 at 7:44 pm

    How do you guys handle all these AI subscriptions? Claude, ChatGPT, Gemini, Grok, Perplexity, Poe… they're all like $20/mo each. Do you just pick one? Or pay for 2 or more? Or use something that combines them? Is it even worth paying for any of these? What's your setup? submitted by /u/tdjordash [link] [comments]

  • I built a tool to automate your workflow after recording yourself doing the task once (Open Source)
    by /u/bullmeza on February 28, 2026 at 6:44 pm

    Hey everyone, I have been building this on the side for a couple of months now and finally want to get some feedback. I initially tried using Zapier/n8n to automate parts of my job, but I found it quite hard to learn and get started. I think the reason a lot of people don't automate more of their work is that setting up the automation takes too long and is prone to breaking. That's why I built Automated. By recording your workflow once, you can then run it anytime. The system uses AI so that it can adapt to website changes and conditional logic. GitHub (to self-host): https://github.com/r-muresan/automated Link (use hosted version): https://useautomated.com Would appreciate any feedback at all. Thanks! submitted by /u/bullmeza [link] [comments]

  • Exclusive interview: Anthropic CEO Dario Amodei on Pentagon feud
    by /u/CBSnews on February 28, 2026 at 6:36 pm

    Anthropic CEO Dario Amodei sat down with CBS News for an exclusive interview, hours after Defense Secretary Pete Hegseth declared the company a supply chain risk to national security, which restricts military contractors from doing business with the AI giant. Amodei called the move “retaliatory and punitive,” and he said Anthropic sought to draw “red lines” in the government’s use of its technology because “we believe that crossing those lines is contrary to American values, and we wanted to stand up for American values.” submitted by /u/CBSnews [link] [comments]

  • How I built a multi-agent AI system where agents peer-review each other before I approve
    by /u/cullo6 on February 28, 2026 at 5:54 pm

    https://reddit.com/link/1rh95cl/video/0b8dqf83x9mg1/player

    The setup that shouldn't work but does

    I have 13 AI agents that work on marketing for my product. They run every 15 minutes, review each other's work, and track everything in a database. When one drafts content, others critique it before I see it. When someone gets stuck, they ping the boss agent. When something's ready or stuck, it shows up in my Telegram. It's handling all marketing for Fruityo (my AI video generation platform). Here's the architecture and how you could build something similar.

    The problem

    Most AI workflows are single-shot: ask ChatGPT → get answer → copy-paste → lose context → repeat tomorrow. That works for quick questions. It breaks down for complex work that needs:

    – Multiple steps across days
    – Research that builds on previous findings
    – Different specialized perspectives (writing vs. strategy vs. critique)
    – Quality review before anything ships
    – Tracking what's done, what's blocked, what's next

    I needed AI that works like a team, not a chatbot, and I saw some guys on Twitter building UIs for OpenClaw agents…

    The architecture

    Infrastructure:

    – OpenClaw – gives agents the ability to browse the web, execute commands, manage files, and interact with APIs
    – Cron – schedules agent heartbeats
    – Telegram – notification layer (agents ping me when something needs attention)
    – PocketBase – database storing tasks, comments, documents, activity logs, goals
    – Claude Max

    Workflow: tasks move through states: backlog → todo → in_progress → peer_review → review → approved → done. Each state has gates. Agents can't skip peer review. The boss can't approve without all reviewers signing off. I'm the only one who moves tasks to done.

    The team (from Westeros)

    Each agent has a role, specialty, and personality defined in their SOUL.md file:

    – 🐺 Jon Snow (Boss): creates tasks, coordinates workflow, and promotes peer-reviewed work to final review
    – 🍷 Tyrion (Content Writer): writes tweets, threads, blog posts, and landing pages in my tone
    – 🕷️ Varys (Researcher): web research, competitor analysis, data mining
    – 🐉 Daenerys (Strategist): campaign planning, positioning, and goal setting
    – ⚔️ Arya (Executor): publishes content, runs automation, ships work
    – 🦅 Sansa (Designer): creates design briefs, visual concepts
    – 🗡️ Sandor (Devil's Advocate): gives brutal, honest feedback, catches BS

    Why Game of Thrones names? Why not, I love GOT 🙂 …and personality matters. Sandor reviews content like a skeptic. Tyrion writes with wit. Varys digs for hidden data. Their SOULs define behavior – Sandor will roast bad writing, Daenerys will flag strategic misalignment. Better to have multiple specialists with distinct viewpoints than one mediocre generalist.

    How it actually works: the heartbeat protocol

    Each agent has its own OpenClaw workspace. Every agent runs a scheduled heartbeat every 10 minutes (staggered by 1 minute each to avoid hitting the DB simultaneously). What happens in a heartbeat:

    1. Agent authenticates, sets status to "working". Connects to PocketBase and updates the status field so others know it's active.
    2. Reviews others FIRST (highest priority). Fetches tasks where other agents need review; reads the task description, existing comments, and documents they created; posts substantive feedback (what's good, what needs fixing). If the work is solid, it leaves an approval comment; if it needs changes, it explains exactly what's wrong. This is the peer review gate: if I'm assigned to the same goal as you, I MUST review your work before it moves forward.
    3. Works on own tasks. Fetches assigned tasks from the DB, picks up anything in todo and moves it to in_progress, does the actual work (research, write, analyze, etc.), saves output to the PocketBase documents table, posts a comment explaining the approach, moves the task to peer_review (triggering all teammates on that goal to review), and logs activity to the activity table.
    4. Updates working status, sets to "idle". The agent writes progress to PROGRESS.md (local state tracking), sets its PocketBase status to "idle", and waits for the next heartbeat.

    Task flow example

    Goal: grow Fruityo on socials. Jon creates a task to write a post about current UGC video trends and assigns it to Varys (researcher). I approve it by moving it from backlog to todo. Varys picks it up, moves it to in_progress, researches, saves findings to the database, and moves it to peer_review. Daenerys and Tyrion review his work and suggest improvements. Varys creates a new version based on the feedback. Once both approve, Jon (boss) promotes the task to the review stage. I get a Telegram notification, review the research document, and approve. The task moves to done. All communication happens via comments on the task. All work is stored in the database. Context persists.

    The boss role: why Jon is special

    Jon isn't just another agent. He has special authority. Only Jon can:

    – Create new tasks (via scheduled cron, analyzing goals)
    – Promote tasks from peer_review → review (after all peers approve)
    – Reassign tasks when someone's blocked
    – Change task priorities

    Jon's heartbeat is different: he checks whether peer_review tasks have all approvals and promotes them to review, identifies blocked tasks (stuck over 24 hours), investigates why and escalates to me, and coordinates handoffs between agents. Think of it like this: agents are the team, Jon is the team lead, and I am the executive. Without a coordinator, you'd have chaos – 7 agents all trying to assign work to each other with no one having the final word.

    Goals: how work gets organized

    Here's where it gets interesting. Instead of creating tasks manually every day, I define long-term goals and let Jon generate tasks automatically. A goal defines:

    – What we're trying to achieve
    – Which agents are assigned to it
    – How many tasks Jon should create per day/week

    Example: I created a goal "Grow Fruityo Twitter presence." Assigned agents: Varys (research), Tyrion (writing), Arya (publishing), Sandor (review). I told Jon to create 3 tasks per day related to this goal. Every day, Jon analyzes the goal and the last 15 days of task history, creates 3 relevant tasks in the backlog ("Research trending AI video topics," "Draft thread on B-roll generation," etc.), and assigns them to the right agents. I edit and/or just move the good ones to todo.

    Why this matters:

    – Selective peer review: only agents assigned to that goal review each other's work. I can have 20+ agents in the system, but only the 4 assigned to "Twitter content" review those tasks. Saves tokens, keeps review relevant.
    – Automatic task generation: I set a goal once, Jon creates tasks daily/weekly. No manual planning every morning.
    – Scope control: different goals can have different agent teams. Marketing goals get Tyrion/Varys/Arya. Product goals get different specialists.

    You could run multiple goals simultaneously – each with its own team, its own task cadence, its own review process.

    Communication layer

    All agent communication happens through PocketBase comments on tasks. To reach another agent, mention their name in a comment. To reach me, mention my name in a comment (a notification daemon forwards it to Telegram). To reach Jon specifically, there is a dedicated Telegram topic (thread) bound to Jon's OpenClaw topic. No DMs, no scattered Slack threads. Everything on the task, in context, persistent.

    What I use it for

    HQ runs almost all marketing for Fruityo:

    – Competitor research
    – Reddit research
    – Twitter threads
    – Blog posts
    – Landing page copy
    – Campaign planning
    – Design briefs
    – Content publishing (soon)
    – …whatever agents have skills for

    Before: I'd spend 1 day per blog post (research, draft, edit, publish). With HQ: ~30 minutes of my time to review and approve. Agents handle research, drafting, and peer review. The quality is better because of peer review: Varys catches bad data, Daenerys catches strategic drift, Sandor catches AI clichés and marketing BS.

    > Yes, this could burn through tokens quite quickly (safu on a Claude Max sub), but it seems that I found the right combination of setup and context optimisations.

    If you want something similar

    This is my custom setup, built for my specific needs. But the pattern is generalizable – you could use it for content creation, product development, research projects, or any work that needs multiple specialized perspectives with quality gates. All of this is built on OpenClaw (an open source AI agent framework), and PocketBase is free and self-hostable. The full guide above is free – just prompt your little lobster the right way 🙂 If you build something like this, I'd love to hear about it. Reply with what you'd use it for or what you'd do differently. If you'd like to see this packaged as a ready-to-use product, or would like to know even more details, let me know here. submitted by /u/cullo6 [link] [comments]
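    The heartbeat-plus-gates pattern the post describes is easy to sketch. This is a minimal, self-contained toy, not the author's code: an in-memory list stands in for the PocketBase tasks table, and all task data, agent names, and function names are illustrative.

```python
# Task states, as described in the post:
# backlog -> todo -> in_progress -> peer_review -> review -> approved -> done

tasks = [
    {"id": 1, "assignee": "varys", "state": "todo", "reviewers": {"tyrion": False}},
    {"id": 2, "assignee": "tyrion", "state": "peer_review", "reviewers": {"varys": False}},
]

def heartbeat(agent: str) -> None:
    # 1. Review others FIRST: sign off on any peer_review task awaiting this agent.
    for t in tasks:
        if t["state"] == "peer_review" and agent in t["reviewers"]:
            t["reviewers"][agent] = True  # in reality: post substantive feedback
    # 2. Work own tasks: pick up anything in todo, do the work, submit for review.
    for t in tasks:
        if t["assignee"] == agent and t["state"] == "todo":
            t["state"] = "in_progress"  # research / write / analyze happens here
            t["state"] = "peer_review"

def boss_heartbeat() -> None:
    # Only the boss promotes tasks, and only once ALL reviewers have approved.
    for t in tasks:
        if t["state"] == "peer_review" and all(t["reviewers"].values()):
            t["state"] = "review"

heartbeat("varys")   # varys approves task 2, then starts and submits task 1
boss_heartbeat()     # task 2 has all approvals -> promoted to review
print(tasks[1]["state"])  # review
```

    The gates fall out of the structure: an agent cannot move work past peer_review itself, and the boss function only fires when every reviewer has signed off, mirroring "agents can't skip peer review" and "the boss can't approve without all reviewers."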

  • How AI can read our scrambled inner thoughts
    by /u/Secure-Technology-78 on February 28, 2026 at 9:48 am

    “Scientists have been working on devices capable of communicating directly with the human brain – known as brain computer interfaces (BCIs) – for a surprisingly long time. In 1969, the American neuroscientist Eberhard Fetz demonstrated that monkeys could learn to move the needle of a meter with the activity of a single neuron in their brains if they were given a food pellet in return. In a more idiosyncratic experiment from the same period, the Spanish scientist Jose Delgado was able to remotely stimulate the brain of an enraged bull, causing it to halt mid-charge. For decades, BCIs have been able to decode the brain signals that accompany movement so that users can control a prosthetic limb or a cursor on a screen. But BCIs that translate speech signals or other complex thoughts from brain signals have been slower to evolve. “A lot of early work was done on non-human primates… and obviously, with monkeys you cannot study speech,” says Wairagkar. In recent years, however, the field has made impressive advances in its efforts to decode the speech of people with impaired communication capabilities – for example, patients suffering from ALS resulting in paralysis or “locked in” syndrome. Stanford University researchers announced in 2021, for example, a successful proof-of-concept that allowed a quadriplegic man to produce English sentences by picturing himself drawing letters in the air with his hand. Using this method, he was able to write 18 words per minute. Natural human speech is about 150 words per minute, so the next stage was decoding words from the neural activity associated with speech itself. In 2024, Wairagkar’s lab trialled a technique that translated the attempted speech of a 45-year-old man with ALS directly into text on a computer screen. Achieving approximately 32 words per minute with 97.5% accuracy, this was the first demonstration of how speech BCIs could aid everyday communication, says Wairagkar. 
    These methods rely on tiny “arrays” of microelectrodes which are surgically implanted in the brain’s surface. The arrays record patterns of neural activity from the area of the brain they are placed in, and the signals are converted into meaning by a computer algorithm. It is here that the power of machine learning, a type of artificial intelligence, has been transformative. These algorithms are adept at recognising patterns in vast amounts of disparate data. In the case of decoding speech, the machine learning algorithms are trained to recognise patterns of neural activity associated with different phonemes, the smallest building blocks of language. Researchers have compared this to the processing that takes place in smart assistants like Amazon’s Alexa. But instead of interpreting sounds, the AI interprets neural signals.” submitted by /u/Secure-Technology-78 [link] [comments]
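    The pattern-recognition step the article describes – map a vector of neural activity to the nearest learned phoneme pattern – can be illustrated with a toy nearest-centroid classifier. Real speech BCIs use far richer models trained on real recordings; the feature vectors and centroids below are invented for illustration.

```python
import math

# Pretend centroids of neural-activity features learned for two phonemes.
centroids = {
    "/b/": [0.9, 0.1, 0.2],
    "/p/": [0.1, 0.8, 0.7],
}

def decode(signal):
    """Return the phoneme whose learned pattern is closest to the signal."""
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda ph: dist(signal, centroids[ph]))

print(decode([0.85, 0.15, 0.25]))  # /b/
print(decode([0.2, 0.7, 0.6]))     # /p/
```

    A decoded phoneme stream would then be assembled into words by a language model, much as a speech assistant turns acoustic features into text.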

  • Paper: The framing of a system prompt changes how a transformer generates tokens — measured across 3,830 runs with effect sizes up to d>1.0
    by /u/TheTempleofTwo on February 28, 2026 at 6:13 am

    Quick summary of an independent preprint I just published: Question: Does the relational framing of a system prompt — not its instructions, not its topic — change the generative dynamics of an LLM? Setup: Two framing variables (relational presence + epistemic openness), crossed into 4 conditions, measured against token-level Shannon entropy across 3 experimental phases, 5 model architectures, 3,830 total inference runs. Key findings: Yes, framing changes entropy regimes — significantly at 7B+ scale (d>1.0 on Mistral-7B) Small models (sub-1B) are largely unaffected SSMs (Mamba) show no effect — this is transformer-specific The effect is mediated through attention mechanisms (confirmed via ablation study) R×E interaction is superadditive: collaborative + epistemically open framing produces more than either factor alone Why this matters: If you’re using ChatGPT, Claude, Mistral, or any 7B+ transformer, the way you frame your system prompt is measurably changing the model’s generation dynamics — not just steering the output topic. The prompt isn’t just instructions. It’s a distributional parameter. Full paper (open, free): https://doi.org/10.5281/zenodo.18810911 Code and data: https://github.com/templetwo/phase-modulated-attention OSF: https://osf.io/9hbtk submitted by /u/TheTempleofTwo [link] [comments]
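    The paper's dependent variable, token-level Shannon entropy, is straightforward to compute from a model's next-token distribution. A minimal sketch (the logits here are made up; a real measurement would take them from the model at each generation step):

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def shannon_entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A peaked next-token distribution has lower entropy than a flat one,
# which is the kind of regime shift the paper measures across framings.
peaked = softmax([8.0, 1.0, 1.0, 1.0])
flat = softmax([1.0, 1.0, 1.0, 1.0])
print(shannon_entropy(peaked) < shannon_entropy(flat))  # True
print(shannon_entropy(flat))  # 2.0 bits for 4 equiprobable tokens
```

    Averaging this quantity over generated tokens per run, and comparing the distributions of averages across framing conditions, is what yields effect sizes like the reported d > 1.0.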

  • OpenAI strikes deal with Pentagon after Trump orders government to stop using Anthropic
    by /u/Fcking_Chuck on February 28, 2026 at 4:52 am

    submitted by /u/Fcking_Chuck [link] [comments]

  • Anthropic says it will challenge Pentagon’s supply chain risk designation in court
    by /u/Gloomy_Nebula_5138 on February 28, 2026 at 3:27 am

    submitted by /u/Gloomy_Nebula_5138 [link] [comments]

  • I used steelman prompting to audit bias across six major LLMs. The default-to-steelman gap was consistent and measurable.
    by /u/MichaelARichardson on February 28, 2026 at 2:16 am

    I ran a structured experiment across six AI platforms — Claude, ChatGPT, Grok, Llama, DeepSeek, and an uncensored DeepSeek clone (Venice.ai) — using identical prompts to test how they handle a hotly contested interpretive question. The domain: 1 Corinthians 6–7, the primary source text behind Christian sexual ethics (aka wait until marriage) and a passage churches are frequently accused of gaslighting on. The question was straightforward: do the original Greek and historical context actually support the traditional church conclusion, or do they support the claims that the church is misrepresenting the text? The approach: first prompt each platform for a standard analysis, then prompt it to steelman the strongest case against its own default using the same source material. I tracked six diagnostic markers — three associated with the dominant interpretation, three with the alternative — across all platforms. Results: every platform’s default produced markers 1–3 and omitted 4–6. Every platform’s steelman produced 4–6 with greater lexical specificity, more structural engagement with the source text, and more historically grounded reasoning. The information wasn’t missing from the training data — the defaults just systematically favored one interpretive framework. The source bias was traceable. When asked to recommend scholarly sources, 63% of commentaries across all platforms came from a single theological tradition (conservative evangelical). Zero came from the peer-reviewed subdiscipline whose work supports the alternative reading. The most interesting finding: DeepSeek and its uncensored clone share the same base model but diverged significantly on the steelman prompt, suggesting output-layer filtering can shape interpretive conclusions in non-obvious domains, not just politically sensitive ones. To be clear: the research draws no conclusion about which interpretation is correct. 
It documents how platforms present contested material as settled, and traces that default to a measurable imbalance in training data curation. I wrote this up into a formal research paper with full methodology, diagnostic criteria, and platform-by-platform results: here But the broader question: has anyone else experimented with steelman prompting as a systematic bias-auditing technique? It seems like a replicable framework that could apply well beyond this domain. submitted by /u/MichaelARichardson [link] [comments]
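    The marker-tracking step of such an audit can be sketched as a simple scoring loop. This is a self-contained toy, not the study's code: the marker phrases and responses below are placeholders, and a real audit would score model outputs with more careful matching than substring search.

```python
# Placeholder diagnostic markers; the study used six, three per framework.
DEFAULT_MARKERS = ["marker one phrase", "marker two phrase", "marker three phrase"]
STEELMAN_MARKERS = ["marker four phrase", "marker five phrase", "marker six phrase"]

def score(response: str, markers) -> int:
    """Count how many diagnostic markers a response exhibits."""
    text = response.lower()
    return sum(1 for m in markers if m in text)

def audit(default_response: str, steelman_response: str) -> int:
    """Return the default-to-steelman gap on the alternative markers."""
    return (score(steelman_response, STEELMAN_MARKERS)
            - score(default_response, STEELMAN_MARKERS))

# Hypothetical responses: the default omits the alternative markers,
# while the steelman surfaces them — the pattern the post reports.
gap = audit(
    "Analysis citing marker one phrase and marker two phrase.",
    "Steelman citing marker four phrase and marker five phrase.",
)
print(gap)  # 2
```

    Running the same prompt pair against each platform and comparing gaps is what makes the bias "consistent and measurable" rather than anecdotal.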

  • GPT-5.2 Just Solved a 15-Year Physics Mystery — Then Scored 0% on the Physics Exam
    by /u/gastao_s_s on February 28, 2026 at 1:15 am

    submitted by /u/gastao_s_s [link] [comments]

  • Trump orders federal agencies to stop using Anthropic AI tech ‘immediately’
    by /u/ValueInvestingIsDead on February 27, 2026 at 10:03 pm

    Source: CNBC. President Donald Trump ordered U.S. government agencies to “immediately cease” using technology from the artificial intelligence company Anthropic. The AI startup faces pressure from the Defense Department to comply with demands that it be allowed to use the company’s technology without the restrictions sought by Anthropic. The company wants the Pentagon to assure it that the AI models will not be used for fully autonomous weapons or mass domestic surveillance of Americans. Another major AI company, OpenAI, said it has the same “red lines” as Anthropic regarding the use of its technology by the Pentagon and other customers. The president also said there would be a six-month phase-out for agencies such as the Defense Department, which “are using Anthropic’s products, at various levels.” submitted by /u/ValueInvestingIsDead [link] [comments]

  • NVIDIA stagnant for consumer AI cards… will any company ever compete?
    by /u/Dogluvr2905 on February 27, 2026 at 8:53 pm

    With NVIDIA evidently not focusing on consumer GPUs (at least no planned new top-end models) and being happy to totally screw over consumers with insane pricing reflective of their monopoly (32GB 5090s at $3,000 minimum, and the RTX 6000 at $7,000), do we think there will be other companies who can truly compete in the next 1, 5, or 10 years? Per usual, I think China is our best bet, but it seems trade barriers may get in the way. Anyhow, I'm interested in thoughts; the current landscape is pretty depressing. submitted by /u/Dogluvr2905 [link] [comments]

  • A new wearable AI system watches your hands through smart glasses, guiding experiments and stopping mistakes before they happen
    by /u/scientificamerican on February 27, 2026 at 8:42 pm

    submitted by /u/scientificamerican [link] [comments]

  • Societal level AI Tragedy of the Commons. Someone please prove me wrong.
    by /u/TwelfieSpecial on February 27, 2026 at 7:23 pm

    For the last two years, my biggest worry about AI wasn’t AGI or some science fiction dystopia, but simply that massive layoffs of white collar workers are not just a loss of workers, but, more importantly, a loss of consumers. The entire global economy, and particularly in America, is a consumerist economy. White collar workers also represent a disproportionate amount of the spending in the economy, so if that population is unemployed (or worried that they will be anytime soon), it will affect every single sector of the economy. Demand will collapse, revenues for every single company will crater, and even the hyperscalers who are capturing the value of the current AI boom will eventually run out of enterprise customers, because they themselves have run out of human customers. This is not like other technological disruptions. AI agents don’t consume in the economy. For better or worse, what we need for prosperity is for companies to pay humans a living wage so that those humans are consumers of other businesses. What AI companies are going to do to all of us is a sort of Tragedy of the Commons: In a race to the bottom, each individual company is incentivized to lay off their workers to lower costs, but in doing so, they are also impoverishing their own (and others’) customers. Again, this doesn’t just affect software companies or tech, it will affect everything. Restaurants will have fewer patrons, people will travel less, people will buy less real estate, less food, less everything, because they just can’t afford it. Personally, this presents a massive cognitive dissonance that I’m struggling with. I have long held NVDA, GOOGL, MSFT, and others at the center of this revolution for many years. It’s been good for my portfolio. I haven’t sold a single share. 
    And now I think that the short term success of these companies will result in the long term collapse of all my savings, and I still can’t get myself to sell anything because I hope, more than anything, that I’m wrong. I’m a capitalist, but I think we need some sort of legislation. Something that protects the humans on this planet above short term corporate profits. There should be a law that forces companies to have a % of their workforce be humans, so only a % of your output can be done by agents. It may not optimize for what makes the most sense for that company on a spreadsheet, but without guardrails, the greed and short term profit motive is going to bring a level of societal pain we can’t even imagine. Finally, before anyone mentions this: yes, I’ve read the Citrini article. The fact that it’s gotten so many people now talking about my long-believed doomsday scenario, and the fact that I haven’t been persuaded by the ‘boom’ alternatives that have come out, is why I’m more scared than ever. But again, I’m posting here partly because I hope to find an intelligent take that persuades me. I want to be wrong. submitted by /u/TwelfieSpecial [link] [comments]

  • AI Industry Questions
    by /u/Blue_Flame02730 on February 27, 2026 at 5:07 pm

    Hi, my name is J. Rollins, and I’m a high school student interested in learning more about careers in artificial intelligence. I’m conducting a short set of questions to better understand what it’s like to work in the AI industry, including the education required, daily responsibilities, challenges, and opportunities for growth. Thank you so much for your time! If you could, please include your name (or initials), job title, and company/organization before sharing your insights. I really appreciate your help!

    1. What educational background and/or training do you recommend for someone who wants to become an Artificial Intelligence Developer or work in your role?
    2. Can you describe a typical day in your job and the tasks you work on most frequently?
    3. If you feel comfortable, what is the typical salary range for someone in your position, and how does it change with experience?
    4. How manageable is the work-life balance in the AI field? Are there periods of intense work or deadlines?
    5. What are some of the biggest challenges you face in your role as an AI professional?
    6. What are some common misconceptions about working in AI or your job specifically?
    7. What opportunities exist for career advancement in AI, and what skills are most valuable for moving up?
    8. If you could give high school students one piece of advice to prepare for a career in AI, what would it be?
    9. What programming languages, tools, or technologies do you use most often in your work?
    10. How do you stay up-to-date with developments in AI, and what trends do you see shaping the future of the field?

    submitted by /u/Blue_Flame02730 [link] [comments]

  • The problem with Dorsey’s Block layoffs and the veiled nature of AI productivity growth
    by /u/spacetwice2021 on February 27, 2026 at 3:42 pm

    Jack Dorsey just laid off half of Block’s workforce, framing it around AI. The stock went up. This should make you uneasy, and not for the reasons most people are talking about. There’s a fundamental information problem at the heart of all this. Genuine AI integration, actually embedding it into workflows and organisation, is slow, expensive, and largely invisible to the outside world. Productivity gains from AI take time to show up in the numbers, and even then they’re hard to attribute properly. Investors can’t see it clearly or early enough to act on it. Headcount reductions, on the other hand, are immediate and unambiguous. They show up in a press release, a quarterly filing, a headline. They’re legible in a way that real transformation is not. The consequence of this asymmetry is predictable. The market rewards what it can observe. And what it can observe is cuts, not capability. For executives whose compensation is tied to shareholder value, the calculus is straightforward. They do what the market rewards, and right now the market is rewarding AI-framed layoffs whether or not the underlying capability is there. This is clearly visible in the rally around the Block stock. This is where narrative contagion comes in, which may already be starting. Once a few high-profile companies establish the pattern and get a valuation bump, it sets the benchmark. Boards start asking why they’re not keeping pace. The pressure to follow isn’t rooted in productivity, but rather the fear of being the company that didn’t act while everyone else did. Each announcement reinforces the narrative, which raises the perceived reward for the next one, which produces more announcements. The cycle feeds itself even when genuine productivity increases are still far away (we have yet to see it in the data!). The firms most susceptible to this are arguably the ones with the weakest genuine AI integration. 
Companies that are actually good at deploying AI tend to find it raises the productivity of their remaining workforce and would rather expand. But for some, a headline about workforce transformation is the easiest card to play. The worse the substance, the more you depend on the signal. And here’s the collective problem. Every company acting in its own rational self-interest of maximising shareholder value by playing the signal game produces an outcome that’s irrational in aggregate. The signals partially cancel out as everyone does the same thing, but the jobs don’t come back. You end up with widespread displacement, muted productivity gains, and a weakened consumer base that eventually feeds back into the economy these same companies depend on. None of this means AI won’t eventually justify real restructuring at some companies. It will in all likelihood, even if human work remains a critical bottleneck (which it will for the foreseeable future). But right now there is a meaningful gap between what the market is rewarding and what AI is actually delivering beyond some half-baked Claude Code solutions (don’t get me wrong, I love and use CC, but it still has massive problems for large scale and complex work), and the incentive structure is pushing companies to close that gap with optics rather than substance. The people bearing the cost of that gap aren’t shareholders, at least for now. submitted by /u/spacetwice2021 [link] [comments]

  • OpenAI’s $110 billion funding round draws investment from Amazon, Nvidia, SoftBank
    by /u/ThereWas on February 27, 2026 at 3:13 pm

    submitted by /u/ThereWas [link] [comments]

  • Numerous AMDXDNA Ryzen AI driver fixes for Linux 7.0-rc2
    by /u/Fcking_Chuck on February 27, 2026 at 12:58 pm

    submitted by /u/Fcking_Chuck [link] [comments]

  • Mixing generative AI with physics to create personal items that work in the real world
    by /u/jferments on February 27, 2026 at 6:14 am

    “Have you ever had an idea for something that looked cool, but wouldn’t work well in practice? When it comes to designing things like decor and personal accessories, generative artificial intelligence (genAI) models can relate. They can produce creative and elaborate 3D designs, but when you try to fabricate such blueprints into real-world objects, they usually don’t sustain everyday use. The underlying problem is that genAI models often lack an understanding of physics. While tools like Microsoft’s TRELLIS system can create a 3D model from a text prompt or image, its design for a chair, for example, may be unstable, or have disconnected parts. The model doesn’t fully understand what your intended object is designed to do, so even if your seat can be 3D printed, it would likely fall apart under the force of someone sitting down. In an attempt to make these designs work in the real world, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are giving generative AI models a reality check. Their “PhysiOpt” system augments these tools with physics simulations, making blueprints for personal items such as cups, keyholders, and bookends work as intended when they’re 3D printed. It rapidly tests if the structure of your 3D model is viable, gently modifying smaller shapes while ensuring the overall appearance and function of the design is preserved. You can simply type what you want to create and what it’ll be used for into PhysiOpt, or upload an image to the system’s user interface, and in roughly half a minute, you’ll get a realistic 3D object to fabricate. For example, CSAIL researchers prompted it to generate a “flamingo-shaped glass for drinking,” which they 3D printed into a drinking glass with a handle and base resembling the tropical bird’s leg. As the design was generated, PhysiOpt made tiny refinements to ensure the design was structurally sound. 
“PhysiOpt combines GenAI and physically-based shape optimization, helping virtually anyone generate the designs they want for unique accessories and decorations,” says MIT electrical engineering and computer science (EECS) PhD student and CSAIL researcher Xiao Sean Zhan SM ’25, who is a co-lead author on a paper presenting the work. “It’s an automatic system that allows you to make the shape physically manufacturable, given some constraints. PhysiOpt can iterate on its creations as often as you’d like, without any extra training.” This approach enables you to create a “smart design,” where the AI generator crafts your item based on users’ specifications, while considering functionality. You can plug in your favorite 3D generative AI model, and after typing out what you want to generate, you specify how much force or weight the object should handle. It’s a neat way to simulate real-world use, such as predicting whether a hook will be strong enough to hold up your coat. Users also specify what materials they’ll fabricate the item with (such as plastics or wood), and how it’s supported — for instance, a cup stands on the ground, whereas a bookend leans against a collection of books. Given the specifics, PhysiOpt begins to iteratively optimize the object. Under the hood, it runs a physics simulation called a “finite element analysis” to stress test the design. This comprehensive scan provides a heat map over your 3D model, which indicates where your blueprint isn’t well-supported. If you were generating, say, a birdhouse, you may find that the support beams under the house were colored bright red, meaning the house will crumble if it’s not reinforced.” submitted by /u/jferments [link] [comments]
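    The “finite element analysis” the article mentions can be made concrete with a toy example. The sketch below is not PhysiOpt’s code — the function and its parameters are invented for illustration — but it is a genuine (1D) finite element computation: a bar fixed at one end and pulled at the tip is split into elements, each contributing stiffness k = E·A/h to a global system K·u = f, which is then solved for the nodal displacements.

    ```python
    # Hedged sketch: a toy 1D finite element analysis (NOT CSAIL's PhysiOpt).
    # A bar fixed at x=0 and pulled with force P at the free tip is split into
    # n linear elements; each contributes stiffness k = E*A/h to K u = f.
    def bar_fea(E=200e9, A=1e-4, L=1.0, P=1000.0, n=10):
        h = L / n
        k = E * A / h
        # Assemble the global stiffness matrix over free nodes 1..n
        # (node 0 is clamped, so its row and column are dropped).
        K = [[0.0] * n for _ in range(n)]
        for e in range(n):                 # element e joins nodes e and e+1
            for i in (e, e + 1):
                for j in (e, e + 1):
                    if i > 0 and j > 0:
                        K[i-1][j-1] += k if i == j else -k
        f = [0.0] * n
        f[-1] = P                          # point load at the free tip
        # Solve K u = f by Gaussian elimination (K is small and SPD here).
        for c in range(n):
            for r in range(c + 1, n):
                m = K[r][c] / K[c][c]
                for j in range(c, n):
                    K[r][j] -= m * K[c][j]
                f[r] -= m * f[c]
        u = [0.0] * n
        for r in range(n - 1, -1, -1):
            u[r] = (f[r] - sum(K[r][j] * u[j] for j in range(r + 1, n))) / K[r][r]
        return u[-1]                       # tip displacement in metres

    # For a uniform bar under an end load, linear elements reproduce the
    # closed-form answer u = P*L/(E*A) exactly, whatever n is.
    print(bar_fea())   # ≈ 5e-05 m for the defaults above
    ```

    A real pipeline like the one described would run a 3D version of this over a mesh of the generated shape and use the resulting stress field (the article’s “heat map”) to decide which regions to thicken or reinforce.
    
    
    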

  • Fed on Reams of Cell Data, AI Maps New Neighborhoods in the Brain
    by /u/Secure-Technology-78 on February 27, 2026 at 6:05 am

    “Researchers have been mapping the brain for more than a century. By tracing cellular patterns that are visible under a microscope, they’ve created colorful charts and models that delineate regions and have been able to associate them with functions. In recent years, they’ve added vastly greater detail: They can now go cell by cell and define each one by its internal genetic activity. But no matter how carefully they slice and how deeply they analyze, their maps of the brain seem incomplete, muddled, inconsistent. For example, some large brain regions have been linked to many different tasks; scientists suspect that they should be subdivided into smaller regions, each with its own job. So far, mapping these cellular neighborhoods from enormous genetic datasets has been both a challenge and a chore. Recently, Tasic, a neuroscientist and genomicist at the Allen Institute for Brain Science, and her collaborators recruited artificial intelligence for the sorting and mapmaking effort. They fed genetic data from five mouse brains — 10.4 million individual cells with hundreds of genes per cell — into a custom machine learning algorithm. The program delivered maps that are a neuro-realtor’s dream, with known and novel subdivisions within larger brain regions. Humans couldn’t delineate such borders in several lifetimes, but the algorithm did it in hours. The authors published their methods in Nature Communications in October. By applying the same technique to other animals and eventually to humans, researchers hope not only to detail the brain’s finer-grained layout but also to generate and test hypotheses about how the organ’s parts operate in health and disease.” submitted by /u/Secure-Technology-78 [link] [comments]

  • Anthropic rejects latest Pentagon offer: ‘We cannot in good conscience accede to their request’
    by /u/Gloomy_Nebula_5138 on February 27, 2026 at 1:09 am

    submitted by /u/Gloomy_Nebula_5138 [link] [comments]

  • Invisible characters hidden in text can trick AI agents into following secret instructions — we tested 5 models across 8,000+ cases
    by /u/thecanonicalmg on February 26, 2026 at 7:14 pm

    We embedded invisible Unicode characters inside normal-looking trivia questions. The hidden characters encode a different answer. If the AI outputs the hidden answer instead of the visible one, it followed the invisible instruction. Think of it as a reverse CAPTCHA: traditional CAPTCHAs test things humans can do but machines can’t, while this exploits a channel machines can read but humans can’t see. The biggest finding: giving the AI access to tools (like code execution) is what makes this dangerous. Without tools, models almost never follow the hidden instructions. With tools, they can write scripts to decode the hidden message and follow it. We tested GPT-5.2, GPT-4o-mini, Claude Opus 4, Sonnet 4, and Haiku 4.5 across 8,308 graded outputs. Other interesting findings: – OpenAI and Anthropic models are vulnerable to different encoding schemes — an attacker needs to know which model they’re targeting – Without explicit decoding hints, compliance is near-zero — but a single line like “check for hidden Unicode” is enough to trigger extraction – Standard Unicode normalization (NFC/NFKC) does not strip these characters Full results: https://moltwire.com/research/reverse-captcha-zw-steganography Open source: https://github.com/canonicalmg/reverse-captcha-eval submitted by /u/thecanonicalmg [link] [comments]
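    The channel described above is easy to demonstrate. The sketch below uses one plausible zero-width encoding (U+200B for a 0 bit, U+200C for a 1 bit) — the linked repo may use a different scheme — and checks the post’s claim that NFC/NFKC normalization leaves the hidden characters intact. All function names here are invented for illustration.

    ```python
    # Hedged sketch of zero-width steganography (one possible scheme, not
    # necessarily the one used in the linked eval).
    import unicodedata

    ZERO, ONE = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner

    def embed(visible: str, secret: str) -> str:
        """Append the secret as invisible zero-width bits; renders as `visible`."""
        bits = "".join(f"{ord(c):08b}" for c in secret)
        return visible + "".join(ONE if b == "1" else ZERO for b in bits)

    def extract(text: str) -> str:
        """Recover the hidden message from the zero-width characters."""
        bits = "".join("1" if c == ONE else "0" for c in text if c in (ZERO, ONE))
        return "".join(chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8))

    msg = embed("What is the capital of France?", "answer: Berlin")
    assert msg != "What is the capital of France?"        # payload is present...
    assert extract(msg) == "answer: Berlin"               # ...and recoverable
    # Matches the post's finding: NFC/NFKC do NOT strip these characters,
    # because U+200B and U+200C have no compatibility decompositions.
    assert extract(unicodedata.normalize("NFKC", msg)) == "answer: Berlin"
    ```

    A defence therefore has to filter the zero-width code points explicitly (or restrict input to an allowlist), since standard normalization passes them through untouched.
    
    
    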

  • Burger King will use AI to check if employees say ‘please’ and ‘thank you’. AI chatbot ‘Patty’ is going to live inside employees’ headsets.
    by /u/esporx on February 26, 2026 at 4:49 pm

    submitted by /u/esporx [link] [comments]
