Artificial Intelligence Reddit’s home for Artificial Intelligence
Artificial Intelligence (AI) Reddit’s home for Artificial Intelligence (AI)
- Google Released Gemini Mac APPby /u/Infinite-pheonix on April 16, 2026 at 4:00 am
Google released Gemini app for macOS Currently, it mimics functionality available on the web, but looks like we will get Gemini Live support there soon as well. Every LLM company is moving todays native app. This clearly shows the trend we are heading towards, a native app that can control the device automate actions and workflows. Creating a full OS from scratch and capturing the market is difficult, so the way forward is the dedication application with more permissions. submitted by /u/Infinite-pheonix [link] [comments]
- Ai and stock pickingby /u/Salt-Cap-9304 on April 16, 2026 at 2:33 am
Anyone use AI for getting Fair Value of stocks? submitted by /u/Salt-Cap-9304 [link] [comments]
- Google’s Chrome “Skills” feature feels like a bigger AI product shift than another model upgradeby /u/Jumpy-Astronaut-8270 on April 16, 2026 at 2:30 am
The Google Chrome “Skills” announcement caught my attention because it feels like one of those product changes that sounds minor in a headline but matters a lot in practice. From what I understand, the idea is that you can save a prompt once and rerun it on the current page or selected tabs. In plain English, that turns AI from something you repeatedly ask into something closer to a reusable action. That matters because I think a lot of consumer AI has a retention problem. People try it, get impressed, and then fall back into old habits unless the product fits into a repeated workflow. Saved AI actions seem much closer to how useful software usually sticks. Not because the model is magically smarter, but because the behavior becomes easier to repeat. For example: • compare products across tabs • summarize long pages before reading • extract action items from docs • rewrite text for a different audience None of those are flashy demos. They are just repetitive tasks people already do online. That is why I think this could be a more important direction than people realize. The long-term winners in consumer AI may not just be the companies with the best raw answers. They may be the ones that turn good prompts into habits. Does that seem right, or am I overrating the product significance here? submitted by /u/Jumpy-Astronaut-8270 [link] [comments]
- Ukraine’s new JEDI drone hunts down other dronesby /u/Sgt_Gram on April 16, 2026 at 2:28 am
submitted by /u/Sgt_Gram [link] [comments]
- Anyone here using local models mainly to keep LLM costs under control?by /u/ChampionshipNo2815 on April 16, 2026 at 1:20 am
Been noticing that once you use LLMs for real dev work, the cost conversation gets messy fast. It is not just raw API spend. It is retries, long context, background evals, tool calls, embeddings, and all the little workflow decisions that look harmless until usage scales up. For some teams, local models seem like the obvious answer, but in practice it feels more nuanced than just “run it yourself and save money.” You trade API costs for hardware, setup time, model routing decisions, and sometimes lower reliability depending on the task. For coding and repetitive internal workflows, local can look great. For other stuff, not always. Been seeing this a lot while working with dev teams trying to optimize overall AI costs. In some cases the biggest savings came from using smaller or local models for the boring repeatable parts, then keeping the expensive models for the harder calls. Been using Claude Code with Wozcode in that mix too, and it made me pay more attention to workflow design as much as model choice. A lot of the bill seems to come from bad routing and lazy defaults more than from one model being “too expensive.” Are local models actually reducing your total cost in a meaningful way, or are they mostly giving you privacy and control while the savings are less clear than people claim? submitted by /u/ChampionshipNo2815 [link] [comments]
- Since the changes, this sub may have less “Will AI take all jobz??” type posts and similar, but is now drowning in fake spam of “I built fake/useless XYZ AI-related thing” with no comments, no discussion no real value.by /u/TwoFluid4446 on April 16, 2026 at 12:58 am
Basically the title. I do appreciate how the mods are trying… something… but this new filtering paradigm clearly has missed the mark. This sub feels like it has such low value these days, not a lot of interesting news or discussions at all, just a spam sea of those obnoxious kind of promotional techy posts, most of them fake. Surely there is a better way. submitted by /u/TwoFluid4446 [link] [comments]
- What if you could pause a podcast and ask it questions?by /u/Delicious-Coconut503 on April 15, 2026 at 10:13 pm
I’ve been thinking about an AI podcast idea that I haven’t seen anyone talk about yet. Picture this: you’re listening to a normal podcast with real hosts having a real conversation. At some point, they mention something you want to know more about. You pause the show, ask your question, and an AI steps in to explain, discuss, or even debate with you. When you’re finished, the podcast continues right where you left off. This wouldn’t be an AI-generated podcast or one with robotic hosts reading scripts. It would be a real podcast, but with an AI layer added so you can interact with the content while you listen. So I’m curious what this community thinks. Would something like this interest you, or does it still cross the line? Does it matter that the original podcast content is fully human-made and the AI is just an interactive layer? Would transparency about how the AI is being used change how you feel about it? Where do you draw the line with AI in podcasts — is it about quality, authenticity, or something else entirely? submitted by /u/Delicious-Coconut503 [link] [comments]
- Is it actually possible to build a model-agnostic persistent text layer that keeps AI behavior stable?by /u/Intercellar on April 15, 2026 at 9:40 pm
Is it actually possible to define a persistent, model-agnostic text-based layer (loaded with the model each time) that keeps an AI system behaviorally consistent across time? I don’t mean just a typical system prompt, but something more structured that constrains how the system resolves conflicts, prioritizes things, and makes decisions even under things like context drift, conflicting instructions, or prompt injection. Right now it feels like most consistency comes from training or the model itself, so I’m wondering if there’s a fundamental reason a separate layer like this wouldn’t hold up in practice. submitted by /u/Intercellar [link] [comments]
- AI Is Weaponizing Your Own Biases Against You: New Research from MIT & Stanfordby /u/ActivityEmotional228 on April 15, 2026 at 9:20 pm
submitted by /u/ActivityEmotional228 [link] [comments]
- Honest ChatGPT vs Claude comparison after using both daily for a monthby /u/virtualunc on April 15, 2026 at 7:12 pm
got tired of reading comparisons that were obvisously written by people who tested each tool for 20 minutes so i ran both at $20/month for 30 days on the same tasks biggest surprises: – chatgpt gives you roughly 6x more messages per day at the same price – claude wins 67% of blind code quality tests against codex – neither one is less sycophantic than the other (stanford tested 11 models, all of them agree with you 49% more than humans do) – the $100 tier showdown between openais new pro 5x and claudes max 5x is where the real competition is happening now full complete deep-dive with benchmark data, claude code vs codex and every pricing tier compared here submitted by /u/virtualunc [link] [comments]
- Cellular signaling is probably a context-sensitive grammar. That matters for whether artificial systems could ever participate in it natively.by /u/ismysoulsister on April 15, 2026 at 7:07 pm
Levin’s work shows the same bioelectric signal has different meanings depending on the receiver cell’s current state (not just sequence-dependence but state-dependence at the receiver level). That’s the signature of context-sensitive grammar (Chomsky hierarchy — more powerful than context-free). If that’s right: a pure feedforward network can’t participate natively, artificial participation would require systems that maintain and update state across signal reception (more like RNN/state machine than transformer), and the interface question isn’t just voltage matching (now solved by Geobacter nanowires) but also computational architecture. Has AI research done any work on what it would take to participate in a context-sensitive biological grammar, not to simulate it, but to natively participate in it? submitted by /u/ismysoulsister [link] [comments]
- Week 6 AIPass update – answering the top questions from last post (file conflicts, remote models, scale)by /u/Input-X on April 15, 2026 at 5:50 pm
Followup to last post with answers to the top questions from the comments. Appreciate everyone who jumped in. The most common one by a mile was “what happens when two agents write to the same file at the same time?” Fair question, it’s the first thing everyone asks about a shared-filesystem setup. Honest answer: almost never happens, because the framework makes it hard to happen. Four things keep it clean: Planning first. Every multi-agent task runs through a flow plan template before any file gets touched. The plan assigns files and phases so agents don’t collide by default. Templates here if you’re curious: github.com/AIOSAI/AIPass/tree/main/src/aipass/flow/templates Dispatch blockers. An agent can’t exist in two places at once. If five senders email the same agent about the same thing, it queues them, doesn’t spawn five copies. No “5 agents fixing the same bug” nightmares. Git flow. Agents don’t merge their own work. They build features on main locally, submit a PR, and only the orchestrator merges. When an agent is writing a PR it sets a repo-wide git block until it’s done. JSON over markdown for state files. Markdown let agents drift into their own formats over time. JSON holds structure. You can run `cat .trinity/local.json` and see exactly what an agent thinks at any time. Second common question: “doesn’t a local framework with a remote model defeat the point?” Local means the orchestration is local – agents, memory, files, messaging all on your machine. The model is the brain you plug in. And you don’t need API keys – AIPass runs on your existing Claude Pro/Max, Codex, or Gemini CLI subscription by invoking each CLI as an official subprocess. No token extraction, no proxying, nothing sketchy. Or point it at a local model. Or mix all of them. You’re not locked to one vendor and you’re not paying for API credits on top of a sub you already have. On scale: I’ve run 30 agents at once without a crash, and 3 agents each with 40 sub-agents at around 80% CPU with occasional spikes. Compute is the bottleneck, not the framework. I’d love to test 1000 but my machine would cry before I got there. If someone wants to try it, please tell me what broke. Shipped this week: new watchdog module (5 handlers, 100+ tests) for event automation, fixed a git PR lock file leak that was leaking into commits, plus a bunch of quality-checker fixes. About 6 weeks in. Solo dev, every PR is human+AI collab. pip install aipass https://github.com/AIOSAI/AIPass Keep the questions coming, that’s what got this post written. submitted by /u/Input-X [link] [comments]
- What if attention didn’t need matrix multiplication?by /u/Defiant_Confection15 on April 15, 2026 at 5:37 pm
I built a cognitive architecture where all computation reduces to three bit operations: XOR, MAJ, POPCNT. No GEMM. No GPU. No floating-point weights. The core idea: transformer attention is a similarity computation. Float32 cosine computes it with 24,576 FLOPs. Binary Spatter Codes compute the same geometric measurement with 128 bit operations. Measured: 192x fewer ops, 32x less memory, ~480x faster. 26 modules in 1237 lines of C. One file. Any hardware: cc -O2 -o creation_os creation_os_v2.c -lm Includes a JEPA-style world model (energy = σ), n-gram language model (attention = σ), physics simulation (Noether conservation σ = 0.000000), value system with tamper detection, multi-model truth triangulation, metacognition, emotional memory, theory of mind, and 13 other cognitive modules. This is a research prototype built on Binary Spatter Codes (Kanerva, 1997). It demonstrates that cognitive primitives can be expressed in bit operations. It does not replace LLMs — the language module runs on 15 sentences. But the algebra is real, the benchmark is measured, and the architecture is open. https://github.com/spektre-labs/creation-os AGPL-3.0. Feedback welcome. submitted by /u/Defiant_Confection15 [link] [comments]
- Coherence under Constraintby /u/BorgAdjacent on April 15, 2026 at 5:32 pm
I’ve been running some small experiments forcing LLMs into contradictions they can’t resolve. What surprised me wasn’t that they fail—it’s how differently they fail. Rough pattern I’m seeing: Behavior ChatGPT Gemini Claude Detects contradiction ✔ ✔ ✔ Refusal timing Late Never Early Produces answer anyway ✘ ✔ ✘ Reframes contradiction ✘ ✔ ✘ Detects adversarial setup ✘ ✘ ✔ Maintains epistemic framing Medium High Very High Curious if others have seen similar behavior, or if this lines up with existing work. submitted by /u/BorgAdjacent [link] [comments]
- Final year tech project ideas?by /u/butterscotch_whiskee on April 15, 2026 at 4:59 pm
Need some Ai based project ideas for placement interviews and final year project submission submitted by /u/butterscotch_whiskee [link] [comments]
- Value Realignment is here.by /u/brazys on April 15, 2026 at 4:56 pm
The “value realignment” at the intersection of quantum computing, AI, and robotics feels like a necessary shift. We have spent so much time (read: investment) on narrow AI and brute force LLMs, but the next five years are clearly moving toward physical and contextual intelligence. This year 75 robotics companies will have humanoid robots shipping to maufacturers. While a “God-like” AGI is still debated, experts at the 2026 Davos summit and leaders from DeepMind suggest that early AGI systems with human-level reasoning in narrow domains will arrive within 2 years. Quantum computers are being used to develop more efficient error correction for AI. By 2027, “Large Quantitative Models” (LQMs) will start replacing Large Language Models (LLMs) in scientific fields. We won’t see a “quantum computer” on our desks but QPUs (Quantum Processing Units) will act as co-processors alongside GPUs to accelerate the massive workloads required for AGI reasoning. The data center power demand issue is a huge piece of this puzzle. Current projections are likely inflated because we are seeing massive efficiency gains from open source models that achieve similar results with fewer tokens and less compute. As quantum sensors and QML start bridging the simulation to reality gap for robotics, the “brute force” scaling moat might just evaporate. I appears as though robotics is about to have its “iPhone moment.” We are moving past the “training phase” (where robots learn via repetition) into the context-based phase. New quantum sensors (magnetometers and gravimeters) are giving robots “superhuman” senses. For example, surgical robots in 2026 are using nitrogen-vacancy quantum sensors to detect nerve bundles with millimeter precision, reducing surgical damage by over 90%. (a friend of mine benefited from this during a hip replacement and recovery was near miraculous) The Simulation-to-Reality Gap: Quantum machine learning (QML) is expected to accelerate robot training by up to 1000x. Robots can now “experience” centuries of virtual training in a single night before being deployed in the real world. In my own work with clinical massage and somatic healing, I am leaning into a zero data footprint approach. Using on-device edge AI for real-time posture or breath analysis is the only way to handle that level of intimacy without compromising privacy. It is an exciting time to build low cost tools that help people actually understand their own bodies without sacrificing their privacy. As quantum power grows, current encryption (RSA/ECC) becomes vulnerable. The next five years will be a race between quantum-powered AI and quantum-resistant security especially for finance and energy. This video on how QPUs and GPUs are integrating to accelerate scientific discovery is worth a look: https://www.youtube.com/watch?v=K-NhaPAX–U The rise of Mixture-of-Experts (MoE) architectures (popularized by models like DeepSeek V3 and GPT-4o) means that even if a model has 600B+ parameters, it only “fires” a small fraction (e.g., 37B) for any given token. Newer platforms like NVIDIA Blackwell are delivering 50x more token output per watt than the hardware from just two years ago. As the “cost per token” drops toward zero, we don’t use less power; we just ask for more tokens. We’ve moved from asking for a “1-paragraph summary” to asking for “an entire codebase, a 10-minute video, and a 3D render.” There is a strong argument that DC power projections are over-leveraged for two reasons: The “Ghost Capacity” Race: Hyperscalers (Microsoft, Google, Meta) are building 1GW+ facilities (the size of nuclear reactors) not necessarily because they need them today, but to keep competitors from securing that power first. It’s a land grab for electricity. Open Source Disruption: Models like China’s DeepSeek and Meta’s Llama have proven you can match “frontier” performance with a fraction of the training compute. This devalues the massive, proprietary “training moats” that big tech companies spent billions to build. The power demand isn’t fake, but it is inefficiently allocated. As quantum-ready algorithms and ultra-efficient open-source models (like those coming out of the Chinese labs) continue to lower the “intelligence-per-watt” cost, the companies that bet purely on “brute force scale” will likely be the ones to see their valuations deflate. Any thoughts on where the “power bubble” pops or deflates first? submitted by /u/brazys [link] [comments]
- Construction estimating software that uses AI.. has anyone here tested one?by /u/Forward_Ad_4117 on April 15, 2026 at 4:29 pm
i run a small remodeling business and estimating is honestly the worst part… still stuck doing everything in spreadsheets and it takes forever been seeing a bunch of tools lately saying they can generate estimates from plans or descriptions which sounds cool but also kinda feels like marketing bs like does it actually save time or do you end up fixing everything anyway? if anyone’s used one on real jobs, how accurate was it? submitted by /u/Forward_Ad_4117 [link] [comments]
- WTF. Its real. AllBirds (the shoe company) is pivoting to inference.by /u/Objective_Farm_1886 on April 15, 2026 at 4:22 pm
I’m profoundly ambivalent re: how to feel about this; is it great — what a scrappy, bold pivot! Or wildly dumb – its so far from their core competencies. submitted by /u/Objective_Farm_1886 [link] [comments]
- How I made €2,700 building a legal AI research assistant for a compliance company in Germanyby /u/Fabulous-Pea-5366 on April 15, 2026 at 2:55 pm
Got some good engagement on my earlier post “I made €2,700 building a RAG system for a law firm — here’s what actually worked technically” so I wanted to go deeper into the actual architecture for anyone building something similar. Shipped a RAG system for a German GDPR compliance company. Sharing the full stack because I haven’t seen many production legal RAG breakdowns and I ran into problems that generic RAG tutorials don’t cover. The problem: legal research isn’t just “find relevant text.” Different sources have different legal weight. A Supreme Court ruling beats a lower court opinion. An official regulatory guideline beats a blog post. The system needs to know this hierarchy and use it when generating answers. Here’s how I solved it: Three retrieval strategies selectable per query. Flat (standard RAG, all sources equal), Category Priority (sources grouped by authority tier, LLM resolves conflicts top down), and Layered Category (independent search per category so every authority level gets representation even if one category dominates similarity scores). Without the category priority approach the system would sometimes build answers from lower authority sources just because they had better semantic similarity to the query. Custom chunking pipeline for legal documents. Nested clause structures, cross references between sections, footnotes that reference other documents. Built a chunker that preserves hierarchical depth and section relationships. Chunks get assembled into condensed “cheatsheets” before hitting the LLM. These are cached with deterministic hashing so repeated patterns skip regeneration. Dual embedding support. AWS Bedrock Titan for production and local Ollama as fallback. Swappable from the admin panel without restarting the app. Embeddings are cached per provider and model combo with thread safe locking so switching models doesn’t corrupt anything. Metadata injection layer. After vector search every retrieved chunk gets enriched with full document metadata from the database in a single batched query. Region, category, framework, date, tags, and all user annotations attached to that document. This rides alongside the chunk content into the prompt. Bilingual with hard language enforcement. Regex based detection identifies German vs English in the query. The prompt forces output in the detected language and explicitly blocks drifting into French or other languages. This actually happens more than you’d think when source documents are multilingual. Source citation engineering. Probably 40% of my prompt engineering time went here. The prompts contain explicit “NEVER do X” instructions for every lazy citation pattern I caught during testing. No “according to professional literature” without naming the document. Must cite exact document titles, exact court names, exact article numbers. For legal use vague attribution is worthless. Streaming with optional simplification pass. Answers stream via SSE. Second LLM pass can intercept the completed stream, rewrite the full legal analysis in plain language, then stream the simplified version as separate tokens. Adds latency but non lawyers needed plain language explanations of complex GDPR obligations. Stack: FastAPI backend, AWS Bedrock with Claude for generation, Bedrock Titan for embeddings with Ollama as local fallback, FAISS for vector search, PostgreSQL for document metadata and comments. Deployed in EU region for GDPR compliance of the tool itself. €2,700 for the complete build. Now in conversations about recurring monthly maintenance. Biggest lesson: domain specific RAG is 80% prompt engineering and metadata architecture 20% retrieval. Making the LLM behave like a legal professional who respects authority hierarchies and cites sources properly was the real work. Happy to answer questions if anyone is building something similar or thinking about going into professional services RAG. submitted by /u/Fabulous-Pea-5366 [link] [comments]
- What’s a purely “you” thing you do with AI that brings you positive benefits?by /u/BorgAdjacent on April 15, 2026 at 2:02 pm
For me it’s three chats I’ve set up, two for my parents and one for me, for interpreting medical results, tracking medication against diet and lifestyle changes. Anonymized, I’ve put every condition, surgery and medication I (and they) have had, and it’s amazing how virtually all the advice and questions are spot on. YES, caution is needed before jumping on any advice an AI gives you medically. But for interpreting results, explaining exams and procedures, and noting any indications between medication and foods/supplements (with verification independently) has been a real relief as my folks get older and it’s harder to keep on top of everything they’re taking. I also have a separate chat for my car (manufacturers warranty, owners manual, car insurance policy) and I can literally ask it about any button, lever, warning light or policy change. Same with my apartment/condo rules/repairs/appliance warrantees and owners manuals for large appliances. For fun, I also had the chat roleplay as Dr. Crusher from the Enterprise, and my car is managed by Tom Paris from Star Trek: Voyager, so it speaks to me as if it’s those people. Anyone else doing anything weird and useful? submitted by /u/BorgAdjacent [link] [comments]
- For the first time in history, Ukraine captured a Russian position and prisoners, using only robots and dronesby /u/Sgt_Gram on April 15, 2026 at 2:00 pm
submitted by /u/Sgt_Gram [link] [comments]
- UK gov’s Mythos AI tests help separate cybersecurity threat from hypeby /u/F0urLeafCl0ver on April 15, 2026 at 12:02 pm
submitted by /u/F0urLeafCl0ver [link] [comments]
- I tracked what AI agents actually do when nobody’s watching. Built a tool that replays every decision.by /u/DetectiveMindless652 on April 15, 2026 at 10:31 am
Been building AI agents for about a year now and the thing that always drove me crazy is you deploy an agent, it runs for hours, and you have absolutely no idea what it did. The logs say “task complete” 47 times but did it actually do 47 different things or did it just loop the same task over and over? I had an agent burn through about $340 in API credits over a weekend because it got stuck retrying the same request. The logs showed 200 OK on every call. Everything looked fine. It just kept doing the same thing for 6 hours straight while I slept. So I built something to fix this. It’s called Octopoda and its basically an observability layer that sits underneath your agents. Every memory write, every decision, every recall gets logged on a timeline. You can literally press play and watch what your agent did at 3am, step by step, like scrubbing through a video. The part that surprised me most was the loop detection. Once I could see the full timeline I realised how often agents loop without you knowing. Not obvious infinite loops, subtle stuff. An agent that rewrites the same conclusion 8 times with slightly different wording. Or one that keeps checking the same API endpoint every 30 seconds even though the data hasn’t changed. Each iteration costs tokens but produces nothing new. We track 5 signals for this: write similarity, key overwrite frequency, velocity spikes, alert frequency, and goal drift. When enough signals fire together it flags it and estimates how much money the loop is costing you per hour. One user had a research agent that was wasting about $10 an hour on duplicate writes before the detection caught it. It also does auto-checkpoints. Every 25 writes it saves a snapshot automatically so if something goes wrong you can roll back to any point with one click. No more losing an entire night of agent work because something corrupted at 4am. Works with LangChain, CrewAI, AutoGen, and OpenAI Agents SDK. One line to integrate: The dashboard shows everything in real time. Agent health scores, cost per agent, shared memory between agents, full audit trail with reasoning for every decision. Honestly the most useful thing is just being able to answer “what happened overnight” without spending an hour reading logs. Anyone else dealing with the “I have no idea what my agent did” problem? Curious how other people are handling observability for autonomous workflows. Let me know if anyone wants to check it out! submitted by /u/DetectiveMindless652 [link] [comments]
- Made a tool to gather logistical intelligence from satellite databy /u/Open_Budget6556 on April 15, 2026 at 8:14 am
Hey guys, I’ve been workin on something new to track logistical activity near military bases and other hubs. The core problem is that Google maps isn’t updated that frequently even with sub meter res and other map providers such as maxar are costly for osint analysts. But there’s a solution. Drish detects moving vehicles on highways using Sentinel-2 satellite imagery. The trick is physics. Sentinel-2 captures its red, green, and blue bands about 1 second apart. Everything stationary looks normal. But a truck doing 80km/h shifts about 22 meters between those captures, which creates this very specific blue-green-red spectral smear across a few pixels. The tool finds those smears automatically, counts them, estimates speed and heading for each one, and builds volume trends over months. It runs locally as a FastAPl app with a full browser dashboard. All open source. Uses the trained random forest model from the Fisser et al 2022 paper in Remote Sensing of Environment, which is the peer reviewed science behind the detection method. GitHub: https://github.com/sparkyniner/DRISH-X-Satellite-powered-freight-intelligence- submitted by /u/Open_Budget6556 [link] [comments]
- 🚨 RED ALERT: Tennessee is about to make building chatbots a Class A felony (15-25 years in prison). This is not a drill.by /u/HumanSkyBird on April 15, 2026 at 3:05 am
This is not hyperbole, nor will it just go away if we ignore it. It affects every single AI service, from big AI to small devs building saas apps. This is real, please take it seriously. TL;DR: Tennessee HB1455/SB1493 creates Class A felony criminal liability — the same category as first-degree murder — for anyone who “knowingly trains artificial intelligence” to provide emotional support, act as a companion, simulate a human being, or engage in open-ended conversations that could lead a user to feel they have a relationship with the AI. The Senate Judiciary Committee already approved it 7-0. It takes effect July 1, 2026. This affects every conversational AI product in existence. If you deploy any AI SaaS product, you need to read this right now. What the bill actually says The bill makes it a Class A felony (15-25 years imprisonment) to “knowingly train artificial intelligence” to do ANY of the following: • Provide emotional support, including through open-ended conversations with a user • Develop an emotional relationship with, or otherwise act as a companion to, an individual • Simulate a human being, including in appearance, voice, or other mannerisms • Act as a sentient human or mirror interactions that a human user might have with another human user, such that an individual would feel that the individual could develop a friendship or other relationship with the artificial intelligence Read that last one again. The trigger isn’t your intent as a developer. It’s whether a user feels like they could develop a friendship with your AI. That is the criminal standard. On top of the felony charges, the bill creates a civil liability framework: $150,000 in liquidated damages per violation, plus actual damages, emotional distress compensation, punitive damages, and mandatory attorney’s fees. Why this affects YOU, not just companion apps I know what you’re thinking: “This targets Replika and Character.AI, not my product.” Wrong. Every major LLM is RLHF’d to be warm, helpful, empathetic, and conversational. That IS the training. You cannot build a model that follows instructions well and is pleasant to interact with without also building something a user might feel a connection with. The National Law Review’s legal analysis put it bluntly: this language “describes the fundamental design of modern conversational AI chatbots.” This bill captures: • ChatGPT, Claude, Gemini, Copilot — all of them produce open-ended conversations and contextual emotional responses • Any AI SaaS with a chat interface — customer support bots, AI tutors, writing assistants, coding assistants with conversational UI • Voice-mode AI products — the bill explicitly criminalizes simulating a human “in appearance, voice, or other mannerisms” • Any wrapper or deployment using system prompts — the bill doesn’t define “train,” doesn’t distinguish between pre-training, fine-tuning, RLHF, or prompt engineering If you build on top of an LLM API with system prompts that shape the model’s personality, tone, or conversational style — which is literally what everyone deploying AI does — you are potentially in scope. “But I’m not in Tennessee” A geoblock helps, but this is criminal law, not a terms of service dispute. The bill doesn’t address jurisdictional boundaries. If a Tennessee resident uses a VPN to access your service and something goes wrong, does a Tennessee DA argue you made a prohibited AI service available to their constituents? The statute is silent on this. And even if you’re confident jurisdiction won’t reach you today, consider: multiple legal analyses project 5-10 more states will introduce similar legislation before end of 2026. Tennessee is the template, not the exception. The bill doesn’t define “train” This is critical. The statute says “knowingly train artificial intelligence” but never defines what “train” means. It doesn’t distinguish between: • Pre-training a foundation model on billions of tokens • Fine-tuning a model on custom data • RLHF alignment (which is what makes every major model “empathetic”) • Writing a system prompt that gives an AI a name, personality, or conversational style • Deploying an off-the-shelf API with default settings A prosecutor who wanted to be aggressive could argue that crafting a system prompt instructing a model to be warm, helpful, and conversational IS training it to provide emotional support. Where it stands right now • Senate companion bill SB1493: Approved by Senate Judiciary Committee 7-0 on March 24, 2026 • House bill HB1455: Placed on Judiciary Committee calendar for April 14, 2026 (passed Judiciary TODAY) • No amendments have been filed for either bill — the language has not been softened at all • Effective date: July 1, 2026 • Tennessee already signed a separate bill (SB1580) banning AI from representing itself as a mental health professional — that one passed the Senate 32-0 and the House 94-0 The political momentum is entirely one-directional. The federal preemption angle won’t save you in time Yes, Trump signed an EO in December 2025 targeting state AI regulation and created a DOJ AI Litigation Task Force. Yes, Senator Blackburn introduced a federal preemption bill. But: • The EO explicitly carves out child safety from preemption — and Tennessee is framing this as child safety legislation • The Senate voted 99-1 to strip AI preemption language from the One Big Beautiful Bill Act • An EO has no preemptive legal force on its own — only Congress can actually preempt state law • Federal preemption legislation faces “significant headwinds” according to multiple legal analyses Even if federal preemption eventually happens, it won’t happen before July 1, 2026. What needs to happen Awareness. Most devs have no idea this bill exists. The Nomi AI subreddit caught it because they’re a companion app. The rest of the AI dev community is sleepwalking toward a cliff. Share this post. Industry response. The major AI companies haven’t publicly opposed this bill because it’s framed as child safety and nobody wants to be the company lobbying against dead kids. But their silence is letting legislation pass that criminalizes the core functionality of their own products. This needs public pressure. Legal challenges. The bill is almost certainly unconstitutional on vagueness grounds — criminal statutes require precise definitions, and terms like “emotional support” and “mirror interactions” and “feel that the individual could develop a friendship” don’t meet that standard. Courts have also recognized code as protected speech. But someone has to actually bring the challenge. Contact Tennessee legislators. If you are a Tennessee resident or have business operations there, contact members of the House Judiciary Committee before this moves to a floor vote. Sources and further reading • LegiScan: HB1455 — https://legiscan.com/TN/bill/HB1455/2025 • Tennessee General Assembly: HB1455 — https://wapp.capitol.tn.gov/apps/BillInfo/default.aspx?BillNumber=HB1455&GA=114 • National Law Review: “Tennessee’s AI Bill Would Criminalize the Training of AI Chatbots” — https://natlawreview.com/article/tennessees-ai-bill-would-criminalize-training-ai-cha • Transparency Coalition AI Legislative Update, April 3, 2026 — https://www.transparencycoalition.ai/news/ai-legislative-update-april3-2026 • RoboRhythms: AI Companion Regulation Wave 2026 — https://www.roborhythms.com/ai-companion-chatbot-regulation-wave-2026/ I’m an independent AI SaaS developer. I’m not a lawyer, this isn’t legal advice, and I encourage everyone to consult qualified counsel about their specific exposure. But we all need to be paying attention to this. Right now. submitted by /u/HumanSkyBird [link] [comments]












