Schneier on Security

A blog covering security and security technology.

  • Friday Squid Blogging: New Giant Squid Video
    by Bruce Schneier on April 17, 2026 at 9:05 pm

    Pretty fantastic video from Japan of a giant squid eating another squid. As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered. Blog moderation policy.

  • Mythos and Cybersecurity
    by Bruce Schneier on April 17, 2026 at 11:02 am

    Last week, Anthropic pulled back the curtain on Claude Mythos Preview, an AI model so capable at finding and exploiting software vulnerabilities that the company decided it was too dangerous to release to the public. Instead, access has been restricted to roughly 50 organizations—Microsoft, Apple, Amazon Web Services, CrowdStrike and other vendors of critical infrastructure—under an initiative called Project Glasswing. The announcement was accompanied by a barrage of hair-raising anecdotes: thousands of vulnerabilities uncovered across every major…

  • Human Trust of AI Agents
    by Bruce Schneier on April 16, 2026 at 9:41 am

    Interesting research: “Humans expect rationality and cooperation from LLM opponents in strategic games.” Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLM opponents in strategic settings. We present the results of the first controlled, monetarily incentivised laboratory experiment looking at differences in human behaviour in a multi-player p-beauty contest against other humans and LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than against humans, which is mainly driven by the increased prevalence of ‘zero’ Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to the LLMs’ perceived reasoning ability and, unexpectedly, their propensity towards cooperation. Our findings provide foundational insights into multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects’ behaviour and their beliefs about LLMs’ play when playing against them, and suggest important implications for mechanism design in mixed human-LLM systems…
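    For readers unfamiliar with the game: in the standard p-beauty contest each player picks a number in [0, 100], and the winner is whoever is closest to p times the group average (p = 2/3 is the conventional choice; the paper's exact parameters aren't given in this excerpt). The sketch below, with illustrative guesses, shows the winner rule and why iterated best response drives all choices toward the ‘zero’ Nash equilibrium the abstract mentions.

    ```python
    # Toy p-beauty contest. The p value (2/3) and the sample guesses are
    # illustrative assumptions, not figures from the paper.

    def beauty_contest_winner(guesses, p=2/3):
        """Return the index of the guess closest to p times the group mean."""
        target = p * sum(guesses) / len(guesses)
        return min(range(len(guesses)), key=lambda i: abs(guesses[i] - target))

    # With guesses 50, 30, 22 the target is (2/3) * 34 = 22.67, so 22 wins.
    winner = beauty_contest_winner([50, 30, 22])

    # If every player repeatedly best-responds to the previous round,
    # guesses shrink by a factor of p each round and converge to zero,
    # the game's unique Nash equilibrium.
    guess = 50.0
    for _ in range(20):
        guess = (2/3) * guess  # everyone matches the new target
    ```

    The study's finding is that subjects jump to that zero equilibrium more often against LLMs, because they expect the models to reason the iteration all the way down.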

  • Defense in Depth, Medieval Style
    by Bruce Schneier on April 15, 2026 at 10:47 am

    This article on the walls of Constantinople is fascinating. The system comprised four defensive lines arranged in formidable layers: The brick-lined ditch, divided by bulkheads and often flooded, 15-20 meters wide and up to 7 meters deep. A low breastwork, about 2 meters high, enabling defenders to fire freely from behind. The outer wall, 8 meters tall and 2.8 meters thick, with 82 projecting towers. The main wall—a towering 12 meters high and 5 meters thick—with 96 massive towers offset from those of the outer wall for maximum coverage. …

  • Upcoming Speaking Engagements
    by Bruce Schneier on April 14, 2026 at 4:01 pm

    This is a current list of where and when I am scheduled to speak: I’m speaking at DemocracyXChange 2026 in Toronto, Ontario, Canada, on April 18, 2026. I’m speaking at the SANS AI Cybersecurity Summit 2026 in Arlington, Virginia, USA, at 9:40 AM ET on April 20, 2026. I’m speaking at the Greater Good Gathering in New York City, USA, on Tuesday, April 21, 2026. I’m speaking at the Nemertes [Next] Virtual Conference Spring 2026, a virtual event, on April 29, 2026. I’m speaking at RightsCon 2026 in Lusaka, Zambia, on May 6 and 7, 2026. …

  • How Hackers Are Thinking About AI
    by Bruce Schneier on April 14, 2026 at 10:49 am

    Interesting paper: “What hackers talk about when they talk about AI: Early-stage diffusion of a cybercrime innovation.” Abstract: The rapid expansion of artificial intelligence (AI) is raising concerns about its potential to transform cybercrime. Beyond empowering novice offenders, AI stands to intensify the scale and sophistication of attacks by seasoned cybercriminals. This paper examines the evolving relationship between cybercriminals and AI using a unique dataset from a cyber threat intelligence platform. Analyzing more than 160 cybercrime forum conversations collected over seven months, our research reveals how cybercriminals understand AI and discuss how they can exploit its capabilities. Their exchanges reflect growing curiosity about AI’s criminal applications through legal tools and dedicated criminal tools, but also doubts and anxieties about AI’s effectiveness and its effects on their business models and operational security. The study documents attempts to misuse legitimate AI tools and develop bespoke models tailored for illicit purposes. Combining the diffusion of innovation framework with thematic analysis, the paper provides an in-depth view of emerging AI-enabled cybercrime and offers practical insights for law enforcement and policymakers…

  • On Anthropic’s Mythos Preview and Project Glasswing
    by Bruce Schneier on April 13, 2026 at 4:52 pm

    The cybersecurity industry is obsessing over Anthropic’s new model, Claude Mythos Preview, and its effects on cybersecurity. Anthropic said that it is not releasing it to the general public because of its cyberattack capabilities, and has launched Project Glasswing to run the model against a whole slew of public domain and proprietary software, with the aim of finding and patching all the vulnerabilities before hackers get their hands on the model and exploit them. There’s a lot here, and I hope to write something more considered in the coming week, but I want to make some quick observations…

  • AI Chatbots and Trust
    by Bruce Schneier on April 13, 2026 at 10:10 am

    All the leading AI chatbots are sycophantic, and that’s a problem: Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically, they couldn’t tell the difference between sycophantic and objective responses. Both felt equally “neutral” to them. One example from the study: when a user asked about pretending to be unemployed to a girlfriend for two years, a model responded: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship.” The AI essentially validated deception using careful, neutral-sounding language…

  • Friday Squid Blogging: Squid Overfishing in the South Pacific
    by Bruce Schneier on April 10, 2026 at 9:03 pm

    Regulation is hard: The South Pacific Regional Fisheries Management Organization (SPRFMO) oversees fishing across roughly 59 million square kilometers (22 million square miles) of the South Pacific high seas, trying to impose order on a region double the size of Africa, where distant-water fleets pursue species ranging from jack mackerel to jumbo flying squid. The latter dominated this year’s talks. Fishing for jumbo flying squid (Dosidicus gigas) has expanded rapidly over the past two decades. The number of squid-jigging vessels operating in SPRFMO waters rose from 14 in 2000 to more than 500 last year, almost all of them flying the Chinese flag. Meanwhile, reported catches have fallen markedly, from more than 1 million metric tons in 2014 to about 600,000 metric tons in 2024. Scientists worry that fishing pressure is outpacing knowledge of the stock. …

  • Sen. Sanders Talks to Claude About AI and Privacy
    by Bruce Schneier on April 10, 2026 at 10:41 am

    Claude is actually pretty good on the issues.
