Application and Cybersecurity

  • Defend the Airport
    by Floris Duvekot on June 4, 2025 at 2:02 pm

    Every day, millions of passengers depend on a vast, complex airport ecosystem to get from Point A to Point B. From airline check-ins and baggage handling to air traffic control and terminal operations, the aviation sector is an intricate web of interconnected third-party providers, technologies, and stakeholders. In this high-stakes environment, a cybersecurity breach is not a single point of failure, it’s a ripple effect waiting to happen.

  • Securing LLMs Against Prompt Injection Attacks – A Technical Primer for AI Security Teams
    by dshetty@securityinnovation.com (Dinesh Shetty) on May 14, 2025 at 5:54 pm

    Introduction Large Language Models (LLMs) have rapidly become integral to applications, but they come with some very interesting security pitfalls. Chief among these is prompt injection, where cleverly crafted inputs make an LLM bypass its instructions or leak secrets. Prompt injection in fact is so wildly popular that, OWASP now ranks prompt injection as the #1 AI security risk for modern LLM applications as shown in their OWASP GenAI top 10. We’ve provided a higher-level overview about Prompt Injection in our other blog, so in this one we’ll focus on the concept with the technical audience in mind. Here we’ll explore how LLMs can be vulnerable at the architectural level and the sophisticated ways attackers exploit them. We’ll also examine effective defenses, from system prompt design to “sandwich” prompting techniques. We’ll also discuss a few tools that can help test and secure LLMs.  

  • LLM Prompt Injection – What’s the Business Risk, and What to Do About It
    by dshetty@securityinnovation.com (Dinesh Shetty) on May 9, 2025 at 1:07 pm

    The rise of generative AI offers incredible opportunities for businesses. Large Language Models can automate customer service, generate insightful analytics, and accelerate content creation. But alongside these benefits comes a new category of security risk that business leaders must understand: Prompt Injection Attacks. In simple terms, a prompt injection is when someone feeds an AI model malicious or deceptive input that causes it to behave in an unintended, and often harmful way. This isn’t just a technical glitch, it’s a serious threat that can lead to brand embarrassment, data leaks, or compliance violations if not addressed. As organizations rush to adopt AI capabilities, ensuring the security of those AI systems is now a board-level concern. In this post we’ll provide a high-level overview of prompt injection risks, why they matter to your business, and how Security Innovation’s GenAI Penetration Testing and related services help mitigate these threats so you can innovate safely.  

  • Quest Accepted: Setting Up a Pentesting Environment for the Meta Quest 2
    by Cosmo Mailhot on April 28, 2025 at 1:27 pm

    With the advent of commercially available virtual reality headsets, such as the Meta Quest, the integration of virtual and augmented reality into our daily lives feels closer than ever before. As these devices become more common, so too will the need to secure and protect the data collected and stored by them. The intention of this blog post is to establish a baseline security testing environment for Meta Quest 2 applications and is split into three sections: Enabling Developer Mode, Establishing an Intercepting Proxy, and Injecting Frida Gadget. The Quest 2 runs on a modified version of the Android Open Source Project (AOSP) in addition to proprietary software developed by Meta, allowing the adoption of many established Android testing methods.  

  • LLM Security by Design: Involving Security at Every Stage of Development
    by Fabian Vilela on April 4, 2025 at 12:46 pm

    As large language models (LLMs) become increasingly prevalent in businesses and applications, the need for robust security measures has never been greater. An LLM, if not properly secured, can pose significant risks in terms of data breaches, model manipulation, and even regulatory compliance issues. This is where engaging an external security company becomes crucial. In this blog, we will explore the key considerations for companies looking to hire a security team to assess and secure their LLM-powered systems, as well as the specific tasks that should be undertaken at different stages of the LLM development lifecycle.

Share Websitecyber
We are an ethical website cyber security team and we perform security assessments to protect our clients.