International Association for Cryptologic Research

  • Adaptively-Secure Proxy Re-Encryption with Tight Security
    on March 27, 2026 at 8:18 am

    ePrint Report: Adaptively-Secure Proxy Re-Encryption with Tight Security Chen Qian, Shuo Chen, Shuai Han (Bi-Directional) Proxy Re-Encryption ($\mathsf{PRE}$) is a public-key encryption scheme that allows a proxy, holding a re-encryption key from $i$ to $j$, to transform a ciphertext intended for $i$ into one intended for $j$. $\mathsf{PRE}$ has numerous applications, including secure data sharing and cloud computing. However, most existing $\mathsf{PRE}$ schemes experience significant security degradation when adversaries are allowed to adaptively corrupt re-encryption or secret keys. Prior to this work, only a few $\mathsf{PRE}$ schemes achieved quasi-polynomial security loss in the adaptive setting, and even those were limited to restricted re-encryption strategies. In this paper, we propose four distinct $\mathsf{PRE}$ schemes with tight security guarantees in the adaptive setting, based on the $\mathsf{MDDH}$ assumption: – $\mathsf{PRE}_0$, $\mathsf{PRE}_1$: Single- and multi-challenge $\mathsf{aHRA}$-secure $\mathsf{PRE}$ schemes with tight security focusing on efficient constructions. – $\mathsf{PRE}_2$, $\mathsf{PRE}_3$: Single- and multi-challenge $\mathsf{aCCA}$-secure $\mathsf{PRE}$ schemes with (almost) tight security focusing on $\mathsf{CCA}$-type security. To achieve tightly $\mathsf{CCA}$-secure $\mathsf{PRE}$ schemes, we introduce a novel concept called tag-based language-malleable $\mathsf{NIZK}$ with special simulation soundness. This primitive provides simulation-sound $\mathsf{NIZK}$ while preserving a restricted form of malleability. We construct both one-time and unbounded versions of this primitive under the $\mathsf{MDDH}$ (Matrix Decisional Diffie-Hellman) assumption.
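    The bidirectional re-encryption-key mechanic described above can be illustrated with a toy BBS98-style ElGamal sketch. This is NOT the paper's construction (the MDDH-based schemes $\mathsf{PRE}_0$–$\mathsf{PRE}_3$ are different, tightly secure schemes); it only shows how a token $rk_{i \to j} = x_j / x_i$ lets a proxy retarget a ciphertext without decrypting. Group parameters are tiny and insecure, for demonstration only.

    ```python
    # Toy BBS98-style bidirectional proxy re-encryption (illustration only;
    # not the MDDH-based schemes of the paper).
    p, q, g = 23, 11, 4            # tiny subgroup of prime order q (insecure demo)

    def keygen(x):                 # secret key x in Z_q, public key g^x
        return x, pow(g, x, p)

    def enc(pk, m, r):             # ciphertext (m*g^r, pk^r)
        return (m * pow(g, r, p)) % p, pow(pk, r, p)

    def rekey(x_i, x_j):           # bidirectional token rk = x_j / x_i mod q
        return (x_j * pow(x_i, -1, q)) % q

    def reenc(rk, ct):             # proxy raises the second component to rk
        c1, c2 = ct
        return c1, pow(c2, rk, p)

    def dec(x, ct):                # recover g^r = c2^(1/x), then m = c1 / g^r
        c1, c2 = ct
        gr = pow(c2, pow(x, -1, q), p)
        return (c1 * pow(gr, -1, p)) % p

    x_i, pk_i = keygen(3)
    x_j, pk_j = keygen(7)
    ct_i = enc(pk_i, m=9, r=5)            # ciphertext intended for party i
    ct_j = reenc(rekey(x_i, x_j), ct_i)   # proxy transforms it for party j
    assert dec(x_j, ct_j) == 9
    ```

    Note that the token here is symmetric (its inverse mod $q$ re-encrypts from $j$ back to $i$), which is exactly what "bi-directional" means in the abstract.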

  • Hadal: Centralized Label DP Training without a Trusted Party
    on March 27, 2026 at 8:12 am

    ePrint Report: Hadal: Centralized Label DP Training without a Trusted Party James Choncholas, Stanislav Peceny, Amit Agarwal, Mariana Raykova, Baiyu Li, Karn Seth We explore distributed training in a setting where features are held by one party and labels are held by another. In this context, we focus on label Differential Privacy (DP), where the labels require privacy protection from the other party who learns the trained model. Previous approaches struggle to train accurate models in high-privacy settings (i.e. when $\epsilon \leq 1$), or typically require a trusted third party. To eliminate this trusted party while preserving model utility, we present PostScale, a novel Homomorphic Encryption (HE)-based protocol suited for high-privacy regimes with ciphertext multiplicative depth of two. Our protocol is suitable for a wide variety of models in the semi-honest setting and avoids leaking the model architecture as well as costly ciphertext operations like bootstrapping and rotations. We also present a multi-party sampling protocol for generating DP noise, and Hadal, a general-purpose dataflow-based framework for encrypted computation implementing our protocols. Hadal repurposes existing tools for use with HE, including comprehensive performance profiling capabilities, dual execution modes (eager and deferred), graph compiler-based optimization, and hyperparameter tuning. Our techniques achieve model utility similar to centralized DP while reducing communication by over 90% (from 1 TB to 8 GB per batch) and training time by 99% (from 54 minutes to 33 seconds) compared to related work that protects both features and labels. These improvements unlock larger models; we train Bert-tiny of Devlin et al. (2019), with 6.5 MB of parameters, in 20 ms per example in a LAN setting.
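    For intuition about the label-DP guarantee the abstract targets, the classic baseline is randomized response on binary labels: flip each label with probability $1/(1+e^\epsilon)$. This is a hedged stand-in, not Hadal's protocol (which samples DP noise inside an HE/MPC computation), but it shows why high-privacy regimes ($\epsilon \leq 1$) are hard: at $\epsilon = 1$ roughly 27% of labels are flipped.

    ```python
    import math, random

    def randomized_response(label, eps, rng=random.random):
        # Classic eps-DP randomized response for a binary label: keep it
        # with probability e^eps / (1 + e^eps), otherwise flip it.
        # (Illustrative baseline only -- not Hadal's HE-based mechanism.)
        keep_prob = math.exp(eps) / (1.0 + math.exp(eps))
        return label if rng() < keep_prob else 1 - label

    def debias_mean(noisy_mean, eps):
        # The noisy label mean satisfies  noisy = (2p-1)*mu + (1-p),
        # with p the keep probability, so the true mean mu is recoverable.
        p = math.exp(eps) / (1.0 + math.exp(eps))
        return (noisy_mean - (1 - p)) / (2 * p - 1)
    ```

    The debiasing step is why utility degrades as $\epsilon$ shrinks: the factor $2p-1$ approaches zero, amplifying estimation noise.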

  • Cryptanalysis of the Lightweight Stream Cipher RRSC
    on March 27, 2026 at 8:12 am

    ePrint Report: Cryptanalysis of the Lightweight Stream Cipher RRSC Shivarama K. N., Susil Kumar Bishoi This paper presents a security evaluation of the RRSC lightweight stream cipher in its 64-bit and 128-bit variants. The analysis examines the key update process, internal component interactions, and diffusion behavior during initialization, supported by an avalanche study. Based on these observations, several cryptanalytic scenarios are explored, including time-memory-data trade-off attacks, full key-recovery attacks in the known-plaintext setting, and partial key-recovery attacks targeting the linear feedback shift register and nonlinear feedback shift register components. It is shown that the effective key space is reduced from \(2^{128}\) to \(2^{96}\) for the 128-bit variant and from \(2^{64}\) to \(2^{48}\) for the 64-bit variant.

  • Confidential Transfers for Multi-Purpose Tokens on the XRP Ledger
    on March 27, 2026 at 8:12 am

    ePrint Report: Confidential Transfers for Multi-Purpose Tokens on the XRP Ledger Murat Cenk, Aanchal Malhotra, Joseph A. Akinyele We introduce Confidential Transfers for Multi-Purpose Tokens (Confidential MPTs) on the XRP Ledger, a cryptographic extension of the XLS-33 token standard that enables confidential balances and hidden transfer amounts while preserving public supply verifiability. The protocol replaces plaintext per-account balances with EC–ElGamal ciphertexts and employs non-interactive zero-knowledge proofs to enforce transfer correctness, balance sufficiency, and the invariant OutstandingAmount ≤ MaxAmount without requiring decryption by validators. Confidentiality is scoped to transaction amounts and account balances; sender and receiver identities remain public, preserving XRPL’s account-based execution model. Our design maintains full compatibility with existing MPT semantics: public and confidential balances coexist, issuance rules remain unchanged, and the issuer’s designated second account is treated identically to other holders. The protocol further supports issuer-controlled operations, including freeze and clawback, without weakening supply soundness. To accommodate regulatory and institutional requirements, Confidential MPTs provide cryptographic auditability through an on-chain selective-disclosure model based on multi-ciphertext balance representations and equality proofs, while remaining compatible with simpler issuer-mediated audit models. We present a complete protocol specification, a security analysis under standard discrete-logarithm assumptions, and an open-source reference implementation (mpt-crypto) that realizes the required cryptographic primitives. Experimental evaluation demonstrates that confidential transfers can be verified within XRPL validator performance constraints, with proof sizes and verification costs suitable for production deployment.
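    The key property behind encrypted balances is that "exponential" ElGamal is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the amounts. A minimal sketch over a toy multiplicative group (the actual protocol uses EC–ElGamal plus NIZKs, neither shown here):

    ```python
    p, q, g = 23, 11, 4   # toy prime-order subgroup (real scheme: EC-ElGamal)

    def enc(pk, amount, r):
        # Exponential ElGamal: the amount lives in the exponent, so
        # ciphertexts combine homomorphically by componentwise product.
        return pow(g, r, p), (pow(g, amount, p) * pow(pk, r, p)) % p

    def add(ct_a, ct_b):   # Enc(a) * Enc(b) = Enc(a + b)
        return (ct_a[0] * ct_b[0]) % p, (ct_a[1] * ct_b[1]) % p

    def dec(sk, ct, max_amount=10):
        c1, c2 = ct
        gm = (c2 * pow(pow(c1, sk, p), -1, p)) % p
        for m in range(max_amount + 1):   # small brute-force DL (toy only)
            if pow(g, m, p) == gm:
                return m
        raise ValueError("amount out of range")

    sk = 3; pk = pow(g, sk, p)
    balance  = enc(pk, 5, r=2)    # current encrypted balance
    incoming = enc(pk, 3, r=7)    # encrypted transfer amount
    assert dec(sk, add(balance, incoming)) == 8
    ```

    This is why validators can update encrypted balances and check supply invariants without ever decrypting; the range and equality proofs mentioned in the abstract are what make the updates sound.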

  • Oblivious SpaceSaving: Heavy-Hitter Detection over Fully Homomorphic Encryption
    on March 27, 2026 at 8:12 am

    ePrint Report: Oblivious SpaceSaving: Heavy-Hitter Detection over Fully Homomorphic Encryption Sohaib .., Divyakant Agrawal, Amr El Abbadi Heavy-hitter detection is a fundamental primitive in stream analytics, with applications in network monitoring, telemetry, and large-scale data systems. In many practical deployments, this computation must be maintained continuously on remote infrastructure that offers higher availability and centralized operational control, even when the underlying streams contain sensitive identifiers or proprietary activity patterns. Existing privacy-preserving approaches either incur substantial statistical noise or rely on multi-server trust assumptions. Fully Homomorphic Encryption (FHE) offers an attractive alternative by enabling exact computation over encrypted data on a single untrusted server, but the high cost of encrypted comparisons has historically made stateful stream processing impractical. We present Oblivious SpaceSaving, a privacy-preserving reformulation of the classical Space-Saving algorithm for fully encrypted execution. Our central idea is the Moving Floor abstraction, which exploits a monotonicity invariant in the summary state to replace repeated magnitude comparisons with equality-based selection against a tracked encrypted floor. We further combine this with parallel victim selection and a hierarchical asynchronous ingestion pipeline, yielding an end-to-end encrypted heavy-hitter architecture that preserves the deterministic accuracy guarantees of the original algorithm. Our design reduces the cost of encrypted updates by up to $2.74\times$ over a naive oblivious baseline and sustains end-to-end encrypted ingestion throughputs of up to 4.30 items/s with sub-second amortized latency. 
These results show that, with the right algorithmic reformulation, classical streaming summaries can be made practically viable under fully encrypted execution, bringing privacy-preserving stream analytics significantly closer to deployment.
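    For reference, the plaintext Space-Saving algorithm being reformulated is short. The `min` step below is the repeated magnitude comparison that is expensive under FHE and that the Moving Floor abstraction replaces with equality-based selection (the oblivious version itself is not shown):

    ```python
    def space_saving(stream, k):
        # Plaintext SpaceSaving (Metwally et al.): keep at most k counters;
        # on a miss with a full summary, evict the minimum-count item and
        # let the newcomer inherit its count + 1.
        counters = {}
        for item in stream:
            if item in counters:
                counters[item] += 1
            elif len(counters) < k:
                counters[item] = 1
            else:
                victim = min(counters, key=counters.get)   # costly under FHE
                counters[item] = counters.pop(victim) + 1
        return counters

    heavy = space_saving("aababcabadaea", k=3)   # {'a': 7, 'b': 3, 'e': 3}
    ```

    The deterministic error guarantee (each counter overestimates by at most the minimum count) is the accuracy property the encrypted reformulation preserves.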

  • CatCrypt: From Rust to Cryptographic Security in Lean
    on March 27, 2026 at 8:12 am

    ePrint Report: CatCrypt: From Rust to Cryptographic Security in Lean Bas Spitters We describe the methodology and scope of CatCrypt, a library for machine-checked cryptographic security proofs in Lean. CatCrypt provides an end-to-end pipeline from Rust reference implementations to security proofs in the computational model in Lean. The translation from Rust to Lean is done using the Hax tool. CatCrypt covers 172 cryptographic protocols and constructions with machine-checked security theorems in the computational model. Of these, 110 have the full Rust-to-Lean pipeline. All bounds have been systematically cross-referenced against their published sources (IETF RFCs, NIST standards, and academic papers). Some proofs were ported from SSProve (Rocq), EasyCrypt, ProVerif, CryptoVerif and Squirrel; most are independent formalisations with no prior machine-checked treatment. CatCrypt also includes a verified Lean implementation of a substantial part of the hax transpiler pipeline. This work is an experiment in what can be done by a researcher working with GenAI. Until recently, the formalization of one protocol required months of expert effort. In contrast, the whole of CatCrypt was developed in a period of two months. Because it was developed with AI, we introduce a new methodology to increase confidence that the specifications are correct. Moreover, we will continue to audit the code in the coming months to gain even more confidence in the specification of the results. We hope this work will facilitate the adoption of formal methods in the development of security-critical software. This is especially urgent due to AI’s increased hacking capabilities, the explosion of AI-generated software and the ongoing post-quantum transition, which requires the development of new cryptographic protocols and their secure implementation.

  • Triangulating Meet-in-the-Middle Attack
    on March 27, 2026 at 8:06 am

    ePrint Report: Triangulating Meet-in-the-Middle Attack Boxin Zhao, Qingliang Hou, Lingyue Qin, Xiaoyang Dong To penetrate more rounds with Meet-in-the-Middle (MitM) attack, the neutral words are usually subject to some linear constraints, e.g., Sasaki and Aoki’s initial structure technique. At CRYPTO 2021, Dong et al. found the neutral words can be nonlinearly constrained. They introduced a table-based method to precompute and store the solution space of the neutral words, which led to a huge memory complexity. In this paper, we find some nonlinearly constrained neutral words can be solved efficiently by Khovratovich et al.’s triangulation algorithm (TA). Furthermore, motivated by the structured Gaussian elimination paradigm developed by LaMacchia et al. and Bender et al., we improve the TA to deal with the case when there are still many unprocessed equations, but no variable exists in only one equation (the original TA will terminate). Then, we introduce the new MitM attack based on our improved TA, called triangulating MitM attack. As applications, the memory complexities of the single-plaintext key-recovery attacks on 4-/5-round AES-128 are significantly reduced from $2^{80}$ to the practical $2^{24}$ or from $2^{96}$ to $2^{40}$. Besides, a series of new one/two-plaintext attacks are proposed for reduced AES-192/-256 and Rijndael-EM, which are the basic primitives of NIST PQC candidate FAEST. A partial key-recovery experiment is conducted on 4-round AES-128 to verify the correctness of our technique. For AES-256-DM, the memory complexity of the 10-round preimage attack is reduced from $2^{56}$ to $2^{8}$, so an experiment was also conducted. Without our technique, the impractical memory requirements ($2^{80}$ or $2^{56}$) of previous attacks in the precomputation phase would prevent any kind of (partial) experimental simulation. In the full version, we extend our techniques to sponge functions.
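    The core loop of the triangulation idea can be sketched abstractly: repeatedly find a variable that occurs in exactly one remaining equation, defer that equation (it can be solved for that variable last), and continue. This skeletal sketch treats equations as mere variable-name sets and is only an illustration of the termination condition the paper's improved TA relaxes:

    ```python
    def triangulate(equations):
        # Skeletal triangulation: repeatedly pick a variable appearing in
        # exactly ONE remaining equation, push (variable, equation) on the
        # solve-later stack, and drop the equation. If no such variable
        # exists, the basic procedure stops -- the case the improved TA
        # is designed to handle (not shown here).
        eqs = [set(e) for e in equations]
        order = []
        progress = True
        while eqs and progress:
            progress = False
            counts = {}
            for e in eqs:
                for v in e:
                    counts[v] = counts.get(v, 0) + 1
            for i, e in enumerate(eqs):
                lone = [v for v in e if counts[v] == 1]
                if lone:
                    order.append((lone[0], e))
                    eqs.pop(i)
                    progress = True
                    break
        return order, eqs   # nonempty eqs => basic TA terminated early

    order, stuck = triangulate([{"x", "y"}, {"y", "z"}, {"z"}])
    ```

    Working the deferred stack back in reverse order yields a solution schedule in which each equation fixes one fresh variable, which is what removes the need to tabulate the whole solution space.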

  • Proving modern code-based dual attacks with second-order techniques
    on March 27, 2026 at 8:06 am

    ePrint Report: Proving modern code-based dual attacks with second-order techniques Charles Meyer-Hilfiger In code-based cryptography, dual attacks for solving the decoding problem have recently been improved. They are now competitive and beat information set decoders for a significant regime. These recent dual attacks, starting from Carrier et al. (Asiacrypt 2022), work by reducing decoding to an LPN problem where the secret and the noise involve parts of the error vector coming from the decoding problem. However, currently, the analysis of all these dual attacks is heuristic. In the original Asiacrypt 2022 work, a simple LPN modeling was used to carry out the analysis but Meyer-Hilfiger and Tillich (TCC 2023) showed that this assumption could not be used. Consequently, they proposed an alternative analysis based on Fourier theory and on heuristically modeling the weight enumerator of a random linear code as a Poisson variable. The analysis of the newest and most efficient dual attack, doubleRLPN, introduced by Carrier et al. (Eurocrypt 2024) also relies on this technique and on this model. Our main contribution is to devise a variant of doubleRLPN that we can fully prove without using any model. We show that our variant has the same performance, up to polynomial factors, as the original doubleRLPN algorithm. The final algorithm and its analysis are also simpler. Our technique involves flipping the coordinates of the noisy codeword and observing the fine changes in the amount of noise in the related LPN problem to reconstruct the entire error. The analysis is based on the second-order behavior of the bias of the noise which was already used in the original analysis. Secondly, the performance of our algorithm, as it was the case for doubleRLPN, heavily depends on having access to a good code along with an efficient decoder. We instantiate this code by choosing a Cartesian product of a constant (instead of sublinear in the original proposal by Carrier et al.) number of random linear codes. We use a decoder based on blockwise error enumeration that was already used by Guo et al. (Asiacrypt 2014). We show that our approach is optimal up to polynomial (instead of superpolynomial) factors.

  • Efficiency Improvement of Deniable FHE: Tighter Deniability Analysis and TFHE-based Construction
    on March 27, 2026 at 8:06 am

    ePrint Report: Efficiency Improvement of Deniable FHE: Tighter Deniability Analysis and TFHE-based Construction Towa Toyooka, Yohei Watanabe, Mitsugu Iwamoto Fully homomorphic encryption (FHE) is a cryptographic scheme that can take ciphertexts as inputs and compute a new ciphertext of a function of the underlying messages without decryption. FHE has been attracting attention along with the growing interest in privacy-preserving technologies. In terms of privacy-preserving technology, deniable encryption is also important. Deniable encryption enables a user, who may be forced to reveal the messages corresponding to the user’s public ciphertexts, to lie about which messages the user encrypted. Agrawal et al. (CRYPTO 2021) introduced deniable FHE (DFHE) that combines FHE with deniable encryption, and proposed a transformation from an FHE scheme that satisfies specific special requirements, called special FHE, to a DFHE scheme. They also showed a construction of a special FHE scheme based on the BGV (Brakerski–Gentry–Vaikuntanathan) scheme. However, in the construction by Agrawal et al., one must store all the extensive randomness used for encryption in order to lie, and a bootstrapping operation, which takes a long time to execute, is a bottleneck in execution speed. In this paper, we show that by providing a tighter upper bound on deniability, we can reduce the size of the stored randomness and the required number of bootstrapping operations in the construction by Agrawal et al. In addition, we show that TFHE (Chillotti et al., J. Cryptol., 2020; Joye, CT-RSA 2024), which is an FHE scheme known for fast bootstrapping, satisfies the requirements of special FHE, and thus can realize a faster DFHE scheme than the BGV-based construction.

  • Registration-Optimized Dynamic Group Time-based One-time Passwords for Offline Mobile Access
    on March 26, 2026 at 1:24 am

    ePrint Report: Registration-Optimized Dynamic Group Time-based One-time Passwords for Offline Mobile Access Jiaqing Guo, Xuelian Cao, Zengpeng Li, Yong Zhou, Zheng Yang, Jianying Zhou Mobile access within public finance and enterprise environments often requires lightweight anonymous authentication, allowing users to prove authorization without disclosing their identities. Group Time-based One-Time Passwords (GTOTP) has recently been proposed as a lightweight primitive meeting this need with post-quantum security. To address dynamic group membership, Cao et al. introduced DGTOne, the first dynamic GTOTP construction. It employs chameleon hashes to precompute a fixed set of Merkle-tree leaves (mount points), into which conventional TOTP verification points (VPs) contributed by group members are adaptively inserted. However, DGTOne partitions mount points by time epochs, so they can expire and become unusable, causing capacity waste due to unpredictable join times. Moreover, its outsourced proof generation requires verifiers to be online each epoch to fetch refreshed credentials from Registration Authority (RA), defeating offline verification needed in mobile access. We address these limitations with two new schemes. First, we propose NWDGT, a no-wastage DGTOTP design that constructs Merkle trees of members’ verification points (VP-trees) on demand, eliminating expired mount points at the cost of added handling latency. To mitigate this latency, we introduce LWDGT, which instantiates multiple small one-time signature (OTS) trees whose leaves (OTS public keys) serve as mount points. New members’ VPs are signed immediately using unused leaves, achieving low wastage. We formally prove that the wastage rate of LWDGT is, with overwhelming probability, lower than that of DGTOne. By modeling the registration process and optimizing OTS-tree size, for deployments with up to 500 members (209 initially, 20 added monthly), LWDGT reduces mount point wastage rate by 10.2% over one year compared to DGTOne.
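    The per-epoch TOTP codes that these schemes commit to as verification points follow the standard RFC 6238 construction: HMAC-SHA1 over the time-step counter, dynamic truncation, then reduction modulo $10^{\text{digits}}$. A minimal sketch (the group/Merkle layer of GTOTP is not shown):

    ```python
    import hmac, struct, hashlib, time

    def totp(secret: bytes, t=None, step=30, digits=6):
        # RFC 6238 TOTP built on RFC 4226 HOTP: HMAC-SHA1 over the
        # big-endian time-step counter, then dynamic truncation.
        counter = int((time.time() if t is None else t) // step)
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)
    ```

    With the RFC 6238 test secret `b"12345678901234567890"` and `t=59`, the 8-digit code is the published vector `94287082`, which makes the sketch easy to check against the standard.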

  • Gryphes: Hybrid Proofs for Modular SNARKs with Applications to zkRollups
    on March 26, 2026 at 1:24 am

    ePrint Report: Gryphes: Hybrid Proofs for Modular SNARKs with Applications to zkRollups Jiajun Xin, Samuel Cheung On Tin, Christodoulos Pappas, Yongjin Huang, Dimitrios Papadopoulos We address the challenge of constructing a proof system capable of handling multiple computations that involve diverse types of tasks, such as scalable zkRollup applications. A central dilemma in this design is the trade-off between generality and efficiency: while arithmetic circuit-based SNARKs offer fast proofs but limited flexibility, zkVMs provide general-purpose programmability at the cost of considerable overhead for circuit translation. We observe that typical workloads for such applications can be naturally divided into two parts: (1) diverse, task and data-dependent application logic, and (2) computationally intensive cryptographic operations, e.g., hashes, that are common and repetitive. To optimize for both efficiency and adaptability, we propose Gryphes, a hybrid framework that composes matrix lookup, a generalization of lookup arguments, together with SNARK solutions tailored for cryptographic operations. At the heart of Gryphes is a novel and efficient linking protocol, enabling seamless, efficient composition of matrix lookup + Plonk with general commit-and-prove SNARKs. By integrating Gryphes with Groth16 for signatures and RSA accumulators for membership proofs, we build a zkRollup prototype that achieves efficient proving, constant-size proofs, and dynamic support for thousands of transaction types. This includes our matrix lookup implementation incorporated with Plonk, as well as practical optimizations, comprehensive benchmarks, and open-sourced code. Our results demonstrate that Gryphes strikes a very good balance between functionality and efficiency, offering highly expressive and practical zkRollup systems.

  • A Note on HCTR++
    on March 25, 2026 at 2:12 pm

    ePrint Report: A Note on HCTR++ Mustafa Khairallah A recent Accordion mode, HCTR++, has been proposed by Öztürk et al. [OKY26, Cryptology ePrint Archive, Paper 2026/383]. I identify a fundamental correctness flaw in the construction. Specifically, I demonstrate that the decryption algorithm (Algorithm 2) does not correctly invert the encryption algorithm (Algorithm 1), rendering the scheme undecryptable as specified. The authors have acknowledged the use of AI to refine the conclusion section of their paper. I have discovered this vulnerability completely independently of any AI tools. However, as an exercise, I have provided the algorithm to both ChatGPT and Claude (free versions) in retrospect, to see if they could identify the flaw, and I report my comments/observations. I wish to emphasize that the authors have made no claims or acknowledgment of using AI tools beyond drafting and refining the introduction and conclusion sections, and I make no such claims either. The purpose of this note is to point out the vulnerability (mistake) in the design, and to look into how free AI models approach finding it. I would also like to point out that the authors have since updated their design; this note refers only to the original version. I have not studied the updated design and make no claims about it. Any comments made in this note are my own and do not reflect the opinions of any affiliations or funding agencies.

  • Performance Analysis of Parameterizable HQC Hardware Architecture
    on March 25, 2026 at 2:12 pm

    ePrint Report: Performance Analysis of Parameterizable HQC Hardware Architecture Nishant Pandey, Sanjay Deshpande, Dixit Dutt Bohra, Debapriya Basu Roy, Dip Sankar Banerjee, Jakub Szefer This work presents a constant-time hardware design for HQC (Hamming Quasi-Cyclic), a code-based key encapsulation mechanism selected for standardization by NIST’s Post-Quantum Cryptography process. While existing hardware implementations of HQC have achieved limited performance due to area constraints, our work demonstrates that high performance can be attained with minimal hardware overhead using a higher data width. We present a fully parameterizable hardware design with flexible data width, configurable for both performance targets and security levels, implementing HQC key generation, encapsulation, and decapsulation in Verilog for FPGA deployment. The three operational modules share a common SHAKE256 hash core to minimize area overhead while maintaining throughput. Our design significantly outperforms existing HQC hardware implementations in terms of latency, while achieving a similar or smaller value of the area-time (AT) product compared to existing implementations. The improved performance results from the optimizations introduced in the sparse polynomial multiplier and fixed weight vector generator modules. We achieve up to 35% improvement in the AT product when compared to the most efficient unified HQC hardware designs in the literature. For our fastest configuration targeting HQC-1 (the L1 security level), key generation completes in 0.020 ms, encapsulation in 0.040 ms, and decapsulation in 0.081 ms when implemented on a Xilinx Artix 7 FPGA, showcasing a 40% improvement in latency when compared against the fastest design, while maintaining a competitive area footprint.

  • Three-Move Blind Signatures in Pairing-Free Groups
    on March 25, 2026 at 2:12 pm

    ePrint Report: Three-Move Blind Signatures in Pairing-Free Groups Yanbo Chen We propose the first blind signature scheme that simultaneously achieves the following properties: – It uses a pairing-free group and random oracles in a black-box manner; – It provably achieves concurrent security based on standard assumptions (DDH) without the algebraic group model (AGM); – It requires only three moves. Moreover, the public key, signature, and communication of our scheme all consist of only a constant number of group/field elements. Prior to our work, black-box, three-move pairing-free schemes were only known in the AGM. A recent line of work proposed and optimized schemes without the AGM, but they all require at least four moves.

  • Efficient Compilers for Verifiable Dynamic Searchable Symmetric Encryption
    on March 25, 2026 at 2:12 pm

    ePrint Report: Efficient Compilers for Verifiable Dynamic Searchable Symmetric Encryption Chaya Ganesh, Sikhar Patranabis, Raja Rakshit Varanasi We construct compilers to generically transform any dynamic Searchable Symmetric Encryption (DSSE) scheme that is secure against a semi-honest server into one that is secure against a malicious server, thus yielding a Verifiable dynamic SSE (VDSSE) scheme. Our compilers achieve optimal overheads while preserving forward and backward privacy, which are the standard and widely accepted security notions for DSSE. We focus on optimizing communication overheads and client storage requirements. Our first compiler $\mathsf{FLASH}$ incurs $O(1)$ communication overhead between the client and the server, which is optimal, while incurring mild storage overhead at the client. Our second compiler $\mathsf{BOLT}$ incurs $O(1)$ storage overhead at the client while incurring mild communication overhead. Towards this, we define a new authenticated data structure called a set commitment and we provide an efficient instantiation of this primitive. We implement prototypes of our compilers and report on their performance over real-world databases. Our experiments validate that our compilers incur concretely low overheads on top of existing semi-honest DSSE schemes, and yield practically efficient VDSSE schemes that scale to very large databases.

  • On the Security of Constraint-Friendly Map-to-Curve Relations
    on March 25, 2026 at 6:36 am

    ePrint Report: On the Security of Constraint-Friendly Map-to-Curve Relations Youssef El Housni, Benedikt Bünz Groth, Malvai, Miller and Zhang (Asiacrypt 2025) introduced constraint-friendly map-to-elliptic-curve-group relations that bypass the inner cryptographic hash when hashing to elliptic curve groups inside constraint systems, achieving substantial reductions in circuit size. Their security proof works in the Elliptic Curve Generic Group Model (EC-GGM). We identify three gaps. First, the security bound is not explicitly analyzed, and the bounds stated for the concrete instantiations are loose. Second, the EC-GGM does not capture the algebraic structure of most deployed curves; we exhibit a concrete signature forgery using the parameters claimed secure. Third, the construction requires a congruence condition on the field that is not satisfied by all deployed curves; we extend it to any field. As a countermeasure we propose a y-increment variant that neutralises the algebraic attack, removes the field restriction, and preserves a comparable constraint count. We implement and benchmark both constructions in the open-source gnark (Go) library; the attack is additionally demonstrated via a self-contained SageMath simulation and confirmed at the circuit level against the authors’ own Noir (Rust) implementation.

  • FROSTLASS: Flexible Ring-Oriented Schnorr-like Thresholdized Linkably Anonymous Signature Scheme
    on March 25, 2026 at 6:30 am

    ePrint Report: FROSTLASS: Flexible Ring-Oriented Schnorr-like Thresholdized Linkably Anonymous Signature Scheme Joshua Babb, Brandon Goodell, Rigo Salazar, Freeman Slaughter, Luke Szramowski FROST is a pragmatic method of thresholdizing Schnorr signatures, permitting a threshold quorum of $t$ signers out of $n$ total individuals to sign for a message. This scheme improved on the state of the art, resulting in an efficient protocol that aborts in the presence of up to $t-1$ malicious users with strong resilience against chosen-message attacks, assuming the hardness of the discrete logarithm problem. In this work, we build upon the foundation introduced in FROST by presenting FROSTLASS, which additionally enjoys novel linkability criteria and anonymity guarantees under the general one-more discrete logarithm problem, utilizing a “Schnorr-shaped hole” technique to prove desirable security results. This scheme is highly practical, tailor-made for use on-chain in the Monero cryptocurrency; indeed, we also showcase a Rust implementation for this protocol, demonstrating its real-world application to improve the security and usability of Monero.

  • Tailored Limb Counts, Faster Arithmetic: Improved TMVP Decompositions for Curve5453 and Curve6071
    on March 25, 2026 at 6:30 am

    ePrint Report: Tailored Limb Counts, Faster Arithmetic: Improved TMVP Decompositions for Curve5453 and Curve6071 Murat Cenk, N. Gamze Orhon Kılıç, Halil Kemal Taşkın, Oğuz Yayla Curve5453 and Curve6071 are Montgomery curves over the primes $2^{545}-3$ and $2^{607}-1$, providing 271- and 302-bit classical security, respectively. Their TMVP-based field multiplication in 10-limb representation costs 77 multiplications. We reduce this to 60 for Curve5453 ($22\%$ fewer) using a 9-limb radix-$2^{61}$ representation, and to 54 for Curve6071 ($30\%$ fewer) using a 12-limb radix-$2^{51}$ representation with hierarchical block-level TMVP. Choosing the limb count to produce $3 \times 3$ Toeplitz blocks aligns the structure with the size-3 TMVP formula, computing each block product in 6 multiplications rather than 9. Portable C implementations benchmarked on ARM64 and x86-64 confirm speedups of up to $16\%$ in field multiplication and $13\%$ in scalar multiplication. On ARM64, Curve5453 reaches $90.6\%$ of OpenSSL’s assembly-optimized NIST P-521 ECDH throughput with 12 additional bits of classical security, and Curve6071 delivers 302-bit classical security at $80.8\%$ of P-521’s throughput.
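    The 6-versus-9 multiplication count for a $3 \times 3$ Toeplitz block can be made concrete with a Winograd-style TMVP formula. The paper's exact decomposition may differ, but the multiplication count is the same; a sketch over plain integers:

    ```python
    def tmvp3(t, v):
        # 3x3 Toeplitz matrix-vector product in 6 multiplications instead
        # of 9. For t = (t0, t1, t2, t3, t4), the matrix rows are
        # [t2 t1 t0; t3 t2 t1; t4 t3 t2].
        t0, t1, t2, t3, t4 = t
        v0, v1, v2 = v
        m1 = t2 * (v0 + v1 + v2)
        m2 = (t1 - t2) * (v1 + v2)
        m3 = (t0 - t1) * v2
        m4 = (t3 - t2) * (v0 + v1)
        m5 = (t4 - t3) * v0
        m6 = (t1 + t3 - 2 * t2) * v1
        y0 = m1 + m2 + m3        # = t2*v0 + t1*v1 + t0*v2
        y1 = m1 + m2 + m4 - m6   # = t3*v0 + t2*v1 + t1*v2
        y2 = m1 + m4 + m5        # = t4*v0 + t3*v1 + t2*v2
        return y0, y1, y2
    ```

    The trade is 3 multiplications saved for extra additions, which is favorable when limbs are multi-word field elements, exactly the setting of the abstract's 9- and 12-limb representations.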

  • SoK: Updatable Public-Key Encryption
    on March 25, 2026 at 6:30 am

    ePrint Report: SoK: Updatable Public-Key Encryption Mark Manulis, Daniel Slamanig, Federico Valbusa Updatable (public-key) encryption is a broad concept covering (public-key) encryption schemes whose keys can evolve over time to support secure key rotation and limit the impact of key compromise. The essential feature is that the encryption keys (and possibly also ciphertexts) can be updated from one epoch to the next via so-called update tokens. This concept is useful in various applications, among them secure outsourced storage, secure messaging or low-latency forward-secret key-exchange protocols. The term, however, is used with varying meanings across the literature. Some works define key-updatable schemes, where only the public and secret keys evolve. Others extend this idea by also allowing ciphertexts to be updated during key evolution. Variants further differ in how evolution is triggered: in some schemes, the receiver performs key updates locally, while in others, the sender initiates the evolution by embedding update information in ciphertexts. Beyond achieving forward secrecy, many formulations also aim for post-compromise security, ensuring that once a compromised key is updated, future ciphertexts regain confidentiality under the new key. In this paper, we systematize this field with a focus on updatable public-key encryption schemes. Our aim is to first provide a taxonomy that sheds light on the currently fragmented terminology. It then compares the formal definitions, syntaxes and security models found in the literature, clarifies their interrelations, and identifies common design patterns underlying current schemes. Beyond mapping the definitional landscape, we provide a comparative analysis of existing instantiations, focusing on their properties and efficiency, and highlighting their main trade-offs. The paper concludes with open challenges outlining directions for advancing the field.
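    The update-token syntax can be illustrated with a toy key-updatable ElGamal sketch: a token $\delta$ shifts both the key pair ($sk' = sk + \delta$, $pk' = pk \cdot g^\delta$) and, optionally, existing ciphertexts to the new epoch. This is a generic sketch of the syntax only, not any particular scheme from the SoK; real UPKE constructions additionally require, e.g., randomized updates and erasure of old state for forward secrecy.

    ```python
    p, q, g = 23, 11, 4   # toy prime-order group (illustration only)

    def enc(pk, m, r):                 # plain ElGamal
        return pow(g, r, p), (m * pow(pk, r, p)) % p

    def dec(sk, ct):
        c1, c2 = ct
        return (c2 * pow(pow(c1, sk, p), -1, p)) % p

    def update_keys(sk, pk, delta):    # key evolution via an update token
        return (sk + delta) % q, (pk * pow(g, delta, p)) % p

    def update_ct(ct, delta):          # move an old ciphertext to the new epoch
        c1, c2 = ct
        return c1, (c2 * pow(c1, delta, p)) % p

    sk, pk = 3, pow(g, 3, p)
    ct = enc(pk, 9, r=5)
    sk2, pk2 = update_keys(sk, pk, delta=6)
    ct2 = update_ct(ct, delta=6)
    assert dec(sk2, ct2) == 9
    ```

    The two branches of the taxonomy show up directly here: a key-updatable scheme only runs `update_keys`, while ciphertext-updatable variants also apply `update_ct` so old ciphertexts stay decryptable under the new key.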

  • Analyzing the WebRTC Ecosystem and Breaking Authentication in DTLS-SRTP
    on March 25, 2026 at 6:30 am

    ePrint Report: Analyzing the WebRTC Ecosystem and Breaking Authentication in DTLS-SRTP Martin Bach, Vukašin Karadžić, Lukas Knittel, Robert Merget, Jean Paul Degabriele DTLS-SRTP was designed to secure real-time media communication and is found in prominent audio and video call platforms, including Zoom, Teams, and Google Meet. Notably, it is part of Web Real-Time Communication (WebRTC), a web standard enabling real-time communication in the browser. To this end, WebRTC uses multiple technologies, including HTTP, TLS, SDP, ICE, STUN, TURN, UDP, TCP, DTLS, (S)RTP, (S)RTCP, and SCTP. This amalgamation of technologies results in an overly complex system that is very challenging to audit systematically and automatically. As a result, the security of deployments of this core modern communication technology remains largely unexplored. In this work, we aim to close this gap by developing an automated MitM testing framework (DTLS-MitM-Scanner (DMS)) to test the DTLS channel of a DTLS-SRTP connection. We use our framework to study the current state of the ecosystem in a case study spanning 24 service providers across their browser and mobile applications. Our analysis puts special emphasis on the authentication mechanism in DTLS-SRTP, where we test for 19 potential vulnerabilities that could lead to authentication bypasses for both the client and server. We find that among the 33 tested media server implementations, 19 contained vulnerabilities allowing an attacker to break authentication at the DTLS layer. For 9 of the affected systems, which serve hundreds of millions of users, we could also demonstrate that they could be exploited by an attacker to retrieve media data, assuming only Man-in-the-Middle capabilities. We highlight the impact of these vulnerabilities by building a Proof-of-Concept exploit to listen to Webex video conference calls.
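    The authentication mechanism the paper attacks is small at its core: the SDP exchanged during signaling carries an `a=fingerprint` attribute (RFC 8122), a hash of the peer's self-signed DTLS certificate, and the certificate presented in the DTLS handshake must match it. A minimal sketch of that check (real stacks parse full SDP and X.509, neither shown):

    ```python
    import hashlib, hmac

    def sdp_fingerprint(cert_der: bytes, algo="sha-256") -> str:
        # Format an RFC 8122-style fingerprint value, e.g.
        # "sha-256 AB:CD:..." over the DER-encoded DTLS certificate.
        digest = hashlib.new(algo.replace("-", ""), cert_der).digest()
        return "%s %s" % (algo, ":".join("%02X" % b for b in digest))

    def check_fingerprint(cert_der: bytes, sdp_attr: str) -> bool:
        # Compare the handshake certificate against the signaled
        # fingerprint in constant time.
        algo, _ = sdp_attr.split(" ", 1)
        return hmac.compare_digest(sdp_fingerprint(cert_der, algo), sdp_attr)
    ```

    Skipping, weakening, or mis-scoping this one comparison is the kind of authentication-bypass behavior the paper's DMS framework probes for across deployments.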
