International Association for Cryptologic Research
- Benchmarking SLH-DSA: A Comparative Hardware Analysis Against Classical Digital Signatures for Post-Quantum Security (on December 18, 2025 at 4:24 pm)
ePrint Report: Benchmarking SLH-DSA: A Comparative Hardware Analysis Against Classical Digital Signatures for Post-Quantum Security
Jayalaxmi H, H M Brunda, Sumith Subraya Nayak, Sathya M, Anirudh S Hegde
The advent of large-scale quantum computers poses a fundamental threat to widely deployed public-key cryptographic schemes such as RSA and elliptic curve digital signatures. In response, the National Institute of Standards and Technology has standardized several post-quantum cryptographic algorithms, including the Stateless Hash-Based Digital Signature Algorithm (SLH-DSA) specified in FIPS 205. While SLH-DSA offers strong, conservative security guarantees based solely on cryptographic hash functions, its practical adoption depends on a clear understanding of its hardware cost and performance characteristics relative to classical standards. This paper presents a unified hardware benchmarking study of SLH-DSA against RSA, DSA, ECDSA, and EdDSA. All algorithms are implemented at the register-transfer level in Verilog HDL and synthesized on the same Xilinx Artix-7 FPGA platform to ensure a fair comparison. The evaluation focuses on key hardware metrics, including logic utilization, memory usage, DSP consumption, operational latency, maximum clock frequency, and throughput for key generation, signing, and verification. The results demonstrate that SLH-DSA is logic- and memory-intensive, with significantly higher signing latency and larger signature sizes compared to classical schemes. However, its verification performance is highly competitive, and its public key size remains extremely small. In contrast, classical schemes are primarily arithmetic-bound and rely heavily on DSP resources. The findings highlight that SLH-DSA represents a viable post-quantum solution for applications prioritizing long-term security assurance and efficient verification, such as firmware authentication and digital archiving, despite its higher signing cost.
- Post-Quantum Security of the Sum of Even-Mansour (on December 18, 2025 at 4:24 pm)
ePrint Report: Post-Quantum Security of the Sum of Even-Mansour
YanJin Tan, JunTao Gao, XueLian Li
The Sum of Even-Mansour (SoEM) construction was proposed by Chen et al. at Crypto 2019. This construction implements a pseudorandom permutation via the modular addition of two independent Even-Mansour structures and can spawn multiple variants by altering the number of permutations or keys. It has become the design basis for several symmetric schemes, such as the nonce-based encryption scheme CENCPP* and the nonce-based message authentication code scheme nEHtM. This paper proves the quantum security of the SoEM21 construction in the Q1 model: when an attacker has quantum access to the random permutations but only classical access to the keyed construction, SoEM21 guarantees security up to \(n/3\) bits. This exactly matches the complexity \(O(2^{n/3})\) of the quantum key-recovery attack in the Q1 model recently proposed by Li et al., thus establishing a tight bound.
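A schematic reading of the construction, assembled from the abstract alone (notation ours; the exact key and constant placement may differ from the paper):

\[
\mathrm{SoEM21}[P_1, P_2]_k(x) \;=\; \big(P_1(x \oplus k) \oplus k\big) \boxplus \big(P_2(x \oplus k) \oplus k\big),
\]

i.e., two independent public permutations under a single key $k$, with the two Even-Mansour branches combined by modular addition $\boxplus$; the suffix "21" is commonly read as two permutations, one key.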
- Random-Access AEAD for Fast Lightweight Online Encryption (on December 18, 2025 at 4:24 pm)
ePrint Report: Random-Access AEAD for Fast Lightweight Online Encryption
Andrés Fábrega, Julia Len, Thomas Ristenpart, Gregory Rubin
We study the problem of random-access authenticated encryption. In this setting, one wishes to encrypt (resp., decrypt) a large payload in an online manner, i.e., using a limited amount of memory, while allowing plaintext (resp., ciphertext) segments to be processed in arbitrary order. Prior work has studied online AE for in-order (streaming) encryption and decryption, and later work added additional constraints to support random-access decryption. The result is a collection of complicated notions that were not built from the start to account for random access. We thus provide a new, clean-slate treatment of the random-access setting. We introduce random-access authenticated encryption (raAE) schemes, which capture AEAD that provides random-access encryption and decryption. We introduce formal security definitions for raAE schemes that cover confidentiality, integrity, and commitment. We prove relationships with existing notions, showing that our simpler treatment does not sacrifice achievable security. Our implications also yield the first treatment of commitment security for online AEAD, an increasingly important security goal for AEAD. We then exercise our formalization with a practice-motivated case study: FIPS-compliant raAE. We introduce an raAE scheme called FLOE (Fast Lightweight Online Encryption) that is FIPS compliant, compatible with existing AES-GCM APIs that mandate random nonces, and yet can provide secure, random-access, committing encryption of orders of magnitude more data than naive approaches that utilize AES-GCM. FLOE was designed in close collaboration with leading cloud data platform Snowflake, where it will soon be used in production to protect sensitive data.
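As a point of reference for the random-access setting, here is a minimal sketch of the naive segment-wise approach such schemes are measured against: each segment is sealed independently under AES-GCM, with the segment index bound into the nonce and the associated data, so segments can be processed in any order with bounded memory. The names and the nonce layout are our own, and deriving nonces deterministically is in tension with the random-nonce APIs the abstract mentions, which is part of what FLOE addresses; this is the baseline, not FLOE.

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    import os
    import struct

    key = AESGCM.generate_key(bit_length=256)
    aead = AESGCM(key)
    prefix = os.urandom(4)  # per-object nonce prefix; nonce = prefix || index

    def seal(idx: int, segment: bytes) -> bytes:
        nonce = prefix + struct.pack(">Q", idx)       # 4 + 8 = 12 bytes
        return aead.encrypt(nonce, segment, struct.pack(">Q", idx))

    def open_(idx: int, ct: bytes) -> bytes:
        nonce = prefix + struct.pack(">Q", idx)
        return aead.decrypt(nonce, ct, struct.pack(">Q", idx))

    segments = [b"seg0", b"seg1", b"seg2"]
    cts = {i: seal(i, segments[i]) for i in (2, 0, 1)}  # encrypt out of order
    assert open_(1, cts[1]) == b"seg1"                  # random-access decrypt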
- High Exponents May Not Suffice to Patch AIM (On Attacks, Weak Parameters, and Patches for AIM2) (on December 18, 2025 at 4:18 pm)
ePrint Report: High Exponents May Not Suffice to Patch AIM (On Attacks, Weak Parameters, and Patches for AIM2)
Yimeng Sun, Shiyao Chen, Guowei Liu, Meiqin Wang, Chao Niu
The growth of advanced cryptographic applications has driven the development of arithmetization-oriented (AO) ciphers over large finite fields, which are designed to minimize multiplicative complexity. However, this design advantage of AO ciphers can also serve as an attack vector. For instance, the \textsf{AIM} one-way function in the post-quantum signature \AIMer proposed at CCS 2023 was broken by several works soon after its publication. The designers promptly developed secure patches and proposed an enhanced version, \textsf{AIM2}, which was incorporated into the latest version of \AIMer, selected as one of the winners of the Korean PQC Competition in early 2025. In this paper, we focus on the algebraic security of \textsf{AIM2} over $\mathbb{F}_{2^n}$. First, we introduce a resultant-minimized model that reduces eliminations by using a non-$k$-based substitution strategy and linearized-polynomial decomposition, achieving an attack time complexity of $2^{188.76}$ ($2^{195.05}$) primitive calls of \textsf{AIM2-III} when $\omega=2$ ($\omega=2.373$), indicating that the designers were over-optimistic in evaluating its security margin. Second, we propose a subfield reduction technique for the case where exponents approach subfield extension sizes and equation degrees collapse sharply, e.g., the exponent $e_2=141\mapsto 13$ in \textsf{AIM2-V} when considering the subfield $\mathbb{F}_{2^{128}}$. This lowers the algebraic attack complexity to $2^{295.97}$ primitive calls at $\omega=2$, improving upon the designers' estimated Gröbner basis attack bound by a factor of about $2^{100}$. Besides, based on our attack methods, we identify some weak parameter choices, which could provide concrete design guidance for the \textsf{AIM2} construction, especially for the exponent of its Mersenne S-box. Finally, to address the potential vulnerabilities, we propose \textsf{AIM2-patch}, a simple secure patch on \textsf{AIM2} that prevents key elimination, neutralizes linearized-polynomial decomposition, and raises algebraic attack complexity, while incurring negligible overheads in the \AIMer scheme.
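A short worked calculation behind the quoted exponent collapse: AIM's Mersenne S-boxes have the form $x \mapsto x^{2^e - 1}$, and every $x \in \mathbb{F}_{2^{128}}$ satisfies $x^{2^{128}} = x$, so for $e_2 = 141$,

\[
x^{2^{141}-1} \;=\; \frac{x^{2^{141}}}{x} \;=\; \frac{\bigl(x^{2^{128}}\bigr)^{2^{13}}}{x} \;=\; \frac{x^{2^{13}}}{x} \;=\; x^{2^{13}-1},
\]

i.e., restricted to the subfield, the degree-$(2^{141}-1)$ map collapses to the degree-$(2^{13}-1)$ map, matching $e_2 = 141 \mapsto 13$.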
- HHGS: Forward-secure Dynamic Group Signatures from Symmetric Primitives (on December 18, 2025 at 3:42 pm)
ePrint Report: HHGS: Forward-secure Dynamic Group Signatures from Symmetric Primitives
Xuelian Cao, Zheng Yang, Daniel Reijsbergen, Jianting Ning, Junming Ke, Zhiqiang Ma, Jianying Zhou
Group signatures allow a group member to sign messages on behalf of the group while preserving the signer's anonymity, making them invaluable for privacy-sensitive applications. As quantum computing advances, post-quantum security in group signatures becomes essential. Symmetric primitives (SP) offer a promising pathway due to their simplicity, efficiency, and well-understood security foundations. In this paper, we introduce the first \textit{forward-secure dynamic group signature} (FSDGS) framework relying solely on SP. We begin with \textit{hierarchical hypertree group signatures} (HHGS), a basic scheme that securely organizes the keys of one-time signatures (OTS) in a hypertree, using puncturable pseudorandom functions to enable on-demand key generation, forward security, and dynamic enrollment, and that resists attacks exploiting registration patterns by obfuscating the assignment and usage of keys. We then extend this foundation to HHGS^+, which orchestrates multiple HHGS instances in a generic way, significantly extending the total signing capacity to $O(2^{60})$; this outperforms HHGS's closest competitors while keeping signatures below 8 kilobytes. We prove the security of both schemes in the standard model. Our results outline a practical SP-driven pathway toward post-quantum-secure group signatures suitable for resource-constrained client devices.
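For intuition, a minimal sketch of GGM-style on-demand key derivation, the generic idea behind organizing one-time keys in a (hyper)tree; this is our toy illustration, not the HHGS construction, which additionally uses puncturable PRFs and a multi-layer hypertree.

    import hashlib

    def prg(seed: bytes, bit: int) -> bytes:
        # one half of a length-doubling PRG, via domain separation (toy)
        return hashlib.sha256(bytes([bit]) + seed).digest()

    def leaf_seed(root: bytes, index: int, depth: int) -> bytes:
        # walk the binary tree from the root; any OTS leaf seed is
        # recomputable on demand, so the key tree is never materialized
        k = root
        for level in reversed(range(depth)):
            k = prg(k, (index >> level) & 1)
        return k

    root = bytes(32)
    assert leaf_seed(root, 5, 20) == leaf_seed(root, 5, 20)  # deterministic
    assert leaf_seed(root, 5, 20) != leaf_seed(root, 6, 20)  # distinct leaves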
- ARION: Attention-Optimized Transformer Inference on Encrypted Data (on December 18, 2025 at 3:42 pm)
ePrint Report: ARION: Attention-Optimized Transformer Inference on Encrypted Data
Linhan Yang, Jingwei Chen, Wangchen Dai, Shuai Wang, Wenyuan Wu, Yong Feng
Privacy-preserving Transformer inference (PPTI) is essential for deploying large language models (LLMs) such as BERT and LLaMA in sensitive domains. In these models, the attention mechanism is both the main source of expressiveness and the dominant performance bottleneck under fully homomorphic encryption (FHE), due to large ciphertext matrix multiplications and the softmax nonlinearity. This paper presents Arion, a non-interactive FHE-based PPTI protocol that specifically optimizes the computation of encrypted attention. First, for the three consecutive ciphertext matrix multiplications in multi-head attention, we introduce the double Baby-Step Giant-Step algorithm, which significantly reduces the number of ciphertext rotations. On BERT-Base, Arion achieves an 82.5% reduction in rotations over the state-of-the-art PPTI protocol MOAI (2025), corresponding to a 5.7x reduction in rotation cost. Second, we propose a linear–nonlinear fusion technique tailored to the softmax evaluation in attention. By decomposing softmax into shift-by-maximum, exponentiation, and reciprocal sub-steps and fusing them with the surrounding encrypted matrix operations, Arion enables efficient attention evaluation while remaining compatible with diverse ciphertext packing formats. We implement Arion using Lattigo and first evaluate attention kernels on popular LLMs including BERT-Tiny, BERT-Base, and LLaMA, confirming the practicality and scalability of the proposed optimizations for encrypted attention computation. For end-to-end applications, on classification tasks for several benchmark datasets, Arion attains accuracy comparable to plaintext inference and yields up to 2.5x end-to-end speedups over MOAI for BERT-Base.
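For background, a plaintext simulation of the standard (single) baby-step giant-step diagonal method for a matrix-vector product, with np.roll standing in for ciphertext rotations; Arion's double BSGS for the three chained multiplications in attention builds on this idea, and the names and structure below are ours.

    import numpy as np

    def rot(v, k):
        return np.roll(v, -k)  # rot(v, k)[j] = v[(j + k) % d]

    def bsgs_matvec(M, v):
        d = len(v)
        g = int(np.ceil(np.sqrt(d)))                 # baby steps per block
        diags = [np.array([M[j][(j + i) % d] for j in range(d)])
                 for i in range(d)]
        baby = [rot(v, j0) for j0 in range(g)]       # ~sqrt(d) rotations of v
        acc = np.zeros(d)
        for j1 in range((d + g - 1) // g):
            inner = np.zeros(d)
            for j0 in range(g):
                i = g * j1 + j0
                if i >= d:
                    break
                # pre-rotating the plaintext diagonal lets a single outer
                # rotation per block realign the whole partial sum
                inner += rot(diags[i], -g * j1) * baby[j0]
            acc += rot(inner, g * j1)                # one giant rotation/block
        return acc

    M, v = np.random.rand(8, 8), np.random.rand(8)
    assert np.allclose(bsgs_matvec(M, v), M @ v)

Only the rotations are expensive under FHE, and the baby/giant split reduces their count from roughly d to roughly 2*sqrt(d).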
- Breaking UOV Encryption: Key Recovery Attack On Olivier (on December 18, 2025 at 3:36 pm)
ePrint Report: Breaking UOV Encryption: Key Recovery Attack On Olivier
Emanuele Cornaggia
The Oil and Vinegar (OV) trapdoor is widely used in signature schemes such as UOV and MAYO. Recently, Esposito et al. proposed OliVier, an encryption scheme based on this trapdoor. However, the OV trapdoor was originally designed for signatures, and adapting it to encryption introduces inherent challenges. We identify two such challenges and analyze how OliVier addresses the first, while showing that the unresolved second challenge enables a practical key-recovery attack. We conclude that any scheme using the OV trapdoor for encryption must also solve this second problem, for which no efficient solution is currently known.
- How to Compare Bandwidth Constrained Two-Party Secure Messaging Protocols: A Quest for A More Efficient and Secure Post-Quantum Protocol (on December 18, 2025 at 3:36 pm)
ePrint Report: How to Compare Bandwidth Constrained Two-Party Secure Messaging Protocols: A Quest for A More Efficient and Secure Post-Quantum Protocol
Benedikt Auerbach, Yevgeniy Dodis, Daniel Jost, Shuichi Katsumata, Rolfe Schmidt
Transitioning existing classical two-party secure messaging protocols to post-quantum protocols has been an active movement in practice in recent years; examples include Apple's PQ3 protocol and the recent Triple Ratchet protocol being investigated by the Signal team together with academics (Dodis et al., Eurocrypt'25). However, due to the large communication overhead of post-quantum primitives, numerous design choices non-existent in the classical setting are being explored, rendering comparison of secure messaging protocols difficult, if not impossible. In this work, we thus propose a new pragmatic metric that measures how secure a messaging protocol is given a particular communication pattern, enabling a concrete methodology for comparing secure messaging protocols. We find that there can be no "optimal" protocol, as different protocols are often incomparable with respect to worst-case (adversarial) messaging behaviors, especially when faced with real-world bandwidth constraints. We develop a comprehensive framework to experimentally compare various messaging protocols under given bandwidth limits and messaging behaviors. Finally, we apply our framework to compare several new and old messaging protocols. Independently, we also uncover untapped optimizations, which we call opportunistic sending, leading to better post-quantum messaging protocols. To capture these optimizations, we further propose sparse continuous key agreement as a fundamental building block for secure messaging protocols, which could be of independent interest.
- On the Pitfalls of Modeling Individual Knowledge (on December 18, 2025 at 3:36 pm)
ePrint Report: On the Pitfalls of Modeling Individual Knowledge
Wojciech Ciszewski, Stefan Dziembowski, Tomasz Lizurej, Marcin Mielniczuk
The concept of knowledge has been central in cryptography, especially within cryptographic proof systems. Traditionally, research in this area considers an abstract \emph{prover} defending a claim that it knows a message $M$. Recently, a stronger concept—termed "individual" (Dziembowski et al., CRYPTO'23) or "complete" (Kelkar et al., CCS'24) knowledge—has emerged. This notion ensures the prover physically stores $M$ on a machine that it controls. As we argue in this paper, the concept also appears in earlier work on "non-outsourceable puzzles" (Miller et al., CCS'15), which implicitly assumes that quickly performing a complex computation on a string $M$ implies storing it on a single machine. In this line of work, the authors typically rely on algorithms whose computation requires a massive number of queries to a hash function $H$. This paper highlights a subtle issue in the modeling used in some of these papers, namely the assumption that $H$ can be modeled as an atomic random oracle on long messages. Unfortunately, this does not correspond well to how hash functions are constructed in practice. For example, real-world hash functions (e.g., Merkle-Damgård or sponge-based constructions) allow partial evaluation on long inputs, violating this assumption. Another example is the hashing used in Bitcoin mining, which permits similar precomputation. This undermines some protocols relying on individual knowledge. We demonstrate practical attacks against Miller et al.'s and Kelkar et al.'s schemes based on this observation, and discuss secure alternatives. Our alternative constructions, which are modifications of the original ones, avoid reliance on the random-oracle behavior of hash functions on long messages. In the full version of this paper, we will provide their formal security analysis in the individual cryptography model of Dziembowski et al. (CRYPTO'23).
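The partial-evaluation property at the heart of this observation can be seen directly with a standard library, since SHA-256 is a Merkle-Damgård hash: the midstate after absorbing a prefix determines the rest of the computation, so no single machine ever needs to hold all of $M$. A minimal illustration (not the paper's attack):

    import hashlib

    prefix = b"A" * 1_000_000   # held by machine 1
    suffix = b"B" * 1_000_000   # held by machine 2

    mid = hashlib.sha256()
    mid.update(prefix)          # machine 1 absorbs its half ...
    cont = mid.copy()           # ... and ships only the small midstate
    cont.update(suffix)         # machine 2 finishes without the prefix

    assert cont.digest() == hashlib.sha256(prefix + suffix).digest()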
- Accelerating FrodoKEM in Hardware (on December 18, 2025 at 3:36 pm)
ePrint Report: Accelerating FrodoKEM in Hardware
Sanjay Deshpande, Patrick Longa, Jakub Szefer
FrodoKEM, a conservative post-quantum key encapsulation mechanism based on the plain Learning with Errors (LWE) problem, has been recommended for use by several government cybersecurity agencies and is currently undergoing standardization by the International Organization for Standardization (ISO). Despite its robust security guarantees, FrodoKEM's performance remains one of the main challenges to its widespread adoption. This work addresses this concern by presenting a fully standard-compliant, high-performance hardware implementation of FrodoKEM targeting both FPGA and ASIC platforms. The design introduces a scalable parallelization architecture that supports run-time configurability across all twelve parameter sets, covering three security levels (L1, L3, L5), two PRNG variants (SHAKE-based and AES-based), and both standard and ephemeral modes, alongside synthesis-time tunability through a configurable performance parameter to balance throughput and resource utilization. For security level L1 on Xilinx Ultrascale+ FPGA, the implementation achieves 3,164, 2,846, and 2,614 operations per second for key generation, encapsulation, and decapsulation, respectively, representing the fastest standard-compliant performance reported to date while consuming only 27.8K LUTs, 64 DSPs, and 8.1K flip-flops. These results significantly outperform all prior specification-compliant implementations and even surpass non-compliant designs that sacrifice specification adherence for speed. Furthermore, we present the first ASIC evaluation of FrodoKEM using the NANGATE45 45 nm technology library, achieving 7,194, 6,471, and 5,943 operations per second for key generation, encapsulation, and decapsulation, respectively, with logic area of 0.235 mm$^2$. The ASIC implementation exhibits favorable sub-linear area scaling and competitive energy efficiency across different performance parameter configurations, establishing a baseline for future comparative studies. The results validate FrodoKEM's practical viability for deployment in high-throughput, resource-constrained, and power-sensitive cryptographic applications, demonstrating that conservative post-quantum security can be achieved without compromising performance.
- Completing Policy-based Anonymous Tokens: Private Bits, Public Metadata and more… (on December 18, 2025 at 3:30 pm)
ePrint Report: Completing Policy-based Anonymous Tokens: Private Bits, Public Metadata and more…
David Kretzler, Yong Li, Codrin Ogreanu
Anonymous tokens are cryptographic protocols for restricting the access to online resources to eligible users. After proving eligibility to the token issuer, the client receives a set of tokens. Later, it can prove eligibility to a resource provider by sending one of the tokens received from the issuer. The anonymous token protocol ensures that the resource provider cannot link received tokens to their issuance, even if it colludes with the token issuer. Recently, Faut et al. (EuroS\&P'25) introduced the concept of policy-based anonymous tokens, in which an issuer provides a single pre-token to a client, who can locally derive multiple tokens according to a publicly announced policy. The major advantage of policy-based tokens is that the communication complexity of the issuance phase is constant. While the work of Faut et al. constitutes a promising step in a new direction, their protocol still lacks several desirable properties known from standard anonymous tokens — most notably, the ability to bind a pre-token and all tokens derived from it to a private metadata bit or a publicly known metadata string. In this work, we present a new framework for policy-based anonymous token schemes in the random oracle model. Our framework includes two concretely practical constructions — one based on equivalence class signatures and one on algebraic MACs — as well as a communication-optimized, though less practical, construction based on zkSNARKs. All three constructions can be configured to support private metadata bits, public metadata, or both. We formalize the notion of policy-based anonymous tokens with a private metadata bit and public metadata, and we prove security of the two primary constructions: the equivalence-class-signature-based scheme and the algebraic-MAC-based scheme. Finally, we provide an experimental evaluation and comparison of all our constructions alongside the most relevant related work. Our results demonstrate that our two primary constructions achieve significant efficiency improvements over the scheme of Faut et al., both in terms of computation and communication.
- Leakage-Resilient Multi-Party Computation: Protecting the Evaluator in Circuits Garbling (on December 18, 2025 at 3:30 pm)
ePrint Report: Leakage-Resilient Multi-Party Computation: Protecting the Evaluator in Circuits Garbling
Francesco Berti, Itamar Levi
Garbling schemes allow two parties to compute a joint function on private inputs without revealing them. Yet, a semi-honest garbler might exploit hardware or software side-channel leakage from the evaluator, an alarming threat with no concrete solution to date. Using the homomorphic properties of ElGamal encryption, we can prevent such leakage-based attacks.
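The homomorphic property the abstract invokes is the classic multiplicative one: componentwise multiplication of two ElGamal ciphertexts decrypts to the product of the plaintexts. A toy sketch over a small group (illustrative parameters only; this shows the property, not the paper's protocol):

    import random

    p, q = 1019, 509              # safe prime p = 2q + 1 (toy sizes)
    g = pow(2, 2, p)              # generator of the order-q subgroup
    x = random.randrange(1, q)    # secret key
    h = pow(g, x, p)              # public key

    def enc(m):
        r = random.randrange(1, q)
        return pow(g, r, p), m * pow(h, r, p) % p

    def dec(c1, c2):
        return c2 * pow(c1, q - x, p) % p   # c2 / c1^x, using c1^q = 1

    def mul(ca, cb):                        # homomorphic multiplication
        return ca[0] * cb[0] % p, ca[1] * cb[1] % p

    m1, m2 = 4, 9
    assert dec(*mul(enc(m1), enc(m2))) == m1 * m2 % p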
- PRGUE Schemes: Efficient Updatable Encryption With Robust Security From Symmetric Primitives (on December 18, 2025 at 3:30 pm)
ePrint Report: PRGUE Schemes: Efficient Updatable Encryption With Robust Security From Symmetric Primitives
Elena Andreeva, Andreas Weninger
Securing sensitive data for long-term storage in the cloud is a challenging problem. Updatable encryption (UE) enables changing the encryption key of encrypted data in the cloud while the plaintext and all versions of the key remain secret from the cloud storage provider, making it an efficient alternative for companies that seek to outsource their data storage. The most secure UE schemes to date follow robust security models, such as the one by Boyd et al. from CRYPTO 2020, and rely exclusively on asymmetric cryptography, thus incurring a substantial performance cost. In contrast, the Nested UE construction of Boneh et al. from ASIACRYPT 2020 achieves much better efficiency with symmetric cryptography, but it provides weaker security guarantees. Boyd et al. further suggest that attaining robust UE security inherently requires the use of asymmetric cryptography. In this work, we show for the first time that symmetric UE schemes are not inherently limited in their security and can achieve guarantees on par with, and even beyond, Boyd's UE model. To this end, we extend Boyd's framework to encompass the class of ciphertext-dependent UE schemes and introduce indistinguishability-from-random (IND\$) as a stronger refinement of indistinguishability. While our IND\$ notion primarily streamlines the proofs of advanced security properties within the model, it yields practical privacy advantages: ciphertexts do not exhibit a recognizable structure that could otherwise distinguish them from arbitrary data. We then introduce two robustly secure symmetric UE constructions, tailored to different target security levels. Our schemes are built on a novel design paradigm that combines symmetric authenticated encryption with ciphertext re-randomization, leveraging for the first time the use of pseudorandom number generators in a one-time-pad style. This approach enables both robust security and high efficiency, including in AES-based implementations. Our first scheme, PUE-List, delivers encryption up to 600× faster than prior asymmetric schemes of similar robustness, while matching Boneh et al.'s efficiency and achieving the stronger security level of Boyd et al. Our second scheme, PUE-One, further boosts performance with constant-time decryption that is 24× faster than all previously known UE schemes, overcoming the main bottleneck in Boneh's design; it trades off some security, yet still significantly surpasses the guarantees of Boneh's Nested scheme.
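A deliberately naive sketch of the one-time-pad-style re-randomization idea (our toy, not the PUE constructions, which keep update tokens short and add authenticated encryption): the server re-keys a ciphertext by XORing in a pad difference, without ever seeing the plaintext.

    import hashlib
    import secrets

    def prg(seed: bytes, n: int) -> bytes:
        return hashlib.shake_128(seed).digest(n)   # toy PRG

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    msg = b"long-lived cloud record"
    k0 = secrets.token_bytes(16)
    ct0 = xor(msg, prg(k0, len(msg)))              # epoch-0 ciphertext

    k1 = secrets.token_bytes(16)
    token = xor(prg(k0, len(msg)), prg(k1, len(msg)))  # pad difference
    ct1 = xor(ct0, token)                          # server-side update

    assert xor(ct1, prg(k1, len(msg))) == msg      # decrypts under k1 only

In this toy form the update token is as long as the data; the point is only that purely symmetric re-randomization is possible at all.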
- Certified-Everlasting Quantum NIZK Proofs (on December 18, 2025 at 3:24 pm)
ePrint Report: Certified-Everlasting Quantum NIZK Proofs
Nikhil Pappu
We study non-interactive zero-knowledge proofs (NIZKs) for NP satisfying: 1) statistical soundness, 2) computational zero-knowledge, and 3) certified-everlasting zero-knowledge (CE-ZK). The CE-ZK property allows a verifier of a quantum proof to revoke the proof in a way that can be checked (certified) by the prover. Conditioned on successful certification, the verifier's state can be efficiently simulated with only the statement, in a statistically indistinguishable way. Our contributions regarding these certified-everlasting NIZKs (CE-NIZKs) are as follows:
– We identify a barrier to obtaining CE-NIZKs in the CRS model via generalizations of known interactive proofs that satisfy CE-ZK.
– We circumvent this by constructing CE-NIZK from black-box use of NIZK for NP satisfying certain properties, along with OWFs. As a result, we obtain CE-NIZKs for NP in the CRS model, based on polynomial hardness of the learning with errors (LWE) assumption.
– In addition, we observe that the aforementioned barrier does not apply to the shared EPR model. Consequently, we present a CE-NIZK for NP in this model based on any statistical binding hidden-bits generator, which can be based on LWE. The only quantum computation in this protocol involves single-qubit measurements of the shared EPR pairs.
- HQC Beyond the Standard: Ciphertext Compression and Refined DFR Analysis (on December 18, 2025 at 3:24 pm)
ePrint Report: HQC Beyond the Standard: Ciphertext Compression and Refined DFR Analysis
Sebastian Bitzer, Jean-Christophe Deneuville, Emma Munisamy, Bharath Purtipli, Stefan Ritterhoff, Antonia Wachter-Zeh
Hamming Quasi-Cyclic (HQC), recently selected by NIST for standardization, does not employ ciphertext compression, unlike its lattice-based counterpart Kyber. In lattice-based encryption, ciphertext compression is a standard post-processing step, typically implemented through coefficient-wise rounding. In contrast, analogous methods have not yet been explored in code-based cryptography. We address this gap by developing techniques to reduce ciphertext sizes in schemes defined over the Hamming metric, with a particular focus on HQC. To support this approach, the decryption failure rate (DFR) analysis is generalized. Specifically, we revisit the modeling of the error that must be correctable except with probability $2^{-\lambda}$ to achieve $\lambda$ bits of security, an analysis that was previously tractable only under an independence assumption. We propose a more accurate model of the error distribution, which takes dependencies between the coefficients into account. Confirmed by extensive simulations, the proposed model sharpens the DFR analysis and, hence, our understanding of the security of HQC. Building on this generalized framework, we present a ciphertext compression mechanism that enables a precise DFR analysis and is therefore transparent with respect to security. This is achieved by carefully designing a quantization code with a direct-product structure, aligned with HQC's error-correcting code. For the parameters proposed in the round 4 submission, our techniques reduce HQC ciphertext sizes by up to 4.7%; a proof-of-concept implementation confirms that this improvement comes without noticeable loss in efficiency. Reductions of up to 10% are achievable through a trade-off with public-key size.
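For reference, the coefficient-wise rounding used in lattice schemes such as Kyber, which the abstract cites as the standard post-processing step (the paper's Hamming-metric quantizer is a different, product-structured code):

\[
\mathrm{Compress}_q(x, d) \;=\; \Bigl\lfloor \tfrac{2^d}{q}\, x \Bigr\rceil \bmod 2^d,
\qquad
\mathrm{Decompress}_q(y, d) \;=\; \Bigl\lfloor \tfrac{q}{2^d}\, y \Bigr\rceil,
\]

which introduces a per-coefficient rounding error of magnitude at most roughly $q/2^{d+1}$.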
- Tight Generic PRF Security of HMAC and NMAC (on December 18, 2025 at 3:24 pm)
ePrint Report: Tight Generic PRF Security of HMAC and NMAC
Yaobin Shen, Xiangyang Zhang, Lei Wang, Dawu Gu
HMAC and its variant NMAC are among the most widely used methods for keying a cryptographic hash function to obtain a PRF or a MAC. Yet, even after nearly three decades of research, their generic PRF security remains poorly understood; in this setting, the compression function of the underlying hash function is treated as a black box that is accessible to the adversary. Although a series of works have exploited compression function queries to mount generic attacks, proving tight bounds on the generic PRF security of HMAC and NMAC has remained a challenging open question until now. In this paper, we establish tight bounds on the generic PRF security of HMAC and NMAC. Our bounds capture the influence of the number of construction queries, the number of compression function queries, and the maximal block length of a message on their security. The proofs are carried out in the multi-user setting, and the bounds hold regardless of the number of users. In addition, we present matching attacks to demonstrate that our bounds are essentially tight. Taken together, our results close a longstanding gap in the generic PRF security analysis of HMAC and NMAC.
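For concreteness, HMAC's two-layer keyed structure, $\mathrm{HMAC}(K, M) = H\big((K' \oplus \mathrm{opad}) \,\|\, H((K' \oplus \mathrm{ipad}) \,\|\, M)\big)$ with $K'$ the block-padded key, can be reproduced from its definition and checked against the standard library; the paper's bounds concern exactly this construction with the compression function treated as a black box.

    import hashlib
    import hmac

    def hmac_sha256(key: bytes, msg: bytes) -> bytes:
        block = 64                                    # SHA-256 block size
        if len(key) > block:
            key = hashlib.sha256(key).digest()        # hash overlong keys
        key = key.ljust(block, b"\x00")               # pad key K to K'
        ipad = bytes(b ^ 0x36 for b in key)
        opad = bytes(b ^ 0x5C for b in key)
        inner = hashlib.sha256(ipad + msg).digest()   # inner keyed hash
        return hashlib.sha256(opad + inner).digest()  # outer keyed hash

    assert hmac_sha256(b"k", b"m") == hmac.new(b"k", b"m", hashlib.sha256).digest()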
- TSS-PV: Traceable Secret Sharing with Public Verifiability (on December 18, 2025 at 3:24 pm)
ePrint Report: TSS-PV: Traceable Secret Sharing with Public Verifiability
Duc Anh Luong, Jong Hwan Park, Changmin Lee, Hyoseung Kim
High-value custodial systems require both Public Verifiability (PVSS) to audit key distribution and Traceability (TSS) to identify insider leakage via black-box "reconstruction boxes." Existing schemes achieve one property but not both, leaving practical systems exposed to either undetectable dealer misbehavior or untraceable share leakage. Combining these properties introduces the "Provenance Paradox": a verifiability-aware reconstruction box with access to verification predicates and public transcripts can reject dummy shares used for tracing because they have no provenance in the public transcript. We present TSS-PV, the first publicly verifiable traceable secret sharing scheme that resolves this paradox. Our key insight is to inject indistinguishable dummy shares during the sharing phase itself, ensuring they are committed to the public transcript before any reconstruction box is constructed. We formalize syntax and security under a modular adversarial model: public verifiability holds against fully malicious dealers and parties; traceability identifies leaking parties after honest distribution; and non-imputability prevents a malicious dealer from framing honest parties. Both tracing properties assume a verifiability-aware (perfect) reconstruction box. We instantiate TSS-PV over cyclic groups using Schnorr-based NIZKs and a recent generic tracing framework (CRYPTO'24). Public verification costs scale linearly in the number of parties; tracing costs are quadratic. A Curve25519 prototype on commodity hardware demonstrates practicality: for 32 to 256 parties, distribution verification completes in 14 to 127 ms, tracing in 0.24 to 76 s, and trace verification in 0.15 to 25 s.
- \textsc{Npir}: High-Rate PIR for Databases with Moderate-Size Records (on December 18, 2025 at 3:24 pm)
ePrint Report: \textsc{Npir}: High-Rate PIR for Databases with Moderate-Size Records
Yuliang Lin, Baosheng Wang, Yi Wang, Rongmao Chen
Private information retrieval (PIR) is a widely used technique in privacy-preserving applications that enables users to retrieve records from a database without revealing any information about their queries. This study focuses on a type of PIR that has a high ratio between the size of the record retrieved by the client and the size of the server's response. Although significant progress has been made on high-rate PIR in recent years, the computational overhead on the server side remains rather high. This results in low server throughput, particularly for applications involving databases with moderate-size records (i.e., tens of kilobytes), such as private advertising systems. In this paper, we present \textsc{Npir}, a high-rate single-server PIR that is based on NTRU encoding and outperforms the state-of-the-art Spiral (Menon \& Wu, S\&P 2022) and NTRUPIR (Xia \& Wang, EuroS\&P 2024) in terms of server throughput for databases with moderate-size records. Specifically, for databases ranging from 1 GB to 32 GB with 32 KB records, the server throughput of \textsc{Npir} is 1.50 to 2.84 times greater than that of Spiral and 1.77 to 2.55 times greater than that of NTRUPIR. To improve server throughput without compromising the high-rate feature, we propose a novel tool called NTRU packing, which compresses the constant terms of the underlying polynomials of multiple NTRU encodings into a single NTRU encoding, thereby reducing the size of the server's response. Furthermore, \textsc{Npir} naturally supports batch processing for moderate-size records and can easily handle retrieval of records of varying sizes.
- On the Equivalence of Polynomial Commitments for an Identical Polynomial under Different Bases (on December 18, 2025 at 3:24 pm)
ePrint Report: On the Equivalence of Polynomial Commitments for an Identical Polynomial under Different Bases
Dengji Ma, Jingyu Ke, Sinka Gao, Guoqiang Li
We propose a Pairing-based Polynomial Consistency Protocol (PPCP) that verifies the equivalence of polynomial commitments generated under different basis representations, such as the coefficient and Lagrange bases. By leveraging pairing relations, PPCP proves that two commitments correspond to an identical underlying polynomial vector without revealing the polynomial itself. This enables efficient proof aggregation and recursive composition across heterogeneous SNARK systems that adopt distinct polynomial encodings.
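For intuition (our notation, not the paper's): a polynomial $f$ of degree less than $d$ can be written in the coefficient basis or in the Lagrange basis over an evaluation domain $\{\omega^j\}_{j=0}^{d-1}$,

\[
f(X) \;=\; \sum_{i=0}^{d-1} c_i X^i \;=\; \sum_{j=0}^{d-1} f(\omega^j)\, L_j(X),
\]

so a commitment computed from $(c_i)$ against a powers-of-$\tau$ reference string and one computed from $(f(\omega^j))$ against the corresponding Lagrange reference string $\big([L_j(\tau)]_1\big)$ encode the same value $f(\tau)$; PPCP's pairing relations certify this equality without revealing $f$.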
- Scalable Private Set Intersection over Distributed Encrypted Data (on December 18, 2025 at 1:48 am)
ePrint Report: Scalable Private Set Intersection over Distributed Encrypted Data
Seunghun Paik, Nirajan Koirala, Jack Nero, Hyunjung Son, Yunki Kim, Jae Hong Seo, Taeho Jung
Finding intersections across sensitive data is a core operation in many real-world data-driven applications, such as healthcare, anti-money laundering, financial fraud, or watchlist applications. These applications often require large-scale collaboration across thousands or more independent sources, such as hospitals, financial institutions, or identity bureaus, where all records must remain encrypted during storage and computation and are typically outsourced to dedicated or cloud servers. Such a highly distributed, large-scale, and encrypted setting makes it very challenging to apply existing solutions, e.g., (multi-party) private set intersection (PSI) or private membership tests (PMT). In this paper, we present Distributed and Outsourced PSI (DO-PSI), an efficient and scalable PSI protocol over outsourced, encrypted, and highly distributed datasets. Our key technique lies in a generic threshold fully homomorphic encryption (FHE) based framework that aggregates equality results additively, which ensures high scalability to a large number of data sources. In addition, we propose a novel technique called \textit{nonzero-preserving mapping}, which maps a zero vector to zero and preserves nonzero values. This allows homomorphic equality tests over a smaller base field, substantially reducing computation while enabling higher-precision representations. We implement DO-PSI and conduct extensive experiments, showing that it substantially outperforms existing methods in both computation and communication overheads. Our protocol handles a billion-scale set distributed and outsourced to a thousand data owners within one minute, directly reflecting large-scale deployment scenarios, and achieves up to an 11.16$\times$ improvement in end-to-end latency over prior state-of-the-art methods.
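A plaintext simulation of a Fermat-style homomorphic equality test with additive aggregation, a standard idiom in FHE-based PSI (our illustration; the paper's nonzero-preserving mapping is what lets such tests run over a smaller base field): by Fermat's little theorem, $(x-y)^{p-1} \bmod p$ is $0$ iff $x = y$ and $1$ otherwise.

    p = 65537  # illustrative prime, not the paper's parameters

    def eq(x: int, y: int) -> int:
        # 1 if x == y, else 0 (a power and a subtraction, both FHE-friendly)
        return (1 - pow(x - y, p - 1, p)) % p

    query = 42
    owners = [[7, 42], [13], [42, 99]]   # three "data owners"
    matches = sum(eq(query, s) for owner in owners for s in owner) % p
    assert matches == 2                  # the query matches at two owners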




