International Association for Cryptologic Research
- The AIIP Problem: Toward a Post-Quantum Hardness Assumption from Affine Iterated Inversion over Finite Fields (September 5, 2025, 11:42 am)
ePrint Report: The AIIP Problem: Toward a Post-Quantum Hardness Assumption from Affine Iterated Inversion over Finite Fields MINKA MI NGUIDJOI Thierry Emmanuel We introduce the Affine Iterated Inversion Problem (AIIP), a new candidate hard problem for post-quantum cryptography, based on inverting iterated polynomial maps over finite fields. Given a polynomial f ∈ F_q[x] of degree d ≥ 2, an iteration parameter n, and a target y ∈ F_q, AIIP requires finding an input x such that f^(n)(x) = y, where f^(n) denotes the n-fold composition of f. We establish the computational hardness of AIIP through two independent analytical frameworks: first, by establishing a formal connection to the Discrete Logarithm Problem in the Jacobian of hyperelliptic curves of exponentially large genus; second, via a polynomial-time reduction to solving structured systems of multivariate quadratic (MQ) equations. The first construction provides number-theoretic evidence for hardness by embedding an AIIP instance into the arithmetic of a high-genus curve, while the second reduction proves worst-case hardness relative to the NP-hard MQ problem. For the quadratic case f(x) = x² + α, we show that the induced MQ system is heuristically indistinguishable from a random system, and we formalize a sufficient condition for its pseudorandomness under a standard cryptographic assumption. We provide a detailed security analysis against classical and quantum attacks, derive concrete parameters for standard security levels, and discuss the potential of AIIP as a foundation for digital signatures and public-key encryption. This dual hardness foundation, rooted in both algebraic geometry and multivariate algebra, positions AIIP as a versatile and promising primitive for post-quantum cryptography.
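The forward direction of AIIP is easy to compute, which is what makes the inversion problem well posed. As a toy sketch (the parameter values below are ours for illustration only, far below cryptographic size, and not taken from the paper), the iterated quadratic map f(x) = x² + α over F_q looks like:

```python
# Toy sketch of the AIIP forward map for f(x) = x^2 + alpha over F_q.
# q, alpha, n are illustrative values only, far below cryptographic size.

def iterate_poly(x: int, alpha: int, n: int, q: int) -> int:
    """Compute f^(n)(x) mod q for f(x) = x^2 + alpha."""
    for _ in range(n):
        x = (x * x + alpha) % q
    return x

q, alpha, n = 101, 7, 10
y = iterate_poly(5, alpha, n, q)

# Forward evaluation is cheap; AIIP asks for a preimage of y under f^(n).
# Exhaustive search works only because q is tiny here.
preimages = [x for x in range(q) if iterate_poly(x, alpha, n, q) == y]
assert 5 in preimages
```

Brute-force inversion succeeds here only because q is tiny; for cryptographic parameters the preimage search space is exponential.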
- HE-SecureNet: An Efficient and Usable Framework for Model Training via Homomorphic Encryption (September 5, 2025, 11:42 am)
ePrint Report: HE-SecureNet: An Efficient and Usable Framework for Model Training via Homomorphic Encryption Thomas Schneider, Huan-Chih Wang, Hossein Yalame Energy-efficient edge devices are essential for the widespread deployment of machine learning (ML) services. However, their limited computational capabilities make local model training infeasible. While cloud-based training offers a scalable alternative, it raises serious privacy concerns when sensitive data is outsourced. Homomorphic Encryption (HE) enables computation directly on encrypted data and has emerged as a promising solution to this privacy challenge. Yet, current HE-based training frameworks face several shortcomings: they often lack support for complex models and non-linear functions, struggle to train over multiple epochs, and require cryptographic expertise from end users. We present HE-SecureNet, a novel framework for privacy-preserving model training on encrypted data in a single-client–server setting, using hybrid HE cryptosystems. Unlike prior HE-based solutions, HE-SecureNet supports advanced models such as Convolutional Neural Networks and handles non-linear operations including ReLU, Softmax, and MaxPooling. It introduces a level-aware training strategy that eliminates costly ciphertext level alignment across epochs. Furthermore, HE-SecureNet automatically converts ONNX models into optimized secure C++ training code, enabling seamless integration into privacy-preserving ML pipeline—without requiring cryptographic knowledge. Experimental results demonstrate the efficiency and practicality of our approach. On the Breast Cancer dataset, HE-SecureNet achieves a 5.2× speedup and 33% higher accuracy compared to ConcreteML (Zama) and TenSEAL (OpenMined). On the MNIST dataset, it reduces CNN training latency by 2× relative to Glyph (Lou et al., NeurIPS’20), and cuts communication overhead by up to 66× on MNIST and 42× on CIFAR-10 compared to MPC-based solutions.
- MegaBlocks: Breaking the Logarithmic I/O-Overhead Barrier for Oblivious RAM (September 5, 2025, 11:42 am)
ePrint Report: MegaBlocks: Breaking the Logarithmic I/O-Overhead Barrier for Oblivious RAM Gilad Asharov, Eliran Eiluz, Ilan Komargodski, Wei-Kai Lin Oblivious RAM (ORAM) is a central cryptographic primitive that enables secure memory access while hiding access patterns. Among existing ORAM paradigms, hierarchical ORAMs were long considered impractical despite their asymptotic optimality. However, recent advancements (FutORAMa, CCS’23) demonstrate that hierarchical ORAM-based schemes can be made efficient given sufficient client-side memory. In this work, we present a new hierarchical ORAM construction that achieves practical performance without requiring large local memory. From a theoretical standpoint, we identify a gap in the literature concerning the asymmetric setting, where the logical word size is asymptotically smaller than the physical memory block size. In this scenario, the best-known construction (OptORAMa, J. ACM ’23) turns every logical query into $O(\log N)$ physical memory accesses (a quantity known as “I/O overhead”), whereas the lower bound of Komargodski and Lin (CRYPTO’21) implies that $\Omega(\log N /\log\log N)$ accesses are needed. We close this gap by constructing an optimal ORAM for the asymmetric setting, achieving an I/O overhead of $O(\log N / \log\log N)$. Our construction features exceptionally small constants (between 1 and 4, depending on the block size) and operates without requiring large local memory. We implement our scheme and compare it to PathORAM (CCS’13) and FutORAMa, demonstrating significant improvements. For 1TB logical memory, our construction obtains a $\times 10$-$\times 30$ reduction in I/O overhead and bandwidth compared to PathORAM, and a $\times 7$–$\times 26$ improvement over FutORAMa. This improvement reflects the fact that those schemes were not designed to operate on large blocks, as in our setting; the exact factor depends on the physical block size and the local memory available.
- Query-Optimal IOPPs for Linear-Time Encodable Codes (September 5, 2025, 11:36 am)
ePrint Report: Query-Optimal IOPPs for Linear-Time Encodable Codes Anubhav Baweja, Pratyush Mishra, Tushar Mopuri, Matan Shtepel We present the first IOPP for a linear-time encodable code that achieves linear prover time and $O(\lambda)$ query complexity, for a broad range of security parameters $\lambda$. No prior work is able to simultaneously achieve this efficiency: it either supports linear-time encodable codes but with worse query complexity [FICS; ePrint 2025], or achieves $O(\lambda)$ query complexity but only for quasilinear-time encodable codes [Minzer, Zheng; FOCS 2025]. Furthermore, we prove a matching lower bound that shows that the query complexity of our IOPP is asymptotically optimal (up to additive factors) for codes with constant rate. We obtain our result by tackling a ubiquitous subproblem in IOPP constructions: checking that a batch of claims holds. Our novel solution to this subproblem is twofold. First, we observe that it is often sufficient to ensure that, with all but negligible probability, most of the claims hold. Next, we devise a new ‘lossy batching’ technique which convinces a verifier of the foregoing promise with lower query complexity than that required to convince it that all the claims hold. This method differs significantly from the line-versus-point test used to achieve query-optimal IOPPs (for quasilinear-time encodable codes) in prior work [Minzer, Zheng; FOCS 2025], and may be of independent interest. Our IOPP can handle all codes that support efficient codeswitching [Ron-Zewi, Rothblum; JACM 2024], including several linear-time encodable codes. Via standard techniques, our IOPP can be used to construct the first (to the best of our knowledge) IOP for NP with $O(n)$ prover time and $O(\lambda)$ query complexity.
We additionally show that our IOPP (and by extension the foregoing IOP) is round-by-round tree-extractable and hence can be used to construct a SNARK in the random oracle model with $O(n)$ prover time and $O(\lambda \log n)$ proof size.
- Symmetric Group-Based Public-Key Cryptosystem with Large Prime Moduli (September 5, 2025, 11:36 am)
ePrint Report: Symmetric Group-Based Public-Key Cryptosystem with Large Prime Moduli Kaveh Dastouri We introduce a novel public-key cryptosystem based on the symmetric groups $S_{p_1} \times S_{p_2}$, where $p_1, p_2$ are large primes. The modulus $N = f(\lambda_1) \cdot f(\lambda_2)$, with partitions $\lambda_1 \in P(p_1)$, $\lambda_2 \in P(p_2)$, and $f(\lambda_i) = |C_{\lambda_i}| \cdot m_1(\lambda_i)$, leverages conjugacy class sizes to ensure large prime factors, including $p_1, p_2$. A partition selection strategy using non-repeated composition numbers guarantees robust security, surpassing RSA by supporting multiple large primes and deterministic key generation. Efficient decryption is achieved via known factorizations, and a lightweight symmetric hash primitive provides message authentication. We provide rigorous security analysis, practical implementation, and comparisons to multi-prime RSA, advancing algebraic cryptography for modern applications.
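The quantity $f(\lambda_i) = |C_{\lambda_i}| \cdot m_1(\lambda_i)$ above is built from the size of a conjugacy class in $S_n$, which for cycle type $\lambda$ is the standard $n!/\prod_k k^{m_k}\,m_k!$. A minimal sketch of that ingredient (function names and example partitions are ours, not the paper's):

```python
# Conjugacy-class-size ingredient: for a partition lambda of n with m_k parts
# equal to k, |C_lambda| = n! / prod_k (k^{m_k} * m_k!).  The helper names and
# the example partitions below are ours, not the paper's.
from collections import Counter
from math import factorial

def conjugacy_class_size(partition):
    n = sum(partition)
    denom = 1
    for k, m_k in Counter(partition).items():
        denom *= (k ** m_k) * factorial(m_k)
    return factorial(n) // denom

def f_lambda(partition):
    # f(lambda) = |C_lambda| * m_1(lambda), as in the abstract
    return conjugacy_class_size(partition) * list(partition).count(1)

# lambda = (3,1,1) in S_5: there are 20 three-cycles, and m_1 = 2.
assert conjugacy_class_size([3, 1, 1]) == 20
assert f_lambda([3, 1, 1]) == 40
```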
- Secure Agents (September 5, 2025, 11:30 am)
ePrint Report: Secure Agents Nakul Khambhati, Joonwon Lee, Gary Song, Rafail Ostrovsky, Sam Kumar Organizations increasingly need to pool their sensitive data for collaborative computation while keeping their own data private from each other. One approach is to use a family of cryptographic protocols called Secure Multi-Party Computation (MPC). Another option is to use a set of cloud services called clean rooms. Unfortunately, neither approach is satisfactory. MPC is orders of magnitude more resource-intensive than regular computation, making it impractical for workloads like data analytics and AI. Clean rooms do not give users the flexibility to perform arbitrary computations. We propose and develop an approach and system called a secure agent and utilize it to create a virtual clean room, Flexroom, that is both performant and flexible. Secure agents enable parties to create a phantom identity that they can collectively control, using maliciously secure MPC, which issues API calls to external services with parameters that remain secret from all participating parties. Importantly, in Flexroom, the secure agent uses MPC not to perform the computation itself, but instead merely to orchestrate the computation in the cloud, acting as a distinct trusted entity jointly governed by all parties. As a result, Flexroom enables collaborative computation with unfettered flexibility, including the ability to use convenient cloud services. By design, the collaborative computation runs at plaintext speeds, so the overhead of Flexroom will be amortized over a long computation.
- A Note on Feedback-PRF Mode of KDF from NIST SP 800-108 (September 5, 2025, 11:30 am)
ePrint Report: A Note on Feedback-PRF Mode of KDF from NIST SP 800-108 Ritam Bhaumik, Avijit Dutta, Tetsu Iwata, Ashwin Jha, Kazuhiko Minematsu, Mridul Nandi, Yu Sasaki, Meltem Sönmez Turan, Stefano Tessaro We consider FB-PRF, one of the key derivation functions defined in NIST SP 800-108 constructed from a pseudorandom function in a feedback mode. The standard allows some flexibility in the specification, and we show that one specific instance of FB-PRF allows an efficient distinguishing attack.
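For context, SP 800-108's feedback mode chains each PRF output into the next PRF input. Below is a minimal sketch with HMAC-SHA256 as the PRF; the standard permits several encodings and orderings of the fixed input data, so this follows one common reading (4-byte counter and length fields) and is not necessarily the specific instance the note attacks:

```python
# Minimal sketch of an SP 800-108 feedback-mode KDF with HMAC-SHA256 as the
# PRF.  Field encodings/ordering vary across permitted instantiations; this
# uses one common reading (4-byte big-endian counter and bit-length fields)
# and is for illustration only.
import hashlib
import hmac

def kdf_feedback(k_in: bytes, label: bytes, context: bytes,
                 length: int, iv: bytes = b"") -> bytes:
    out, prev, i = b"", iv, 1
    while len(out) < length:
        data = (prev + i.to_bytes(4, "big") + label + b"\x00" + context
                + (length * 8).to_bytes(4, "big"))
        prev = hmac.new(k_in, data, hashlib.sha256).digest()  # feedback value
        out += prev
        i += 1
    return out[:length]
```

The distinguishing attack in the note exploits flexibility in how such an instance may be specified; the sketch only fixes one arbitrary choice.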
- Cryptanalysis of ChiLow with Cube-Like Attacks (September 5, 2025, 1:06 am)
ePrint Report: Cryptanalysis of ChiLow with Cube-Like Attacks Shuo Peng, Jiahui He, Kai Hu, Zhongfeng Niu, Shahram Rasoolzadeh, Meiqin Wang Proposed in EUROCRYPT 2025, ChiLow is a family of tweakable block ciphers and a related PRF built on the novel nonlinear χχ function, designed to enable efficient and secure embedded code encryption. The only key-recovery results on ChiLow are due to the designers and reach at most 4 out of 8 rounds, which is not enough for a low-latency cipher like ChiLow: more cryptanalysis effort is needed. Exploiting the low degree of the χχ function, we present three kinds of cube-like attacks on ChiLow-32 under both single-tweak and multi-tweak settings, including:
- a conditional cube attack in the multi-tweak setting, which enables full key recovery for 5-round and 6-round instances with time complexities $2^{32}$ and $2^{120}$, data complexities $2^{23.58}$ and $2^{40}$, and negligible memory requirements, respectively;
- a borderline cube attack in the multi-tweak setting, which recovers the full key of 5-round ChiLow-32 with time, data, and memory complexities of $2^{32}$, $2^{18.58}$, and $2^{33.56}$, respectively, and of 6-round ChiLow-32 with time, data, and memory complexities of $2^{34}$, $2^{33.58}$, and $2^{54.28}$, respectively; both attacks are practical;
- an integral attack on 7-round ChiLow-32 in the single-tweak setting: by combining a 4-round borderline cube with three additional rounds, we reduce the round-key search space from $2^{96}$ to $2^{73}$; moreover, we present a method to recover the master key from round-key information, allowing us to recover the master key for 7-round ChiLow-32 with a time complexity of $2^{127.78}$.
All of our attacks respect the security claims made by the designers.
Though our analysis does not compromise the security of the full 8-round ChiLow, we hope that our results offer valuable insights into its security properties.
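For readers unfamiliar with cube attacks: summing a low-degree Boolean function over all values of a chosen set of public variables (the cube) cancels every monomial not divisible by the full cube product, leaving its superpoly in the remaining (key) variables. A toy example (the polynomial below is ours and unrelated to ChiLow):

```python
# Toy cube-sum illustration (not ChiLow itself).  Summing f over the cube
# {x0, x1} cancels every monomial not divisible by x0*x1, leaving the
# superpoly of x0*x1 — here simply k0 — as a function of the key bits.
from itertools import product

def f(x0, x1, x2, k0, k1):
    # toy polynomial over GF(2): x0*x1*k0 + x0*k1 + x2 + k0*k1
    return (x0 & x1 & k0) ^ (x0 & k1) ^ x2 ^ (k0 & k1)

def cube_sum(k0, k1):
    # XOR f over all assignments of the cube variables x0, x1 (x2 fixed to 0)
    s = 0
    for x0, x1 in product((0, 1), repeat=2):
        s ^= f(x0, x1, 0, k0, k1)
    return s

# The cube sum reveals k0 for every key, which is what a cube attack exploits.
for k0, k1 in product((0, 1), repeat=2):
    assert cube_sum(k0, k1) == k0
```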
- Breaking Omertà: On Threshold Cryptography, Smart Collusion, and Whistleblowing (September 5, 2025, 1:06 am)
ePrint Report: Breaking Omertà: On Threshold Cryptography, Smart Collusion, and Whistleblowing Mahimna Kelkar, Aadityan Ganesh, Aditi Partap, Joseph Bonneau, S. Matthew Weinberg Cryptographic protocols often make honesty assumptions—e.g., fewer than $t$ out of $n$ participants are adversarial. In practice, these assumptions can be hard to ensure, particularly given monetary incentives for participants to collude and deviate from the protocol. In this work, we explore combining techniques from cryptography and mechanism design to discourage collusion. We formalize protocols in which colluders submit a cryptographic proof to whistleblow against their co-conspirators, revealing the dishonest behavior publicly. We provide general results on the cryptographic feasibility, and show how whistleblowing fits a number of applications including secret sharing, randomness beacons, and anonymous credentials. We also introduce smart collusion—a new model for players to collude. Analogous to blockchain smart contracts, smart collusion allows colluding parties to arbitrarily coordinate and impose penalties on defectors (e.g., those that blow the whistle). We show that unconditional security is impossible against smart colluders even when whistleblowing is anonymous and can identify all colluding players. On the positive side, we construct a whistleblowing protocol that requires only a small deposit and can protect against smart collusion even with roughly $t$ times larger deposit.
- Compact Lattice-Coded (Multi-Recipient) Kyber without CLT Independence Assumption (September 5, 2025, 1:06 am)
ePrint Report: Compact Lattice-Coded (Multi-Recipient) Kyber without CLT Independence Assumption Shuiyin Liu, Amin Sakzad This work presents a joint design of encoding and encryption procedures for public-key encryption (PKE) schemes and key encapsulation mechanisms (KEMs) such as Kyber, without relying on the assumption of independent decoding noise components, achieving reductions in both communication overhead (CER) and decryption failure rate (DFR). Our design features two techniques: ciphertext packing and lattice packing. First, we extend the Peikert-Vaikuntanathan-Waters (PVW) method to Kyber: $\ell$ plaintexts are packed into a single ciphertext. This scheme is referred to as P$_\ell$-Kyber. We prove that P$_\ell$-Kyber is IND-CCA secure under the M-LWE hardness assumption. We show that the decryption decoding noise entries across the $\ell$ plaintexts (also known as layers) are mutually independent. Second, we propose a cross-layer lattice encoding scheme for P$_\ell$-Kyber, where every $\ell$ cross-layer information symbols are encoded to a lattice point. This way we obtain a \emph{coded} P$_\ell$-Kyber, where the decoding noise entries for each lattice point are mutually independent. Therefore, the DFR analysis does not require the assumption of independence among the decryption decoding noise entries. Both DFR and CER are greatly decreased thanks to ciphertext packing and lattice packing. We demonstrate that with $\ell=24$ and the Leech lattice encoder, the proposed coded P$_\ell$-KYBER1024 achieves DFR $<2^{-281}$ and CER $= 4.6$, i.e., a decrease in CER of $90\%$ compared to KYBER1024. If minimizing CPU runtime is the priority, our C implementation shows that the E8 encoder provides the best trade-off among runtime, CER, and DFR. Additionally, for a fixed plaintext size matching that of standard Kyber ($256$ bits), we introduce a truncated variant of P$_\ell$-Kyber that deterministically removes ciphertext components carrying surplus information bits.
Using $\ell=8$ and E8 lattice encoder, we show that the proposed truncated coded P$_\ell$-KYBER1024 achieves a $10.2\%$ reduction in CER and improves DFR by a factor of $2^{30}$ relative to KYBER1024. Finally, we demonstrate that constructing a multi-recipient PKE and a multi-recipient KEM (mKEM) using the proposed truncated coded P$_\ell$-KYBER1024 results in a $20\%$ reduction in bandwidth consumption compared to the existing schemes.
- LEAF: Compact and Efficient Blind Signature from Code-based Assumptions (September 5, 2025, 1:06 am)
ePrint Report: LEAF: Compact and Efficient Blind Signature from Code-based Assumptions Yi-Fu Lai, Edoardo Persichetti Recently, Hanzlik, Lai, Paracucchi, Slamanig, and Tang proposed several blind signature frameworks, collectively named Tanuki(s) (Asiacrypt’25), built upon cryptographic group actions. Their work introduces novel techniques and culminates in a concurrently secure blind signature framework. Straightforward instantiations based on CSIDH (CSI-FiSh) and LESS yield signature sizes of 4.5 KB and 64 KB respectively, providing the first efficient blind signatures in the isogeny-based and code-based literature allowing concurrent executions. In this work, we improve the code-based instantiation through a careful treatment of the canonical form of linearly equivalent codes. However, the canonical form does not naturally support a group action structure, which is central to the security proofs of Tanuki(s); consequently, the original security guarantees do not directly apply. To address this, we develop two distinct non-black-box reductions, for blindness and for one-more unforgeability, showing that the improvements do not compromise security. This results in a concurrently secure code-based blind signature scheme with a compact signature size of 4.4 KB, approximately 1% smaller than the isogeny-based one. We also provide a C implementation in which signing takes 99 ms (268 Mcycles) on a 2.3 GHz Intel i7 CPU. We expect our approach to also benefit advanced constructions built on top of LESS in the future.
- IronDict: Transparent Dictionaries from Polynomial Commitments (September 5, 2025, 1:06 am)
ePrint Report: IronDict: Transparent Dictionaries from Polynomial Commitments Hossein Hafezi, Alireza Shirzad, Benedikt Bünz, Joseph Bonneau We present IronDict, a transparent dictionary construction based on polynomial commitment schemes. Transparent dictionaries enable an untrusted server to maintain a mutable dictionary and provably serve clients’ lookup queries. A major open challenge is supporting efficient auditing by lightweight clients. Previous solutions either incurred high server costs (limiting throughput) or high client lookup verification costs, keeping them out of reach for modern messaging key transparency deployments with billions of users. Our construction makes black-box use of a generic multilinear polynomial commitment scheme and inherits its security notions, i.e., binding and zero-knowledge. We implement our construction with the recent KZH scheme and find that a dictionary with $1$ billion entries can be verified on a consumer-grade laptop in $35$ ms, a $300\times$ improvement over the state of the art, while also achieving $150{,}000\times$ smaller proofs ($8$ KB). In addition, our construction ensures perfect privacy with concretely efficient costs for both the client and the server. We also show fast-forwarding techniques based on incremental verifiable computation (IVC) and checkpoints to enable even faster client auditing.
- PriSrv+: Privacy and Usability-Enhanced Wireless Service Discovery with Fast and Expressive Matchmaking Encryption (September 5, 2025, 1:06 am)
ePrint Report: PriSrv+: Privacy and Usability-Enhanced Wireless Service Discovery with Fast and Expressive Matchmaking Encryption Yang Yang, Guomin Yang, Yingjiu Li, Pengfei WU, Rui Shi, Minming Huang, Jian Weng, HweeHwa Pang, Robert H. Deng Service discovery is a fundamental process in wireless networks, enabling devices to find and communicate with services dynamically, and is critical for the seamless operation of modern systems like 5G and IoT. This paper introduces PriSrv+, an advanced privacy and usability-enhanced service discovery protocol for modern wireless networks and resource-constrained environments. PriSrv+ builds upon PriSrv (NDSS’24) by addressing critical limitations in expressiveness, privacy, scalability, and efficiency, while maintaining compatibility with widely-used wireless protocols such as mDNS, BLE, and Wi-Fi. A key innovation in PriSrv+ is the development of Fast and Expressive Matchmaking Encryption (FEME), the first matchmaking encryption scheme capable of supporting expressive access control policies with an unbounded attribute universe, allowing any arbitrary string to be used as an attribute. FEME significantly enhances the flexibility of service discovery while ensuring robust message and attribute privacy. Compared to PriSrv, PriSrv+ optimizes cryptographic operations, achieving 7.62$\times$ faster encryption and 6.23$\times$ faster decryption, and dramatically reduces ciphertext sizes by 87.33$\%$. In addition, PriSrv+ reduces communication costs by 87.33$\%$ for service broadcast and 86.64$\%$ for anonymous mutual authentication compared with PriSrv. Formal security proofs confirm the security of FEME and PriSrv+. Extensive evaluations on multiple platforms demonstrate that PriSrv+ achieves superior performance, scalability, and efficiency compared to existing state-of-the-art protocols.
- Compressed verification for post-quantum signatures with long-term public keys (September 5, 2025, 1:00 am)
ePrint Report: Compressed verification for post-quantum signatures with long-term public keys Gustavo Banegas, Anaëlle Le Dévéhat, Benjamin Smith Many signature applications—such as root certificates, secure software updates, and authentication protocols—involve long-lived public keys that are transferred or installed once and then used for many verifications. This key longevity makes post-quantum signature schemes with conservative assumptions (e.g., structure-free lattices) attractive for long-term security. But many such schemes, especially those with short signatures, suffer from extremely large public keys. Even in scenarios where bandwidth is not a major concern, large keys increase storage costs and slow down verification. We address this with a method to replace large public keys in GPV-style signatures with smaller, private verification keys. This significantly reduces verifier storage and runtime while preserving security. Applied to the conservative, short-signature schemes Wave and Squirrels, our method compresses Squirrels-I keys from 665 kB to 20.7 kB and Wave822 keys from 3.5 MB to 207.97 kB.
- Simple threshold decryption secure against adaptive corruptions (September 5, 2025, 1:00 am)
ePrint Report: Simple threshold decryption secure against adaptive corruptions Victor Shoup We present a practical, non-interactive threshold decryption scheme. It can be proven CCA secure with respect to adaptive corruptions in the random oracle model under a standard computational assumption, namely, the DDH assumption. Our scheme, called TDH2a, is a minor tweak on the TDH2 scheme presented by Shoup and Gennaro at Eurocrypt 1998, which was proven secure against static corruptions under the same assumptions. The design and analysis of TDH2a are based on a straightforward extension of the simple information-theoretic argument underlying the security of the Cramer-Shoup encryption scheme presented at Crypto 1998.
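Non-interactive threshold decryption schemes in this family distribute the secret key with Shamir secret sharing, so that any $t$ servers can contribute decryption shares. A minimal sketch of that sharing layer only (toy field size; this is not the TDH2a scheme itself):

```python
# Minimal t-of-n Shamir secret sharing over a prime field — the standard
# substrate under threshold decryption.  Toy field, illustration only.
import random

P = 2**127 - 1  # a Mersenne prime; real schemes fix the field by the group used

def share(secret, t, n):
    """Split secret into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

s = random.randrange(P)
shares = share(s, t=3, n=5)
assert reconstruct(shares[:3]) == s   # any 3 of the 5 shares suffice
```

In TDH2-style schemes the shared value is a discrete-log secret key and servers combine decryption shares rather than the key itself; the polynomial arithmetic above is only the underlying mechanism.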
- A Template SCA Attack on the Kyber/ML-KEM Pair-Pointwise Multiplication (September 5, 2025, 1:00 am)
ePrint Report: A Template SCA Attack on the Kyber/ML-KEM Pair-Pointwise Multiplication Sedric Nkotto Kyber, a.k.a. ML-KEM, has been standardized by NIST under FIPS-203 and will in the coming years be implemented in several commercial products. However, the resilience of implementations against side-channel attacks is still an open and practical concern. One of the drawbacks of ongoing side-channel analysis research on PQC schemes is the limited availability of open-source datasets. Fortunately, some open-source datasets are starting to appear, for instance the one recently published by Rezaeezade et al. in [2]. This dataset captures power consumption during the pair-pointwise multiplication that occurs in the course of the ML-KEM decapsulation process and involves the decapsulation (sub)key and ciphertexts. In this paper we present a template side-channel attack targeting that operation, which yields a complete recovery of the decapsulation secret (sub)key.
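The core of a template attack is profiling: fit a statistical model per secret-dependent class from traces captured with known keys, then classify attack traces by maximum likelihood. A generic single-point sketch (the values and names are ours, not from the dataset or the paper, and real attacks use multivariate templates over many trace points):

```python
# Generic single-point template attack sketch (illustration only, not the
# paper's attack on ML-KEM): per-class Gaussian templates from profiling
# traces, then maximum-likelihood classification of an attack sample.
from math import log, pi
from statistics import mean, pstdev

def build_templates(profiling):
    # one (mean, std) template per secret-dependent class;
    # the std is floored to keep the log-likelihood finite
    return {cls: (mean(xs), max(pstdev(xs), 1e-9))
            for cls, xs in profiling.items()}

def classify(sample, templates):
    def loglik(mu, sd):
        return -log(sd) - 0.5 * log(2 * pi) - (sample - mu) ** 2 / (2 * sd ** 2)
    return max(templates, key=lambda c: loglik(*templates[c]))

# Toy single-point traces: class 0 leaks around 1.0, class 1 around 3.0.
profiling = {0: [0.9, 1.0, 1.1, 1.05], 1: [2.9, 3.0, 3.1, 2.95]}
tpl = build_templates(profiling)
assert classify(1.02, tpl) == 0
assert classify(2.97, tpl) == 1
```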
- TACITA: Threshold Aggregation without Client Interaction (September 5, 2025, 1:00 am)
ePrint Report: TACITA: Threshold Aggregation without Client Interaction Varun Madathil, Arthur Lazzaretti, Zeyu Liu, Charalampos Papamanthou Secure aggregation enables a central server to compute the sum of client inputs without learning any individual input, even in the presence of dropouts or partial participation. This primitive is fundamental to privacy-preserving applications such as federated learning, where clients collaboratively train models without revealing raw data. We present a new secure aggregation protocol, TACITA, in the single-server setting that satisfies four critical properties simultaneously: (1) one-shot communication from clients with no per-instance setup, (2) input soundness, i.e., the server cannot manipulate the ciphertexts, (3) constant-size communication per client, independent of the number of participants per instance, and (4) robustness to client dropouts. Previous secure aggregation works that achieve one-shot communication, Willow and OPA (CRYPTO’25), do not provide input soundness and allow the server to manipulate the aggregation. They consequently do not achieve full privacy and, at best, only differential privacy guarantees. We achieve full privacy at the cost of assuming a PKI. Specifically, TACITA relies on a novel cryptographic primitive we introduce and realize: succinct multi-key linearly homomorphic threshold signatures (MKLHTS), which enables verifiable aggregation of client-signed inputs with constant-size signatures. To encrypt client inputs, we adapt the Silent Threshold Encryption (STE) scheme of Garg et al. (CRYPTO 2024) to support ciphertext-specific decryption and additive homomorphism. We formally prove security in the Universal Composability framework and demonstrate practicality through an open-source proof-of-concept implementation, showing our protocol achieves scalability without sacrificing efficiency or requiring new trust assumptions.
- BitPriv: A Privacy-Preserving Protocol for DeFi Applications on Bitcoin (September 5, 2025, 1:00 am)
ePrint Report: BitPriv: A Privacy-Preserving Protocol for DeFi Applications on Bitcoin Ioannis Alexopoulos, Zeta Avarikioti, Paul Gerhart, Matteo Maffei, Dominique Schröder Bitcoin secures over a trillion dollars in assets but remains largely absent from decentralized finance (DeFi) due to its restrictive scripting language. The emergence of BitVM, which enables verification of arbitrary off-chain computations via on-chain fraud proofs, opens the door to expressive Bitcoin-native applications without altering consensus rules. A key challenge for smart contracts executed on a public blockchain, however, is the privacy of data: for instance, bid privacy is crucial in auctions and transaction privacy is leveraged in MEV-mitigation techniques such as proposer-builder separation. While different solutions have been recently proposed for Ethereum, these are not applicable to Bitcoin. In this work, we present BitPriv, the first Bitcoin-compatible protocol to condition payments based on the outcome of a secure two-party computation (2PC). The key idea is to let parties lock collateral on-chain and to evaluate a garbled circuit off-chain: a cut-and-choose mechanism deters misbehavior and any violation can be proven and punished on-chain via BitVM. This design achieves security against rational adversaries, ensuring that deviation is irrational under financial penalties. We showcase the new class of applications enabled by BitPriv as well as evaluate its performance through a privacy-preserving double-blind marketplace in Bitcoin. In the optimistic case, settlement requires only two transactions and under \$3 in fees; disputes are more expensive (≈\$507) with their cost tied to the specific BitVM implementation, but their mere feasibility acts as a strong deterrent. BitPriv provides a blueprint for building enforceable, privacy-preserving DeFi primitives on Bitcoin without trusted hardware, sidechains, or protocol changes.
- Information-Theoretic Random-Index PIR (September 5, 2025, 12:54 am)
ePrint Report: Information-Theoretic Random-Index PIR Sebastian Kolby, Lawrence Roy, Jure Sternad, Sophia Yakoubov A Private Information Retrieval (PIR) protocol allows a client to learn the $i$th row of a database held by one or more servers, without revealing $i$ to the servers. A Random-Index PIR (RPIR) protocol, introduced by Gentry et al. (TCC 2021), is a PIR protocol where, instead of being chosen by the client, $i$ is random. This has applications in e.g. anonymous committee selection. Both PIR and RPIR protocols are interesting only if the communication complexity is smaller than the database size; otherwise, the trivial solution where the servers send the entire database suffices. Unlike PIR, where the client must send at least one message (to encode information about $i$), RPIR can be executed in a single round of server-to-client communication. In this paper, we study such one-round, information-theoretic RPIR protocols. The only known construction in this setting is SimpleMSRPIR (Gentry et al.), which requires the servers to communicate approximately $\frac{N}{2}$ bits, $N$ being the database size. We show an $\Omega(\sqrt{N})$ lower bound on communication complexity for one-round two-server information-theoretic RPIR, and a sublinear upper bound. Finally, we show how to use a sublinear amount of database-independent correlated randomness among multiple servers to get near-optimal online communication complexity (the size of one row plus the size of one index description per server).
- How Hard Can It Be to Formalize a Proof? Lessons from Formalizing CryptoBox Three Times in EasyCrypt (September 3, 2025, 11:54 pm)
ePrint Report: How Hard Can It Be to Formalize a Proof? Lessons from Formalizing CryptoBox Three Times in EasyCrypt François Dupressoir, Andreas Hülsing, Cameron Low, Matthias Meijers, Charlotte Mylog, Sabine Oechsner Provable security is a cornerstone of modern cryptography, aiming to provide formal and precise security guarantees. However, for various reasons, security proofs are not always properly verified, possibly leading to unwarranted security claims and, in the worst case, deployment of insecure constructions. To further enhance trust and assurance, machine-checked cryptography makes these proofs more formal and rigorous. Unfortunately, the complexity of writing machine-verifiable proofs remains prohibitively high in many interesting use-cases. In this paper, we investigate the sources of this complexity, specifically examining how the style of security definitions influences the difficulty of constructing machine-verifiable proofs in the context of game-playing security. Concretely, we present a new security proof for the generic construction of a PKAE scheme from a NIKE and AE scheme, written in a code-based, game-playing style à la Bellare and Rogaway, and compare it to the same proof written in the style of state-separating proofs, a methodology for developing modular game-playing security proofs. Additionally, we explore a third “blended” style designed to avoid anticipated difficulties with the formalization. Our findings suggest that the choice of definition style impacts proof complexity—including, we argue, in detailed pen-and-paper proofs—with trade-offs depending on the proof writer’s goals.