1 Introduction

Secure multi-party computation enables a set of parties to mutually run a protocol that computes some function f on their private inputs, while preserving a number of security properties. Two of the most important properties are privacy and correctness. The former implies data confidentiality; namely, nothing is leaked by the protocol execution beyond the computed output. The latter requirement implies that no corrupted party or parties can cause the output to deviate from the specified function. It is by now well known how to securely compute any efficient functionality [3, 4, 25, 47, 51] in various models and under the stringent simulation-based definitions (following the ideal/real paradigm). Security is typically proved with respect to two adversarial models, the semi-honest model (where the adversary follows the instructions of the protocol but tries to learn more than it should from the protocol transcript) and the malicious model (where the adversary follows an arbitrary polynomial-time strategy), and feasibility results are known in the presence of both types of attacks. The initial model considered for secure computation was of a static adversary, where the adversary controls a fixed subset of the parties (who are called corrupted) before the protocol begins. In a stronger corruption model, the adversary is allowed to choose which parties to corrupt throughout the protocol execution and as a function of its view; such an adversary is called adaptive.

These feasibility results in most cases apply to stand-alone security, where a single set of parties runs a single execution of the protocol. Moreover, the security of most cryptographic protocols proved in the stand-alone setting does not remain intact if many instances of the protocol are executed concurrently [39]. The strongest (but also the most realistic) setting for concurrent security is known as Universal Composability (UC) [4]. This setting considers the execution of an unbounded number of concurrent protocols in an arbitrary and adversarially controlled network environment. Unfortunately, stand-alone secure protocols typically fail to remain secure in the UC setting. In fact, without assuming some trusted setup, UC security is impossible to achieve for most tasks [8, 10, 39]. Consequently, UC protocols have been constructed under various trusted setup assumptions in a long series of works; see [1, 7, 12, 13, 16, 36, 43] for a few examples.

In this work, we are interested in understanding the intrinsic complexity of UC secure computation. Identifying the general assumptions required for a particular cryptographic task provides an abstraction of the functionality and the specific hardness that is exploited to obtain a secure realization of the task. The expressive nature of general assumptions allows the use of a large number of concrete assumptions of our choice, even ones that may not have been considered at the time of designing the protocols. Constructions based on general assumptions come in two flavors:

Black-box usage:

A construction is black-box if it refers only to the input/output behavior of the underlying primitives.

Non-black-box usage:

A construction is non-black-box if it uses the code computing the functionality of the underlying primitives.

Typically, non-black-box constructions have been employed to demonstrate feasibility and derive the minimal assumptions required to achieve cryptographic tasks. Specifically, Lin et al. [43] provided a unified framework and minimal conditions under which UC security is feasible in a general setup. Moreover, the work of Damgård et al. [20] focused on identifying the necessary and sufficient assumptions for UC secure computation, in terms of both setup and computational assumptions. The former work identified the weakest assumptions in any setup known thus far, whereas the latter work identified tight upper and lower bounds on the hardness assumptions for the concrete common reference string and key registration models. Nevertheless, since both of these works rely on non-black-box techniques, an important theoretical question is whether or not non-black-box usage of the underlying primitives is necessary. Beyond its theoretical importance, obtaining black-box constructions is related to the efficiency of the protocol, as an undesirable effect of non-black-box constructions is that they are typically inefficient and unlikely to be implemented in practice.

Fortunately, in a line of works [24, 27, 32, 49] the gap between what is achievable via non-black-box and black-box constructions under minimal assumptions has narrowed. Most relevant to our context is the work of Ishai et al. [33], which provided the first black-box constructions of UC protocols in the static and adaptive settings assuming only one-way functions, in a model where all parties have access to an ideal oblivious transfer (OT) functionality. In the adaptive setting, the work of Choi et al. [6] provided a transformation from adaptively secure semi-honest oblivious transfer to one that is secure in the stronger UC setting against malicious adaptive adversaries, assuming that all parties have access to an ideal commitment functionality. These works make progress toward identifying the minimal general computational assumptions necessary in both the static and adaptive UC settings. In particular, it follows that, to answer the motivating question of identifying these minimal assumptions, it suffices to identify the minimal assumptions to realize the ideal oblivious transfer in the static setting as specified in [33] and the ideal commitment in the adaptive setting as specified in [6].

Static setting In the stand-alone (i.e., not UC) static setting, it has been shown in [27, 28, 32] how to construct secure multi-party computation protocols that rely on the underlying primitives in a black-box manner, assuming only the existence of semi-honest oblivious transfer. In the UC setting, Canetti et al. [12] presented the first non-black-box constructions of static UC protocols assuming enhanced trapdoor permutations. In a later work, Choi et al. [6] (cf. Proposition 1) provided black-box constructions that are secure against static adversaries, where all parties have access to an ideal commitment functionality. This construction achieves the stronger security notion of straight-line simulation; however, it falls short of achieving static UC security (see more details in Sect. 3).

UC OT was studied in the influential paper by Peikert et al. [48], who presented a black-box framework for oblivious transfer in the local common reference string (CRS) model, based on dual-mode public-key encryption (PKE) schemes. Such PKE schemes can be concretely instantiated under the decisional Diffie–Hellman (DDH), quadratic residuosity (QR) and learning with errors (LWE) hardness assumptions. In a follow-up work, Choi et al. [11] presented UC OT constructions in the global CRS model assuming DDH, N-residuosity and the Decision Linear assumption (DLIN).

It is worth noting that while the works of Peikert et al. [48] and Choi et al. [11] provide abstractions of their assumptions, the assumptions themselves are not general enough to help understand the minimal assumptions required to achieve static UC security.

Adaptive Setting The only work that considers a single general assumption implying adaptive UC security using non-black-box techniques is the result of Dachman-Soled et al. [16], which shows how to obtain adaptive UC commitments assuming simulatable PKE in the global CRS model. Moreover, the best-known general assumptions required to achieve black-box adaptive UC security are adaptive semi-honest oblivious transfer and UC commitments [6, 18]. The known minimal general assumptions required to construct these primitives are (trapdoor) simulatable PKE for adaptive semi-honest oblivious transfer [5] and mixed commitments for UC commitments [18] in the local CRS model. Finally, we remark that the commitment scheme of Damgård and Groth [15] based on Strong RSA is, in fact, an adaptive UC commitment in the global CRS model.

As such, prior works leave the following important question open:

What are the minimal (general) assumptions required to construct UC protocols, given only black-box access to the underlying primitives?

We note that this question is already well understood in the static setting when relaxing the black-box requirement. Namely, in [20] Damgård, Nielsen and Orlandi showed how to construct UC commitments assuming only semi-honest oblivious transfer in the global CRS model, while additionally assuming a preprocessing phase where the parties participate in a round-robin manner. More recently, Lin et al. [44] improved this result by removing the restricted preprocessing phase. In the same work, the authors showed how to achieve UC security in the global CRS model assuming only the existence of semi-honest oblivious transfer. In particular, this construction shows that static UC security can be achieved without assuming UC commitments when relying on non-black-box techniques.

1.1 Our Results

In this paper, we present a thorough study of black-box UC secure computation in the CRS model for different attack models; details follow. We note that our first and third results hold for the multi-party case, while the second result is for the two-party setting.

1.1.1 Static UC Secure Computation

Our first result is given in the static setting, where we demonstrate the feasibility of UC secure computation based on semi-honest oblivious transfer and extractable commitments. More concretely, we show how to transform any statically secure semi-honest oblivious transfer into one that is secure in the presence of malicious adversaries, given only black-box access to the underlying semi-honest oblivious transfer protocol. Our approach is inspired by the protocols from [28, 42], where we observe that the full power of static UC commitments is not required. Instead, we employ a weaker primitive that only requires straight-line input extractability. This weaker notion of security, referred to as extractable commitments [46], can be realized based on any CPA-secure PKE. More precisely, we prove the following theorem.

Theorem 1.1

(Informal) Assuming the existence of PKE and semi-honest oblivious transfer, any functionality can be realized in the CRS model with static UC security, where the underlying primitives are accessed in a black-box manner.
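To give a flavor of the extractable commitment primitive, the following is a minimal sketch of the folklore construction of an extractable commitment from CPA-secure PKE in the CRS model: the CRS carries a public key, committing amounts to encrypting under it, and a straight-line extractor holding the matching secret key simply decrypts. All names below are ours and purely illustrative; the toy ElGamal instantiation uses insecure parameters and is not the construction of this work.

```python
import random

# Toy ElGamal over Z_P^* with a small fixed generator -- illustrative only, NOT secure.
P = 2**61 - 1  # a Mersenne prime
G = 3

def gen():
    sk = random.randrange(1, P - 1)
    return pow(G, sk, P), sk

def enc(pk, m, r):
    # m is encoded as an element of Z_P^*
    return pow(G, r, P), (m * pow(pk, r, P)) % P

def dec(sk, c):
    a, b = c
    return (b * pow(a, (P - 1) - sk, P)) % P  # b / a^sk

# Extractable commitment in the CRS model (folklore sketch):
# the CRS holds pk; commit = encrypt, open = reveal message and coins,
# and a straight-line extractor holding sk simply decrypts.
def setup_crs():
    pk, sk = gen()
    return pk, sk                   # sk is the extraction trapdoor

def commit(pk, m):
    r = random.randrange(1, P - 1)
    return enc(pk, m, r), (m, r)    # (commitment, opening)

def verify(pk, com, opening):
    m, r = opening
    return com == enc(pk, m, r)

def extract(sk, com):
    return dec(sk, com)
```

Hiding follows from IND-CPA security of the PKE, binding from correctness of decryption, and extraction is straight-line since no rewinding is needed.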

We remark here that this theorem makes significant progress toward reducing the general assumptions required to construct UC protocols. Previously, the only general assumptions based on which we knew how to construct UC protocols were mixed commitments [17] and dual-mode PKE [48], both of which were tailor-made for the particular application. Toward understanding the required minimal assumptions, we recall the work of Damgård and Groth [15], who showed that the existence of UC commitments in the CRS model implies a stand-alone key agreement protocol. Moreover, for black-box constructions, the seminal work of Impagliazzo and Rudich [34] implies that key agreement cannot be based on one-way functions. Thus, there is reasonable evidence to believe that some public-key primitive is required for UC commitments. In that sense, our assumption regarding PKE is close to optimal. Nevertheless, it is unknown whether the plain-model (i.e., without setup) semi-honest oblivious transfer assumption is required.

Our result is shown in two phases. First, we compile the semi-honest oblivious transfer protocol into a new protocol with intermediate security properties in the presence of malicious adversaries. This transformation extends the transformation from [28], which is only proved for bit oblivious transfer, whereas our proof works for string oblivious transfer. Next, we use the transformed oblivious transfer protocol to construct a fully secure (malicious) oblivious transfer. By combining our oblivious transfer protocol with the protocol from [33], we obtain generic static UC secure computation.

An important corollary is deduced from the work of Gertner et al. [23], who provided a black-box construction of PKE based on any two-round semi-honest oblivious transfer protocol. Specifically, combining their result with ours implies the following corollary, which demonstrates that two-round semi-honest oblivious transfer suffices in the CRS model to achieve black-box constructions of UC protocols. Namely,

Corollary 1.2

(Informal) Assuming the existence of two-round semi-honest oblivious transfer, any functionality can be UC realized in the CRS model, where the oblivious transfer is accessed in a black-box manner.

The work of [6] shows how, starting from a semi-honest oblivious transfer, it is possible to obtain a black-box construction of an OT protocol that is secure against stand-alone static adversaries in the \(\mathcal{F}_{{\scriptscriptstyle \mathrm {COM}}}\)-hybrid model. Moreover, \(\mathcal{F}_{{\scriptscriptstyle \mathrm {COM}}}\) can be directly realized in the \(\mathcal{F}_{{\scriptscriptstyle \mathrm {EXTCOM}}}\)-hybrid model using the notion of extractable trapdoor commitments [21, 49]. We do not pursue this approach and instead directly realize OT in the \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\)-hybrid model, because the main goal of this work is to identify the minimal assumptions required to construct UC OT. We remark that although the main result in [6] demonstrates UC security against adaptive corruptions, the same analysis fails to extend to the static setting. More concretely, while their protocol might be secure in the static setting (if we replace the underlying primitives with their static analogues), its security analysis is not sufficient. This is because Choi et al. modularly compose a weaker building block (adaptive semi-honest OT) to construct a UC OT. Furthermore, in the simulation of the final protocol, the simulator invokes the adaptive simulator of the weaker primitive on the fly. Such a simulation cannot be used in the static setting when the building blocks are instantiated with their static analogues. We finally remark that the previous works of [6, 28] require a three-step transformation, whereas ours is a simpler, single-step transformation.

Table 1 Comparison with prior work on UC oblivious transfer in the CRS model against static corruptions

Implications In what follows, we make a sequence of interesting observations that are implied by our result in the static UC setting which are summarized in Table 1.

  • The important result of Canetti et al. [12], which assumes enhanced trapdoor permutations, can be extended assuming only PKE with oblivious ciphertext generation (PKE with the special property that a ciphertext can be obliviously sampled without knowledge of the plaintext, which can in turn be realized using enhanced trapdoor permutations). In that sense, our result, assuming PKE with oblivious ciphertext generation, can be viewed as an improvement of Canetti et al. [12] when relying on this primitive in a black-box manner.

  • The pair of works by Damgård et al. [20] and Lin et al. [44] demonstrate that non-black-box constructions of UC commitments, and more generally static UC secure computation, can be achieved in the CRS model assuming only semi-honest oblivious transfer. In comparison, our result shows that two-round semi-honest oblivious transfer protocols are sufficient for obtaining black-box UC secure computation in the CRS model. We note here that many semi-honest oblivious transfer protocols indeed involve only two rounds of communication, e.g., [22, 29].

  • In [43, 44], Lin, Pass and Venkitasubramaniam provided a unified framework for constructing UC protocols in any “trusted setup” model. Their result is achieved by capturing the minimal requirement that implies UC secure computation in the setup model. More precisely, they introduced the notion of a UC puzzle and showed that any setup model that admits a UC puzzle can be used to securely realize any functionality in the UC setting, while additionally assuming the existence of semi-honest oblivious transfer. Moreover, they showed how to easily construct such puzzles in most models. We remark that our approach can be viewed as providing a framework for constructing black-box UC protocols in other UC models. More precisely, we show that any setup model that admits the extractable commitment functionality can be used to securely realize any functionality, assuming the existence of semi-honest oblivious transfer. In fact, our result easily extends to the chosen key registration authority (KRA) model [1], which assumes the existence of a trusted authority that samples a public key–secret key pair for each party and broadcasts the public key to all parties. We leave it for future work to instantiate our framework in other setup models.

  • The fact that our construction only requires PKE and semi-honest oblivious transfer allows an easy translation of static UC security to various efficient implementations under a wide range of concrete assumptions. Specifically, both PKE and (two-round) semi-honest oblivious transfer can be realized under the RSA, factoring Blum integers, LWE, DDH, N-residuosity, p-subgroup and coding assumptions. This is in contrast to prior results, which could only be based on the latter five assumptions [11, 14, 19, 48].

  • Recently, Maji et al. [46] initiated the study of the cryptographic complexity of secure computation tasks, characterizing the relative complexity of a task in the UC setting. Specifically, they established a zero–one law stating that any task is either trivial (i.e., it can be reduced to any other task) or complete (i.e., one to which any task can be reduced), where a functionality \(\mathcal{F}\) is said to reduce to another functionality \(\mathcal{G}\) if there is a UC protocol for \(\mathcal{F}\) using ideal access to \(\mathcal{G}\). More precisely, they showed that, assuming the existence of semi-honest oblivious transfer, every finite two-party functionality is either trivial or complete. While their main theorem relies on the minimal assumption of semi-honest oblivious transfer, their use of the assumption is non-black-box, and they leave it as an open problem to achieve the same while relying on oblivious transfer in a black-box manner. Our result makes progress toward establishing this.

    In more detail, their high-level approach is to identify complete functionalities using four categories, namely (1) \(\mathcal{F}_{\scriptscriptstyle \mathrm {XOR}}\), which abstracts an XOR-type functionality, (2) \(\mathcal{F}_{\scriptscriptstyle \mathrm {CC}}\), which abstracts a simple cut-and-choose functionality, (3) \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\), the oblivious transfer functionality, and (4) \(\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}\), the commitment functionality. They then show that each category can be used to securely realize any computational task. Among these reductions, those for functionalities \(\mathcal{F}_{\scriptscriptstyle \mathrm {XOR}}\) and \(\mathcal{F}_{\scriptscriptstyle \mathrm {CC}}\) rely on oblivious transfer in a non-black-box way. In this work, we improve the reduction for functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {CC}}\). We obtain this improvement by showing that the extractable commitment functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) and semi-honest oblivious transfer can be used in a black-box way to realize functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\), combining this with a reduction presented in [46] that reduces \(\mathcal{F}_{\scriptscriptstyle \mathrm {CC}}\) to the \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) functionality in a black-box way.

1.1.2 One-Sided UC Secure Computation

In this stronger attack model, where at most one of the parties is adaptively corrupted [30, 38], we prove that one-sided adaptive UC security is implied by PKE with oblivious ciphertext generation, which itself implies semi-honest OT. Here we combine two observations: (1) in our maliciously secure static oblivious transfer from the previous result, the actions of the parties depend on their real inputs only in the last phase of the protocol, and (2) we do not need full-fledged non-committing encryption (NCE), but only one-sided NCE, which can be constructed based on PKE with oblivious ciphertext generation [9, 17]. In particular, NCE allows secure communication in the presence of adaptive attacks, which implies that the communication can be equivocated once the real message is handed to the simulator. Then, by encrypting part of our statically secure protocol using NCE, we obtain a generic protocol for any two-party functionality under the assumption specified above. Namely,

Theorem 1.3

(Informal) Assuming the existence of PKE with oblivious ciphertext generation, any two-party functionality can be realized in the CRS model with one-sided adaptive UC security and black-box access to the PKE.

1.1.3 Adaptive UC Secure Computation

Our last result is in the strongest corruption setting, where any number of parties can be adaptively corrupted. Here we design a new adaptively secure UC commitment scheme under the assumption of PKE with oblivious ciphertext generation, the first construction to achieve the stronger notion of adaptive security based on this hardness assumption. Our construction makes novel use of such a PKE together with Reed–Solomon codes, where the polynomial shares are encrypted using the PKE with oblivious ciphertext generation. Plugging our UC commitment protocol into the transformation of [6], which generates adaptive malicious oblivious transfer given adaptive semi-honest oblivious transfer and UC commitments, yields adaptive UC oblivious transfer with malicious security based on semi-honest adaptive oblivious transfer and PKE with oblivious ciphertext generation, using only black-box access to these underlying primitives. That is,

Theorem 1.4

(Informal) Assuming the existence of PKE with oblivious ciphertext generation and adaptive semi-honest oblivious transfer, any functionality can be realized in the CRS model with adaptive UC security, where the underlying primitives are accessed in a black-box manner.

We further recall the work of Choi et al. [5], which shows that the weakest known general assumption required to construct adaptively secure semi-honest oblivious transfer is trapdoor simulatable PKE. Now, since such an encryption scheme implies PKE with oblivious ciphertext generation, we obtain the following corollary, which unifies the two assumptions required to achieve adaptive UC security.

Corollary 1.5

Assuming the existence of (trapdoor) simulatable PKE, any functionality can be realized in the CRS model with adaptive UC security and black-box access to the PKE.

An additional interesting observation implied by our work is that our UC commitment scheme yields a construction that is secure in the adaptive setting when erasures are allowed, under the weaker assumption of plain PKE. Specifically, instead of obliviously sampling ciphertexts in the commitment phase, the committer encrypts arbitrary plaintexts and then erases the plaintexts and randomness used for these computations. Our proof extends easily to this case as well. Combining our UC commitment scheme with the semi-honest OT with erasures from [40] and the transformation of Choi et al. [6], we obtain the following result.

Theorem 1.6

(Informal) Assuming the existence of PKE and semi-honest oblivious transfer secure against an adaptive adversary assuming erasures, any functionality can be realized in the CRS model with adaptive UC security assuming erasures, where the underlying primitives are accessed in a black-box manner.

Noting that OT secure against adaptive adversaries assuming erasures can be realized under assumptions sufficient for achieving the same with respect to the weaker static adversaries, this theorem shows that achieving UC security against adaptive adversaries in the presence of erasures does not require any additional assumption beyond what is required to secure against static adversaries.

Table 2 Comparison with prior work on UC commitments and UC oblivious transfer in the CRS model against adaptive corruptions

Implications Next, we specify a sequence of interesting observations that are implied by our result in the adaptive UC setting which are summarized in Table 2.

  • Previously, Dachman-Soled et al. [16] showed that adaptively secure UC protocols can be constructed in the CRS model assuming the existence of simulatable PKE. Our result improves on this in terms of complexity assumptions, by showing that PKE with oblivious ciphertext generation is sufficient, and provides new constructions based on concrete assumptions that were not known before. Nevertheless, we should point out that while the work of Dachman-Soled et al. is in the global CRS model (using a non-black-box construction), our result provides a black-box construction in a CRS model where the length of the reference string is linear in the number of parties.

  • Analogously to our result on static UC security, it is possible to extend this result to the chosen key registration authority (KRA) model, where we assume the existence of a trusted party that samples public and secret keys for each party and broadcasts the public keys to all parties.

  • It is important to note that this result provides the first evidence that adaptively secure UC commitments are theoretically easier to construct than stand-alone adaptively secure semi-honest oblivious transfer. Namely, on the one hand, enhanced trapdoor permutations are sufficient to construct PKE with oblivious ciphertext generation, which in turn is sufficient to realize adaptive UC commitments in the CRS model by Theorem 1.4. On the other hand, a result due to Lindell and Zarosim [45] (regarding static versus adaptive oblivious transfer) separates adaptively secure oblivious transfer from enhanced trapdoor permutations under black-box reductions.

  • Regarding concrete assumptions, adaptive UC commitments without erasures were previously constructed based on the N-residuosity and p-subgroup hardness assumptions [18] and Strong RSA [15]. In contrast, our result demonstrates the feasibility of this primitive under the DDH, LWE, factoring Blum integers and RSA assumptions. When considering adaptive corruption with erasures, the work of Blazy et al. [2], extending the work of Lindell [41], shows how to construct highly efficient UC commitments based on the DDH assumption. In comparison, assuming erasures, we are able to construct an adaptive UC commitment scheme based on any CPA-secure PKE.

1.2 Subsequent Work

In subsequent work, Kiyoshima et al. [37] improved our results for the static setting, showing that, assuming PKE and semi-honest OT, UC security is feasible in the global CRS model, where a single CRS is chosen for all sessions.

2 Preliminaries

Basic notations We denote the security parameter by n. We say that a function \(\mu :\mathbb {N}\rightarrow [0,1]\) is negligible if for every positive polynomial \(p(\cdot )\) and all sufficiently large n it holds that \(\mu (n)<\frac{1}{p(n)}\). We use the abbreviation PPT to denote probabilistic polynomial time. We further denote by \(a\leftarrow A\) the random sampling of a from a distribution A and by [n] the set of elements \(\{1,\ldots ,n\}\). We next specify the definition of computational indistinguishability.

Definition 2.1

Let \(X=\{X(a,n)\}_{a\in \{0,1\}^*,n\in \mathbb {N}}\) and \(Y=\{Y(a,n)\}_{a\in \{0,1\}^*,n\in \mathbb {N}}\) be two distribution ensembles. We say that X and Y are computationally indistinguishable, denoted \(X{\mathop {\approx }\limits ^\mathrm{c}}Y\), if for every PPT machine D, every \(a\in \{0,1\}^*\), every positive polynomial \(p(\cdot )\) and all sufficiently large n:

$$\begin{aligned} \big |\mathrm{Pr}\left[ D(X(a,n),1^n)=1\right] -\mathrm{Pr}\left[ D(Y(a,n),1^n)=1\right] \big | <\frac{1}{p(n)}. \end{aligned}$$

2.1 Public-Key Encryption Schemes

We specify the definitions of public-key encryption, IND-CPA and public-key encryption with oblivious ciphertext generation.

Definition 2.2

(PKE) We say that \(\Pi =(\mathsf {Gen}, \mathsf {Enc}, \mathsf {Dec})\) is a public-key encryption scheme if \(\mathsf {Gen}, \mathsf {Enc}, \mathsf {Dec}\) are polynomial-time algorithms specified as follows:

  • \(\mathsf {Gen}\), given a security parameter n (in unary), outputs keys \((\textsc {PK},\textsc {SK})\), where \(\textsc {PK}\) is a public key and \(\textsc {SK}\) is a secret key. We denote this by \((\textsc {PK},\textsc {SK})\leftarrow \mathsf {Gen}(1^n)\).

  • \(\mathsf {Enc}\), given the public key \(\textsc {PK}\) and a plaintext message m, outputs a ciphertext c encrypting m. We denote this by \(c\leftarrow \mathsf {Enc}_{\textsc {PK}}(m)\); when emphasizing the randomness r used for encryption, we denote this by \(c\leftarrow \mathsf {Enc}_{\textsc {PK}}(m;r)\).

  • \(\mathsf {Dec}\), given the public key \(\textsc {PK}\), secret key \(\textsc {SK}\) and a ciphertext c, outputs a plaintext message m s.t. there exists randomness r for which \(c = \mathsf {Enc}_{\textsc {PK}}(m;r)\) (or \(\bot \) if no such message exists). We denote this by \(m \leftarrow \mathsf {Dec}_{\textsc {PK},\textsc {SK}}(c)\).

For a public-key encryption scheme \(\Pi =(\mathsf {Gen}, \mathsf {Enc}, \mathsf {Dec})\) and a non-uniform adversary \(\mathcal{A}=(\mathcal{A}_1,\mathcal{A}_2)\), we consider the following indistinguishability game:

$$\begin{aligned}&(\textsc {PK},\textsc {SK})\leftarrow \mathsf {Gen}(1^n).\\&(m_0,m_1,history)\leftarrow \mathcal{A}_1(\textsc {PK})\text{, } \text{ s.t. } |m_0|=|m_1|.\\&c\leftarrow \mathsf {Enc}_{\textsc {PK}}(m_b)\text{, } \text{ where } b\leftarrow _R\{0,1\}.\\&b'\leftarrow \mathcal{A}_2(c,history).\\&\mathcal{A} \text{ wins } \text{ if } b'=b. \end{aligned}$$

Denote by \(\textsc {Adv}_{\Pi ,\mathcal{A}}(n)\) the probability that \(\mathcal{A}\) wins the IND-CPA game.

Definition 2.3

(IND-CPA) A public-key encryption scheme \(\Pi =(\mathsf {Gen}, \mathsf {Enc}, \mathsf {Dec})\) is IND-CPA secure, if for every non-uniform adversary \(\mathcal{A}=(\mathcal{A}_1,\mathcal{A}_2)\) there exists a negligible function \(\mu (\cdot )\) such that for all sufficiently large n’s, \(\textsc {Adv}_{\Pi ,\mathcal{A}}(n) \le \frac{1}{2} + \mu (n).\)
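The IND-CPA game above can be phrased generically as code. The sketch below is ours and purely illustrative (all names are assumptions, not from the text): it runs the game against a deliberately broken deterministic "scheme", against which an adversary that simply re-encrypts the two challenge messages wins every run, i.e., its advantage is maximal.

```python
import random

def ind_cpa_game(gen, enc, adversary1, adversary2):
    """One run of the IND-CPA game from the text; returns True iff A wins."""
    pk, _sk = gen()
    m0, m1, history = adversary1(pk)
    assert len(m0) == len(m1)            # the game requires |m0| = |m1|
    b = random.randrange(2)
    c = enc(pk, (m0, m1)[b])
    return adversary2(c, history) == b

# A deliberately broken "scheme": deterministic identity encryption.
def broken_gen():
    return None, None

def broken_enc(pk, m):
    return m

# The adversary fixes two messages and distinguishes by re-encrypting.
def adv1(pk):
    return b"0", b"1", pk

def adv2(c, history):
    return 0 if c == broken_enc(history, b"0") else 1

wins = sum(ind_cpa_game(broken_gen, broken_enc, adv1, adv2) for _ in range(200))
```

Here `wins` equals 200: determinism lets the adversary win every run, which is exactly what IND-CPA security rules out (randomized encryption is necessary).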

A public-key encryption scheme with the property of oblivious ciphertext generation comes with two additional algorithms: (1) an oblivious ciphertext generator \(\widetilde{\mathsf {Enc}}\) and (2) a corresponding ciphertext faking algorithm \(\widetilde{\mathsf {Enc}}^{-1}\). Intuitively, the ciphertext faking algorithm is used to explain a legitimately generated ciphertext as an obliviously generated one. Formally,

Definition 2.4

(PKE with oblivious ciphertext generation [17]) A PKE \(\Pi \) with oblivious ciphertext generation is defined by the tuple \((\mathsf {Gen},\mathsf {Enc},\mathsf {Dec},\widetilde{\mathsf {Enc}}, \widetilde{\mathsf {Enc}}^{-1})\) and has the following additional property:

  • Indistinguishability of oblivious and real ciphertexts For any message m in the appropriate domain, consider the experiment \((\textsc {PK},\textsc {SK}) \leftarrow \mathsf {Gen}(1^n)\), \(c_1 \leftarrow \widetilde{\mathsf {Enc}}_{\textsc {PK}}(r_1)\), \(c_2 \leftarrow \mathsf {Enc}_{\textsc {PK}}(m;r_2)\), \(r'_2 \leftarrow \widetilde{\mathsf {Enc}}^{-1}_\textsc {PK}(c_2)\). Then, \((\textsc {PK},r_1,c_1,m) {\mathop {\approx }\limits ^\mathrm{c}}(\textsc {PK},r'_2,c_2,m)\).

Throughout, we only employ encryption schemes with perfect decryption. This merely simplifies the analysis and can be relaxed by using a PKE with a negligible decryption error instead.

2.2 Secret Sharing

A secret sharing scheme allows distribution of a secret among a group of n players, each of whom receives a share (or piece) of the secret during a sharing phase. In its simplest form, the goal of secret sharing is to allow only subsets of players of size at least \(t+1\) to reconstruct the secret. More formally, a \((t+1)\)-out-of-n secret sharing scheme comes with a sharing algorithm that on input a secret s outputs n shares \(s_1,\ldots ,s_n\), and a reconstruction algorithm that takes as input \((s_i)_{i \in S},S\) where \(|S| > t\) and outputs either a secret \(s'\) or \(\bot \). In this work, we will use Shamir's secret sharing scheme [50] with secrets in \(\mathbb {F}= GF(2^n)\). We present the sharing and reconstruction algorithms below:

Sharing algorithm :

For any input \(s \in \mathbb {F}\), pick a random polynomial \(f(\cdot )\) of degree t in the polynomial ring \(\mathbb {F}[x]\) with the condition that \(f(0) = s\), and output \(f(1),\ldots ,f(n)\).

Reconstruction algorithm :

For any input \((s_i')_{i \in S}\) where none of the \(s_i'\) are \(\bot \) and \(|S| > t\), compute a polynomial g(x) such that \(g(i) = s_i'\) for every \(i \in S\). This is possible using Lagrange interpolation where g is given by

$$\begin{aligned} g(x) = \sum _{i \in S} s_i' \prod _{j \in S\setminus \{i\}} \frac{x - j}{i-j}. \end{aligned}$$

Finally, the reconstruction algorithm outputs g(0).
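The two algorithms are a few lines of code. The sketch below works over a prime field \(GF(p)\) rather than the \(GF(2^n)\) used in the text (a simplifying assumption; the structure is identical) and evaluates the Lagrange formula at \(x=0\).

```python
import random

def share(secret, t, n, p):
    """(t+1)-out-of-n Shamir sharing over GF(p), p a prime > n: pick a random
    degree-t polynomial f with f(0) = secret and output (i, f(i)) for i = 1..n."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t)]
    f = lambda x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares, p):
    """Lagrange interpolation at x = 0 from at least t+1 pairs (i, g(i))."""
    secret = 0
    for i, si in shares:
        num, den = 1, 1
        for j, _ in shares:
            if j != i:
                num = num * (0 - j) % p   # numerator of prod (0 - j)/(i - j)
                den = den * (i - j) % p
        secret = (secret + si * num * pow(den, p - 2, p)) % p
    return secret
```

Any \(t+1\) of the n shares reconstruct the secret; any t shares reveal nothing, since every candidate secret is consistent with them.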

Reed–Solomon code For integers t, n and a field \(\mathbb {F}\) satisfying \(0<t\le n< |\mathbb {F}|\), and a set of n distinct elements \(I = \{x_1,\ldots ,x_n\} \subset \mathbb {F}\), the Reed–Solomon code \(\mathcal{W}_{n,t}\) is defined by

$$\begin{aligned} \big \{ q(x_1),\ldots ,q(x_n)\ |\ q(\cdot ) \text{ is } \text{ a } \text{ degree } t \text{ polynomial } \text{ in } \mathbb {F}[x]\big \}. \end{aligned}$$

The Reed–Solomon code has minimum distance \(n-t\) (relative distance \(1-\frac{t}{n}\)), and a corrupted code word with fewer than \(\frac{n-t}{2}\) errors can be corrected using the Berlekamp–Welch algorithm. It follows easily that Shamir's secret sharing over \(\mathbb {F}\) as described above results in a sequence of shares in the Reed–Solomon code \(\mathcal{W}_{n,t}\).
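Berlekamp–Welch decoding can be sketched over a prime field for simplicity: find a monic "error locator" \(E\) of degree \(e\) and a polynomial \(Q\) of degree at most \(t+e\) satisfying \(Q(x_i) = y_i E(x_i)\) for every received point (a linear system), and output \(Q/E\). This is a minimal illustration under those assumptions, not an optimized decoder.

```python
def solve_mod(rows, rhs, p):
    """One solution of rows * x = rhs over GF(p), via Gauss-Jordan
    elimination; free variables are set to 0."""
    m, n = len(rows), len(rows[0])
    a = [row[:] + [rhs[i]] for i, row in enumerate(rows)]
    pivots, r = [], 0
    for c in range(n):
        pr = next((i for i in range(r, m) if a[i][c] % p), None)
        if pr is None:
            continue
        a[r], a[pr] = a[pr], a[r]
        inv = pow(a[r][c], p - 2, p)
        a[r] = [v * inv % p for v in a[r]]
        for i in range(m):
            if i != r and a[i][c] % p:
                f = a[i][c]
                a[i] = [(a[i][j] - f * a[r][j]) % p for j in range(n + 1)]
        pivots.append(c)
        r += 1
    x = [0] * n
    for i, c in enumerate(pivots):
        x[c] = a[i][n]
    return x

def poly_div_monic(num, den, p):
    """Quotient of num / den mod p (little-endian coefficients, den monic)."""
    num, q = num[:], [0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        q[i] = coef = num[i + len(den) - 1] % p
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - coef * d) % p
    return q

def berlekamp_welch(xs, ys, t, p):
    """Recover the degree-<=t polynomial agreeing with the points (xs, ys)
    in all but at most e = (n - t - 1) // 2 positions."""
    n = len(xs)
    e = (n - t - 1) // 2
    rows, rhs = [], []
    for x, y in zip(xs, ys):
        row = [pow(x, k, p) for k in range(t + e + 1)]       # unknowns of Q
        row += [-y * pow(x, j, p) % p for j in range(e)]     # unknowns of E
        rows.append(row)
        rhs.append(y * pow(x, e, p) % p)                     # E is monic
    sol = solve_mod(rows, rhs, p)
    Q, E = sol[:t + e + 1], sol[t + e + 1:] + [1]
    return poly_div_monic(Q, E, p)                           # Q = f*E, so f = Q/E
```

The correctness argument is the classical one: for any solution, \(Q - fE\) has degree at most \(t+e\) yet vanishes on the at least \(t+e+1\) uncorrupted points, so it is identically zero.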

2.3 Oblivious Transfer

1-out-of-2 oblivious transfer (OT) is an important functionality in the context of secure computation that is engaged between a sender \(\mathrm{Sen}\) and a receiver \(\mathrm{Rec}\); see Fig. 1 for the description of functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\). In this paper we are interested in reducing the hardness assumptions for general UC secure computation when using only black-box access to the underlying cryptographic primitives, such as semi-honest OT. We use semi-honest OT as a building block for designing UC protocols in both static and adaptive settings. In the static setting, we refer to the two-round protocol of [22] that is based on PKE with oblivious ciphertext generation (or enhanced trapdoor permutation). In the adaptive setting, we refer to the two-round protocol of [12] that is based on an augmented non-committing encryption scheme.

We briefly recall that any two-round semi-honest OT implies PKE. This is demonstrated in two phases, starting with the claim that semi-honest OT implies a key agreement (KA) protocol. This statement has been proved in [23] in the static setting and holds for any number of rounds as well as in the presence of adaptive adversaries. Next, a well-established fact shows that in the static setting a two-round key agreement implies PKE. (In fact, these primitives are equivalent).

Fig. 1 Oblivious transfer functionality

2.3.1 Receiver Private Oblivious Transfer

Receiver privacy is a weaker notion than malicious security and only requires that the receiver's input remain hidden even from a malicious sender. It is weaker in that it does not require a simulation of the malicious sender that extracts the sender's inputs. In particular, we only require that a malicious sender cannot distinguish the case where the receiver's input is 0 from the case where it is 1. Formally stated,

Definition 2.5

(Receiver private OT) Let \(\pi \) be a two-party protocol that is engaged between a sender \(\mathrm{Sen}\) and a receiver \(\mathrm{Rec}\). We say that \(\pi \) is a receiver private oblivious transfer protocol, if for every PPT adversary \(\mathcal{A}\) that corrupts \(\mathrm{Sen}\), the following ensembles are computationally indistinguishable:

  • \(\{\text{ View }_{\mathcal{A},\pi }[\mathcal{A}(1^n), \mathrm{Rec}(1^n,0)]\}_{n \in \mathbb {N}}\)

  • \(\{\text{ View }_{\mathcal{A},\pi }[\mathcal{A}(1^n), \mathrm{Rec}(1^n,1)]\}_{n \in \mathbb {N}}\),

where \(\text{ View }_{\mathcal{A},\pi }[\mathcal{A}(1^n), \mathrm{Rec}(1^n,b)]\) denotes \(\mathcal{A}\)’s view within \(\pi \) whenever the receiver \(\mathrm{Rec}\) inputs the bit b.

We point out that receiver privacy protects the receiver against a malicious sender; the term should be read as the receiver's privacy holding against a malicious sender.

2.3.2 Defensible Private Oblivious Transfer

The notion of defensible privacy was introduced by Haitner in [27, 28]. A defense in an execution of a two-party protocol \(\pi = (P_1,P_2)\) is an input and random tape provided by the adversary after the execution concludes. A defense for a party controlled by the adversary is said to be good if, had this party participated honestly in the protocol using this input and random tape, it would have sent exactly the same messages that the adversary sent. In essence, the defense serves as a proof of honest behavior. Defensible privacy requires that the adversary learn nothing more than its prescribed output whenever it provides a good defense.

We begin by informally describing the notion of a good defense for a protocol \(\pi \); we refer to [28] for the formal definition. Let \({\mathsf {trans}}=(q_1,a_1,\ldots ,q_\ell ,a_\ell )\) be the transcript of an execution of a protocol \(\pi \) that is engaged between \(P_1\) and \(P_2\), and let \(\mathcal{A}\) denote an adversary that controls \(P_1\), where \(q_i\) is the ith message from \(P_1\) and \(a_i\) is the ith message from \(P_2\) (that is, \(a_i\) is the response to \(q_i\)). Then we say that \((x,r)\) constitutes a good defense of \(\mathcal{A}\) relative to \({\mathsf {trans}}\) if running the honest algorithm for \(P_1\) with input x and random tape r against \(P_2\)'s messages \(a_1,\ldots ,a_\ell \) results exactly in \({\mathsf {trans}}\).
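Verifying a defense is purely mechanical: replay \(P_1\)'s honest next-message function on the claimed input and random tape against \(P_2\)'s recorded replies and compare with the transcript. The `next_message` interface and the toy two-message protocol used in the usage example are assumptions of this sketch, not objects defined in the text.

```python
def is_good_defense(trans, next_message, x, r):
    """trans = [(q1, a1), ..., (ql, al)], where q_i is P1's ith message and
    a_i is P2's reply. Returns True iff honest P1 with input x and random
    tape r would have sent exactly q_1, ..., q_l given the recorded replies."""
    replies = []
    for q, a in trans:
        if next_message(x, r, replies) != q:
            return False   # the transcript deviates from honest P1's messages
        replies.append(a)
    return True
```

For instance, in a toy protocol where P1 first sends \(x \oplus r\) and then echoes P2's reply masked by r, only the genuine pair (x, r) passes the check.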

At a high level, an OT protocol is defensible private with respect to a corrupted sender if no adversary interacting with an honest receiver with input b should be able to learn b, if at the end of the execution the adversary produces any good defense. Similarly, an OT protocol that is defensible private with respect to a corrupted receiver requires that any adversary interacting with an honest sender with input \((s_0,s_1)\) should not be able to learn \(s_{1-b}\), if at the end of the execution the adversary produces a good defense with input b. Below we present a variant of the definition presented in [28]. We stress that while the [28] definition only considers bit OT (i.e., sender’s inputs are bits) we consider string OT.

Definition 2.6

(Defensible private string OT) Let \(\pi \) be a two-party protocol that is engaged between a sender \(\mathrm{Sen}\) and a receiver \(\mathrm{Rec}\). We say that \(\pi \) is a defensible private string oblivious transfer protocol, if for every PPT adversary \(\mathcal{A}\) the following holds,

1.

    \(\{\Gamma (\text{ View }_{\mathcal{A}}[\mathcal{A}(1^n),\mathrm{Rec}(1^n,U)],U)\} {\mathop {\approx }\limits ^\mathrm{c}}\{\Gamma (\text{ View }_{\mathcal{A}}[\mathcal{A}(1^n),\mathrm{Rec}(1^n,U)],U')\}\) where \(\Gamma (v,*)\) is set to \((v,*)\) if following the execution \(\mathcal{A}\) outputs a good defense for \(\pi \), and \(\bot \) otherwise, and U and \(U'\) are independent random variables uniformly distributed over \(\{0,1\}\). This property is referred to as defensible private with respect to a corrupted sender.

2.

\(\{\Gamma (\text{ View }_{\mathcal{A}}[\mathrm{Sen}(1^n,(U^n_0,U^n_1)),\mathcal{A}(1^n)],U^n_{1-b})\} {\mathop {\approx }\limits ^\mathrm{c}}\{\Gamma (\text{ View }_{\mathcal{A}}[\mathrm{Sen}(1^n,(U^n_0,U^n_1)),\mathcal{A}(1^n)],\bar{U}^n)\}\) where \(\Gamma (v,*)\) is set to \((v,*)\) if following the execution \(\mathcal{A}\) outputs a good defense for \(\pi \), and \(\bot \) otherwise, b is \(\mathrm{Rec}\)’s input in this defense and \(U^n_0,U^n_1,\bar{U}^n\) are independent random variables uniformly distributed over \(\{0,1\}^n\). This property is referred to as defensible private with respect to a corrupted receiver.

In our construction from Sect. 3, we will rely on an OT protocol that is receiver private and defensible private with respect to a corrupted receiver. In [28], Haitner et al. showed how to transform any semi-honest bit OT into one that is defensible private with respect to a corrupted receiver and maliciously secure with respect to a corrupted sender. More formally, the following lemma is implicit in the work of [28].

Lemma 2.1

(Implicit in Theorem 4.1 and Corollary 5.3 [28]) Assume the existence of a semi-honest oblivious transfer protocol \(\pi \). Then there exists an oblivious transfer protocol \(\hat{\pi }\) that is defensible private with respect to the receiver and receiver private, and that relies on the underlying primitive in a black-box manner.

Now, since receiver privacy is implied by malicious security with respect to a corrupted sender, this transformation yields a bit OT protocol with the required security guarantees. Nevertheless, our protocol crucially relies on the fact that the underlying OT is a string OT protocol. We therefore show in “Appendix A” how to transform any bit OT into a string OT protocol while preserving both defensible privacy with respect to a maliciously corrupted receiver and receiver privacy.

2.4 Commitment Schemes

Commitment schemes are used to enable a party, known as the sender, to commit itself to a value while keeping it secret from the receiver (this property is called hiding). Furthermore, in a later stage when the commitment is opened, it is guaranteed that the “opening” can yield only a single value determined in the committing phase (this property is called binding). In this work, we consider commitment schemes that are statistically binding; namely, while the hiding property only holds against computationally bounded (non-uniform) adversaries, the binding property is required to hold against unbounded adversaries. More precisely, a pair of PPT machines \({\mathsf {Com}}\) is said to be a commitment scheme if the following two properties hold.

  • Computational hiding For every (expected) PPT machine \(R^*\), it holds that the following ensembles are computationally indistinguishable over \(n \in N\).

    • \(\{\textsf {view}_{{\mathsf {Com}}}^{R^*}(v_1,z)\}_{n \in N,v_1, v_2 \in \{0,1\}^n, z\in \{0,1\}^*}\)

    • \(\{\textsf {view}_{{\mathsf {Com}}}^{R^*}(v_2,z)\}_{n\in N,v_1, v_2 \in \{0,1\}^n, z\in \{0,1\}^*}\),

    where \(\textsf {view}_{{\mathsf {Com}}}^{R^*}(v,z)\) denotes the random variable describing the output of \(R^*\) after receiving a commitment to v using \({\mathsf {Com}}\).

  • Statistical binding Informally, the statistical binding property asserts that, with overwhelming probability over the coin tosses of the receiver R, the transcript of the interaction fully determines the value committed to by the sender.

We say that a commitment is valid if there exists a unique committed value that a (potentially malicious) committer can open to successfully. We refer the reader to [26] for more details.

2.5 UC Commitment Schemes

The notion of UC commitments was introduced by Canetti and Fischlin in [8]. The formal description of functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}\) is depicted in Fig. 2.

Fig. 2 String commitment functionality

2.6 Extractable Commitments

Our result in the static setting requires the notion of (static) extractable UC commitments, which is a weaker security property than UC commitments in the sense that it does not require equivocality. Namely, the simulator is not required to commit to one message and later convince the receiver that it committed to a different value. Defining this notion is a real challenge, since it is hard to capture extractability in the ideal setting. In what follows, we recall the definition of the ideal functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) from [46]. To the best of our knowledge, this is the only definition that captures straight-line extractability, statistical binding and computational (stand-alone) hiding. Toward introducing this definition, Maji et al. first introduce some notions. More concretely,

Definition 2.7

A protocol is a syntactic commitment protocol if:

  • It is a two-phase protocol between a sender and a receiver (using only plain communication channels).

  • At the end of the first phase (commitment phase), the sender and the receiver output a transcript \({\mathsf {trans}}\). Furthermore, the sender receives an output (which will be used for opening the commitment).

  • In the decommitment phase the sender sends a message \(\gamma \) to the receiver, who extracts an output value \(\mathsf{opening}({\mathsf {trans}},\gamma )\in \{0,1\}^n\cup \{\bot \}\).

Definition 2.8

Two syntactic commitment protocols \((\omega _L,\omega _R)\) form a pair of complementary statistically binding commitment protocols if the following hold:

  • \(\omega _R\) is a statistically binding commitment scheme (with stand-alone security).

  • In \(\omega _L\), at the end of the commitment phase the receiver outputs a string \(z\in \{0,1\}^n\). If the receiver is honest, it is only with negligible probability that there exists \(\gamma \) such that \(\mathsf{opening}({\mathsf {trans}},\gamma )\ne \bot \) and \(\mathsf{opening}({\mathsf {trans}},\gamma )\ne z\).

As noted in [46], \(\omega _L\) by itself is not an interesting cryptographic goal, as the sender can simply send the committed string in the clear during the commitment phase. Nevertheless, in defining \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) below, there exists a single protocol that satisfies both security guarantees. We are now ready to introduce the notion of extractable commitments in Fig. 3, which is parameterized by \((\omega _L,\omega _R)\). We additionally include a function \({\mathsf {pp}}\) that will be used as an initialization phase to set up the public parameters for \(\omega _L\) and \(\omega _R\).

Fig. 3 Extractable commitment functionality

In Sect. 2.6.1 we show how to realize \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) based on IND-CPA secure PKE.

2.6.1 Extractable Commitments from PKE in the CRS Model

We briefly discuss how to realize the \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) functionality in the CRS model. At a high level, we obtain an extractable commitment using an IND-CPA secure PKE. Loosely speaking, the common reference string contains a public key that is sampled using the key generation algorithm, and the trapdoor for the CRS is the corresponding secret key. In the real world, no adversary knows the secret key, and hence it does not know the corresponding CRS trapdoor. In order to implement extractable commitments, our protocol requires the sender to simply encrypt its message m under the public key placed in the CRS. Decommitment is carried out by asking the sender to provide the randomness used to encrypt m.
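A minimal sketch of this construction, with the PKE instantiated by textbook ElGamal (the modulus, generator, and parameter sizes below are illustrative assumptions, not secure choices): committing encrypts m under the CRS key, opening reveals the encryption randomness, and the simulator extracts straight-line by decrypting with the trapdoor.

```python
import random

P = 2 ** 127 - 1   # Mersenne prime used as a toy modulus; not a secure choice
G = 3              # assumed generator of a large subgroup (illustrative)

def crs_gen():
    """CRS = an ElGamal public key; the CRS trapdoor is the secret key."""
    sk = random.randrange(2, P - 1)
    return pow(G, sk, P), sk

def commit(pk, m):
    """Commit to 1 <= m < P by encrypting it under the key in the CRS."""
    r = random.randrange(2, P - 1)
    com = (pow(G, r, P), m * pow(pk, r, P) % P)
    return com, r                    # com is sent; (m, r) is the decommitment

def verify_open(pk, com, m, r):
    """Receiver re-encrypts with the revealed randomness and compares."""
    return com == (pow(G, r, P), m * pow(pk, r, P) % P)

def extract(sk, com):
    """Simulator's straight-line extraction: decrypt with the CRS trapdoor."""
    c1, c2 = com
    return c2 * pow(c1, P - 1 - sk, P) % P   # c2 / c1^sk = m
```

Binding follows from perfect decryption (each ciphertext opens to a single plaintext), and hiding from the IND-CPA security of the scheme.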

3 Static UC Secure Computation

In this section, we prove the feasibility of UC secure computation based on semi-honest OT and extractable commitments, where the latter can be constructed based on two-round semi-honest OT (see Sects. 2.3 and 2.6 for more details). More concretely, we show how to transform any statically semi-honest secure OT into one that is secure in the presence of malicious adversaries, using only black-box access to the underlying semi-honest OT protocol. Our protocol is a variant of the protocol by Lin and Pass from [42] (which in turn is a variant of the protocol of [28]). In particular, in [42], the authors rely on a strong variant of a commitment scheme known as a CCA secure commitment in order to achieve extraction. We observe that it is not required to use the full power of such commitments, or for that matter UC commitments. Specifically, using a weaker primitive that only implies straight-line input extractability enables us to rely solely on semi-honest OT. An important weakening in our commitment scheme compared to the CCA secure commitments from [42] is that we allow the adversary to make invalid commitments. Our construction obtains a statically UC secure protocol for any well-formed functionality (see definition in [12]). Namely,

Theorem 3.1

Assume the existence of static semi-honest oblivious transfer. Then for any multi-party well-formed functionality \(\mathcal{F}\), there exists a protocol that UC realizes \(\mathcal{F}\) in the presence of static, malicious adversaries in the \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\)-hybrid model using black-box access to the oblivious transfer protocol.

The proof of Theorem 3.1 follows from combining our UC OT protocol with the [33] protocol. It seems possible to generalize our theorem to multi-session functionalities. Analogously to [8], this will allow us to extend our corollaries to the global CRS model by additionally assuming a CCA secure encryption scheme; we leave this as future work.

3.1 Static UC Oblivious Transfer

In the following, we discuss a secure implementation of the oblivious transfer functionality (see Fig. 1) with static, malicious security in the \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\)-hybrid model (where \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) is stated formally in Fig. 3). Our goal in this section is to show that the security of malicious UC OT can be based on UC semi-honest OT, denoted by \(\pi ^{\scriptscriptstyle \mathrm {SH}}_{\scriptscriptstyle \mathrm {OT}}\), and extractable commitments. Our result is shown in two phases. First, we compile the semi-honest OT protocol \(\pi _{\scriptscriptstyle \mathrm {OT}}^{\scriptscriptstyle \mathrm {SH}}\) into a new protocol with the security properties that are specified in Sect. 2.3.2, extending the [28] transformation to string OT; denote the compiled OT protocol by \(\widehat{\pi }_{\scriptscriptstyle \mathrm {OT}}\). This transformation is specified in “Appendix A.” In what follows, we use \(\widehat{\pi }_{\scriptscriptstyle \mathrm {OT}}\) in order to construct a new protocol \(\pi _{\scriptscriptstyle \mathrm {OT}}^{\scriptscriptstyle \mathrm {ML}}\) that is secure in the presence of malicious adversaries.

Our protocol is a variant of the first step of the compilation in [6], which in turn is based on the work of [28]. At a high level, the compilation in [6, 28] shows how to amplify the security of an oblivious transfer protocol against receiver corruption from semi-honest to malicious. In comparison, our protocol amplifies the security of an OT protocol that is defensible private against the sender and the receiver to full security.

Loosely speaking, the parties first run a coin-tossing protocol in order to generate the input and randomness for both the receiver and the sender. Using cut-and-choose, which requires repeating this process multiple times, we are able to extract these values in the simulation. The parties then run a sequence of random oblivious transfers using the values generated in the coin-tossing phase. Finally, the sender applies a combiner on the remaining random OT inputs (namely, for the positions that were not opened during the cut-and-choose opening phase) in order to transfer its real inputs. Details follow,

Protocol 1

(Protocol \(\pi _{\scriptscriptstyle \mathrm {OT}}^{\scriptscriptstyle \mathrm {ML}}\) with static security)

  • Input: The sender \(\mathrm{Sen}\) has input \((v_0,v_1)\) where \(v_0,v_1\in \{0,1\}^n\) and the receiver \(\mathrm{Rec}\) has input \(u\in \{0,1\}\).

  • The protocol:

1.

    Coin-tossing:

    • Receiver’s random tape generation: The parties use a coin-tossing protocol in order to generate the inputs and random tapes for the receiver.

• – The receiver commits to 20n strings of appropriate length, denoted by \(a^1_\mathrm{Rec},\ldots ,a^{20n}_\mathrm{Rec}\), by sending \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) the message \((\mathsf{commit}, sid, \widetilde{ssid_i}, a^i_\mathrm{Rec})\) for all \(i\in [20n]\).

      • – The sender responds with 20n random strings of appropriate length \(b^1_\mathrm{Rec},\ldots , b^{20n}_\mathrm{Rec}\).

      • – The receiver computes \(r^i_\mathrm{Rec}= a^i_\mathrm{Rec}\oplus b^i_\mathrm{Rec}\) and then interprets \(r^i_\mathrm{Rec}= c_i || \tau ^i_\mathrm{Rec}\) where \(c_i\) determines the receiver’s input for the ith OT protocol, whereas \(\tau ^i_\mathrm{Rec}\) determines the receiver’s random tape used for this execution.

    • Sender’s random tape generation: The parties use a coin-tossing protocol in order to generate the inputs and random tapes for the sender.

• – The sender commits to 20n strings of appropriate length, denoted by \(a^1_\mathrm{Sen},\ldots ,a^{20n}_\mathrm{Sen}\), by sending \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) the message \((\mathsf{commit}, sid, \widetilde{ssid'_i}, a^i_\mathrm{Sen})\) for all \(i\in [20n]\).

      • – The receiver responds with 20n random strings of appropriate length \(b^1_\mathrm{Sen},\ldots , b^{20n}_\mathrm{Sen}\).

      • – The sender computes \(r^i_\mathrm{Sen}= a^i_\mathrm{Sen}\oplus b^i_\mathrm{Sen}\) and then interprets \(r^i_\mathrm{Sen}= s^0_i || s^1_i || \tau ^i_\mathrm{Sen}\) where \((s_i^0,s_i^1)\) determine the sender’s input for the ith OT protocol, whereas \(\tau ^i_\mathrm{Sen}\) determines the sender’s random tape used for this execution.

2.

    Oblivious transfer:

• The parties participate in 20n executions of the OT protocol \(\widehat{\pi }_{\scriptscriptstyle \mathrm {OT}}\) with the corresponding inputs and random tapes obtained in Stage 1. Let the output of the receiver in the ith execution be \(\tilde{s_i}\).

3.

    Cut-and-choose:

• \(\mathrm{Sen}\) chooses a random subset \(q_\mathrm{Sen}= (q^1_\mathrm{Sen},\ldots ,q^n_\mathrm{Sen}) \in \{1,\ldots ,20\}^n\) and sends it to \(\mathrm{Rec}\). The string \(q_\mathrm{Sen}\) is used to define a set of indices \(\Gamma _\mathrm{Sen}\subset \{1,\ldots ,20n\}\) of size n by grouping the indices into blocks of 20 and choosing the \(q^i_\mathrm{Sen}\)th index in the ith block. More formally, \(\Gamma _\mathrm{Sen}= \{20i-q^i_\mathrm{Sen}\}_{i\in [n]}\). The receiver then opens the commitments from Stage 1 that correspond to the indices within \(\Gamma _\mathrm{Sen}\); namely, the receiver decommits \(a^i_\mathrm{Rec}\) for all \(i \in \Gamma _\mathrm{Sen}\). \(\mathrm{Sen}\) checks that the decommitted values are consistent with the inputs and randomness used for the OTs in Stage 2 by the receiver, and aborts in case of a mismatch.

    • \(\mathrm{Rec}\) chooses a random subset \(q_\mathrm{Rec}= (q^1_\mathrm{Rec},\ldots ,q^n_\mathrm{Rec}) \in \{1,\ldots ,20\}^n\) and sends it to \(\mathrm{Sen}\). The string \(q_\mathrm{Rec}\) is used to define a set of indices \(\Gamma _\mathrm{Rec}\subset \{1,\ldots ,20n\}\) of size n in the following way: \(\Gamma _\mathrm{Rec}= \{20i-q^i_\mathrm{Rec}\}_{i\in [n]}\). The sender then opens the commitments from Stage 1 that correspond to the indices within \(\Gamma _\mathrm{Rec}\); namely, the sender decommits \(a^i_\mathrm{Sen}\) for all \(i \in \Gamma _\mathrm{Rec}\). \(\mathrm{Rec}\) checks that the decommitted values are consistent with the inputs and randomness used for the OTs in Stage 2 by the sender, and aborts in case of a mismatch.

    • \(\mathrm{Rec}\) commits to another subset \(\Gamma \subset [20n]\) denoted by \((\Gamma ^1,\ldots ,\Gamma ^n)\), by sending \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) the message \((\mathsf{commit}, sid, ssid'_i, \Gamma ^i)\) for all \(i\in [n]\). (The sender will reveal its inputs and randomness that are used in Stage 2 that correspond to the indices in \(\Gamma \) later in Stage 5.)

4.

    Combiner:

    • Let \(\Delta = [20n] - \Gamma _\mathrm{Rec}- \Gamma _\mathrm{Sen}\). Then for every \(i \in \Delta \), the receiver computes \(\alpha _i = u \oplus c_i\) and sends it to the sender.

    • The sender computes a 10n-out-of-18n secret sharing of \(v_0\), denote the shares by \(\{\rho _i^0\}_{i \in \Delta }\). Analogously, it computes a 10n-out-of-18n secret sharing of \(v_1\), denote the shares by \(\{\rho _i^1\}_{i \in \Delta }\). The sender computes \(\beta _i^b = \rho _i^b \oplus s_i^{b\oplus \alpha _i}\) for all \(b \in \{0,1\}\) and \(i \in \Delta \), and sends the outcome to the receiver.

    • The receiver computes \(\tilde{\rho _i} = \beta _i^u\oplus \tilde{s_i}\) for all \(i\in \Delta \). Denote by \(\rho \) these concatenated bits.

5.

    Final cut-and-choose:

    • The receiver decommits \(\Gamma \) and the sender sends the inputs and randomness it used in Stage 2 for the coordinates that correspond to \(\Delta \cap \Gamma \). (Note that the sender needs only to reveal the indices that were not decommitted in Stage 3.) \(\mathrm{Rec}\) checks that the sender’s values are consistent with the inputs and randomness used for the OTs in Stage 2 and the combiner computation in Stage 4 made by the sender, and aborts in case of a mismatch.

• The receiver checks whether \((\tilde{\rho }_i)_{i\in \Delta }\) agrees with some code word \(w \in \mathcal{W}_{18n,10n}\) on 17n locations (where the code \(\mathcal{W}_{18n,10n}\) is induced by the secret sharing construction that we use in Stage 4; see Definition 2.2 for more details). Recall that the minimum distance of the code \(\mathcal{W}_{18n,10n}\) is \(18n-10n = 8n\), which implies that there is at most one such code word w. Furthermore, since we can correct up to \(\frac{18n-10n}{2} = 4n\) errors, any word that agrees with a code word on at least 17n locations can be efficiently decoded using the Berlekamp–Welch algorithm. The receiver outputs w as its output in the OT protocol. If no such w exists, the receiver returns a default value.
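The XOR bookkeeping of the combiner stage is easy to check in isolation: since \(u \oplus \alpha _i = c_i\), the receiver's OT output \(\tilde{s}_i = s_i^{c_i}\) unmasks exactly the shares of \(v_u\) and nothing else. A sketch, with shares and OT inputs modeled as random integers (index sets and the secret sharing itself elided):

```python
import secrets

def combiner(u, c, s_tilde, s_pairs, rho_pairs):
    """One pass over the unopened OT indices. u: receiver's input bit;
    c[i]: random choice bit from the coin-tossing; s_tilde[i] = s_pairs[i][c[i]]:
    the receiver's OT output; rho_pairs[i]: (share of v0, share of v1)."""
    alphas = [u ^ ci for ci in c]                              # receiver -> sender
    betas = [(rho[0] ^ s[a], rho[1] ^ s[1 ^ a])                # sender -> receiver
             for rho, s, a in zip(rho_pairs, s_pairs, alphas)]
    return [beta[u] ^ st for beta, st in zip(betas, s_tilde)]  # shares of v_u
```

The receiver recovers \(\rho _i^u = \beta _i^u \oplus \tilde{s}_i\) for every index, while the shares of \(v_{1-u}\) stay masked by the OT inputs it never learned.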

We next prove the following theorem.

Theorem 3.2

Assume that the compiled \(\widehat{\pi }_{\scriptscriptstyle \mathrm {OT}}\) is defensible private (cf. Definition 2.6). Then Protocol 1 UC realizes \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\) in the presence of static malicious adversaries in the \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\)-hybrid model using black-box access to the oblivious transfer protocol.

We recall Lemma 2.1 and “Appendix A,” which demonstrate the transformation from semi-honest OT to defensible private string OT. Specifically, our protocol relies on the existence of semi-honest OT and extractable commitments, where the latter can be constructed based on any two-round semi-honest OT, e.g., [22], which implies PKE (see Sects. 2.3 and 2.6 for more details). Therefore, the following corollary is immediate from Theorem 3.2.

Corollary 3.3

Assume the existence of two-round static semi-honest oblivious transfer. Then there exists a protocol that securely realizes \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\) in the presence of static malicious adversaries in the CRS model using black-box access to the oblivious transfer protocol.

A high-level proof We first provide an overview of the security proof; the complete proof is found in Sect. 3.2. Loosely speaking, in case the receiver is corrupted, the simulator plays the role of the honest sender in Stages 1–3 and extracts the receiver's input u. Specifically, the simulator extracts all the committed values of the receiver within Stage 1 (relying on the fact that the commitment scheme is extractable) and then uses these values in order to obtain the inputs for the OT executions in Stage 2. Upon completing Stage 2, the simulator records the coordinates in which the receiver deviates from the prescribed input and random tape chosen in the coin-tossing phase. Denoting this set of coordinates by \(\Phi \), we recall that a malicious receiver may obtain both of the sender's inputs in the OT executions that correspond to the coordinates within \(\Phi \) and \(\Gamma \). On the other hand, it obtains only one of the two inputs in the remaining OT executions, which correspond to the coordinates within \(\Delta - \Phi - \Gamma \). Consequently, the simulator checks how many shares of \(v_0\) and \(v_1\) are obtained by the receiver and completes Stage 4 accordingly. In more detail,

  • If the receiver obtains more than 10n shares of both inputs, then the simulator halts and outputs \({\mathsf {fail}}\) (we prove in Sect. 3.2 that this event only occurs with negligible probability).

• If the receiver obtains fewer than 10n shares of each of the two inputs, then the simulator picks two random values for \(v_0\) and \(v_1\) of the appropriate length and completes the interaction, playing the role of the honest sender on these values. Note that in this case the simulator does not need to call the ideal functionality.

  • Finally, if the receiver obtains more than 10n shares for only one input \(u\in \{0,1\}\), then the simulator sends u to the ideal functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\) and obtains \(v_u\). The simulator then sets \(v_{1-u}\) as a random string of the appropriate length and completes the interaction by playing the role of the honest sender on these values.

Recall that the only difference between the simulation and the real execution is in the way the messages in Stage 4 are generated. Specifically, in the simulation a value u is extracted from the malicious receiver and then fed to the \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\) functionality. The simulation is then completed based on the output returned from the functionality. Intuitively, the cut-and-choose mechanism ensures that the receiver cannot deviate from the honest strategy in Stage 2 in more than n OT sessions without getting caught with overwhelming probability. Moreover, the defensible privacy of the OT protocol implies that the receiver can learn at most one of the two inputs of the sender relative to the OT executions in Stage 2 for which the receiver proceeded honestly.
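The quantitative step of this argument is simple to verify. In each block of 20 sessions the opener picks one uniformly random index, so a receiver that deviates in \(d_i\) sessions of block i escapes detection with probability \(\prod _i (1 - d_i/20)\); deviating in n sessions, even optimally spread one per block, survives with probability \((19/20)^n\), which is negligible in n. A quick sanity check of this calculation:

```python
def escape_prob(deviations_per_block):
    """Probability that the cut-and-choose opens no deviated session, when one
    uniformly random index is opened in each block of 20 sessions."""
    p = 1.0
    for d in deviations_per_block:
        assert 0 <= d <= 20
        p *= (20 - d) / 20   # survive block i with probability 1 - d_i/20
    return p
```

For 40 blocks with one deviation each, the survival probability is \((19/20)^{40} \approx 0.13\) and drops exponentially as n grows; deviating in every session of a block guarantees detection.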

In case the sender is corrupted, the simulator's strategy is to play the role of the honest receiver with a fixed input 0 until Stage 5, where the simulator extracts the sender's inputs. More specifically, the simulator first extracts the sender's inputs for the OT executions from the commitments in Stage 1 (relying on the fact that the commitment scheme is extractable). Next, the simulator extracts the shares \(\{\rho ^0_i\}_{i\in \Delta }\) and \(\{\rho ^1_i\}_{i\in \Delta }\) that correspond to inputs \(v_0\) and \(v_1\). To obtain the actual values, the simulator checks whether these shares agree with some code word on 16n locations. That is,

• Let \(w_0\) and \(w_1\) denote the corresponding code words. (If there is no code word that agrees with the extracted shares on 16n locations, then the simulator uses a default code word instead.) Next, the simulator checks \(w_0\) and \(w_1\) against the final cut-and-choose. If any of the shares from \(w_b\) are inconsistent with the shares opened by the sender in the final cut-and-choose, then \(v_b\) is set to a default value; otherwise, \(v_b\) is the value corresponding to the shared secret.

Finally, the simulator sends \((v_0,v_1)\) to the ideal functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\). Security in this case is reduced to the privacy of the OT receiver. In addition, the difference between the simulator’s strategy and the honest receiver’s strategy is that the simulator extracts both of the sender’s inputs in all \(i \in \Delta -\Phi \) and then finds code words that are 16n-close to the extracted values, whereas the honest receiver finds a code word that is 17n-close based on the inputs it received in Stages 2 and 5, and returns it. We thus prove that the value extracted by the simulator is identical to the reconstructed output of the honest receiver, relying on the properties of the secret sharing scheme.

3.2 Proof of Theorem 3.2

Let \(\mathcal{A}\) be a malicious probabilistic polynomial-time real adversary running Protocol 1 in the \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\)-hybrid model. We construct an ideal-model adversary \({\mathcal{S}}\) with access to \(\mathcal{F}_{{\scriptscriptstyle \mathrm {OT}}}\) that simulates a real execution of protocol \(\pi _{\scriptscriptstyle \mathrm {OT}}^{\scriptscriptstyle \mathrm {ML}}\) with \(\mathcal{A}\), such that no environment \(\mathcal{Z}\) can distinguish the ideal process with \({\mathcal{S}}\) and \(\mathcal{F}_{{\scriptscriptstyle \mathrm {OT}}}\) from a real execution of \(\pi _{\scriptscriptstyle \mathrm {OT}}^{\scriptscriptstyle \mathrm {ML}}\) with \(\mathcal{A}\). \({\mathcal{S}}\) starts by invoking a copy of \(\mathcal{A}\) and running a simulated interaction of \(\mathcal{A}\) with the environment \(\mathcal{Z}\), emulating the honest party. We describe the actions of \({\mathcal{S}}\) separately for every corruption case.

Simulating the communication with \(\mathcal{Z}\) Every message that \({\mathcal{S}}\) receives from \(\mathcal{Z}\) is internally fed to \(\mathcal{A}\), and every output written by \(\mathcal{A}\) is relayed back to \(\mathcal{Z}\).

Simulating the corrupted receiver In this case, \({\mathcal{S}}\) proceeds as follows:

  1.

    \({\mathcal{S}}\) emulates functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) in Stage 1 and invokes 20n times the commitment scheme \(\omega _L\) with \(\mathcal{A}\) (that plays the role of the committer), obtaining \(((\widetilde{{\mathsf {trans}}_1},a^1_\mathrm{Rec}),\ldots ,(\widetilde{{\mathsf {trans}}_{20n}},a^{20n}_\mathrm{Rec}))\). It internally records \(a^1_\mathrm{Rec},\ldots ,a^{20n}_\mathrm{Rec}\) and further picks 20n random strings \(b^1_\mathrm{Rec},\ldots ,b^{20n}_\mathrm{Rec}\), forwarding them to the adversary. The simulator also computes \(r^i_\mathrm{Rec}= a^i_\mathrm{Rec}\oplus b^i_\mathrm{Rec}\) and then views \(r^i_\mathrm{Rec}= c_i || \tau ^i_\mathrm{Rec}\) where \(c_i\) is the input an honest receiver must use in the ith OT protocol execution in Stage 2, together with randomness \(\tau ^i_\mathrm{Rec}\).

    Next, the simulator picks 20n random strings \(a^1_\mathrm{Sen},\ldots ,a^{20n}_\mathrm{Sen}\) and emulates the ideal functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) by invoking 20n times the commitment phase of \(\omega _R\) with inputs \(a^1_\mathrm{Sen},\ldots ,a^{20n}_\mathrm{Sen}\), against \(\mathcal{A}\) that plays the role of receiver for the commitment scheme. At the end of this phase, \({\mathcal{S}}\) obtains the output \(((\widetilde{{\mathsf {trans}}'_1},\gamma _1),\ldots ,(\widetilde{{\mathsf {trans}}'_{20n}},\gamma _{20n}))\) and receives from the adversary 20n random strings \(b^1_\mathrm{Sen},\ldots , b^{20n}_\mathrm{Sen}\).

  2.

    In Stage 2, the simulator participates with the adversary in 20n executions of the OT protocol \(\widehat{\pi }_{\scriptscriptstyle \mathrm {OT}}\), playing the role of the honest sender. Since the simulator knows the input and randomness that the honest receiver must use in each of the OT executions, it can identify the coordinates in which the receiver deviates, in which case the receiver learns both inputs of the sender. We denote this set of coordinates by \(\Phi \).

  3.

    In Stage 3, the simulator picks n random numbers \((q^1_\mathrm{Sen},\ldots ,q^n_\mathrm{Sen})\) from \(\{1,\ldots ,20\}^n\) and sends them to the receiver. Upon receiving the decommitments from the receiver, the simulator verifies the decommitments as the honest sender would with respect to \(\Gamma _\mathrm{Sen}\) and halts in case of a mismatch, outputting the simulated transcript thus far. Next, it receives \((q^1_\mathrm{Rec},\ldots ,q^n_\mathrm{Rec})\) from the receiver and decommits the subset of values that corresponds to the coordinates in \(\Gamma _\mathrm{Rec}\) as determined by \((q^1_\mathrm{Rec},\ldots ,q^n_\mathrm{Rec})\), playing the role of the sender. Finally, it emulates functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) and invokes the commitment scheme \(\omega _L\) with \(\mathcal{A}\) (that plays the role of the committer) n times, obtaining \((({\mathsf {trans}}'_1,\Gamma _1),\ldots ,({\mathsf {trans}}'_{20n},\Gamma _{20n}))\). Let \(\Delta = [20n] - \Gamma _\mathrm{Rec}- \Gamma _\mathrm{Sen}\).

  4.

    In Stage 4, the simulator proceeds as follows. Observe first that \(\Phi \) and \(\Gamma _\mathrm{Sen}\) are disjoint, since otherwise the simulator would have halted in the previous stage. We consider two cases here:

    (a)

      \(|\Phi | \ge n\): In this case, the simulator halts and outputs \({\mathsf {fail}}\).

    (b)

      \(|\Phi | <n\): This implies that \(|\Delta - \Phi -\Gamma | > 16n\), where by definition the malicious receiver proceeds according to the honest OT receiver’s strategy with respect to every coordinate in \(\Delta - \Phi - \Gamma \). Note that in this case the adversary learns at most \(|\Delta - \Phi - \Gamma | + 2|\Phi | + 2|\Gamma | < 20n\) shares of the sender’s two inputs combined, and the simulator knows precisely which share is learned for every coordinate in the set \(\Delta - \Phi - \Gamma \). We consider two subcases:

      i.

        There exists a bit \(u\in \{0,1\}\) for which the adversary learns at least 10n shares of \(v_u\). Recall that the adversary might learn both shares for the OT executions that correspond to the coordinates within the set \(\Phi \cup \Gamma \) and exactly one share for every OT execution that corresponds to the coordinates within \(\Delta -\Phi -\Gamma \). Since the simulator knows which share is learned for every coordinate within \(\Delta -\Phi -\Gamma \), it can compute an upper bound on the number of \(v_u\) shares that are obtained by the receiver. In this case, the simulator forwards u to the ideal functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\) and receives back \(v_u\). The simulator then sets \(v_{1-u}\) to be a random string of the appropriate length and completes the interaction by playing the role of the honest sender on these inputs.

      ii.

        There does not exist a bit \(u\in \{0,1\}\) for which the adversary learns at least 10n shares of \(v_u\). In this case, the simulator picks two random values for \(v_0\) and \(v_1\) of the appropriate length and completes the interaction, playing the role of the honest sender on these values. Note that in this case the simulator does not call the ideal functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\).

  5.

    The simulator completes the simulation in Stage 5 similarly to Stage 3.
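The coin-tossing step in Stage 1 can be sketched as follows. This is a purely illustrative fragment with made-up lengths and a hypothetical encoding, not part of the protocol specification; it shows how the combined string \(r^i_\mathrm{Rec}= a^i_\mathrm{Rec}\oplus b^i_\mathrm{Rec}\) is parsed as the OT choice bit \(c_i\) followed by the random tape \(\tau ^i_\mathrm{Rec}\).

```python
# Illustrative sketch (hypothetical lengths/encoding) of the coin-tossing
# step: the receiver's committed share a and the sender's share b are XORed,
# and the result is parsed as the OT choice bit c_i followed by the random
# tape tau_i, mirroring r_Rec^i = a_Rec^i XOR b_Rec^i = c_i || tau_i.
import secrets

TAPE_LEN = 16  # hypothetical randomness length in bytes

def derive_ot_input(a: bytes, b: bytes):
    r = bytes(x ^ y for x, y in zip(a, b))
    c_i = r[0] & 1           # first bit serves as the receiver's choice bit
    tau_i = r[1:]            # remainder serves as the receiver's random tape
    return c_i, tau_i

a = secrets.token_bytes(1 + TAPE_LEN)
b = secrets.token_bytes(1 + TAPE_LEN)
c_i, tau_i = derive_ot_input(a, b)
assert c_i in (0, 1) and len(tau_i) == TAPE_LEN
# As long as either party's share is uniform, r (and hence c_i) is uniform.
```

The point of the XOR combination is that neither party alone controls \(r^i_\mathrm{Rec}\): uniformity of either share makes the result uniform.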

Note that the only difference between the simulation and the real execution is in the way the messages in Stage 4 are generated. In the simulation, a value u is extracted from the malicious receiver and then fed to the \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\) functionality, and the simulation is completed based on the output returned from the functionality. Furthermore, recall that in Stage 5 the receiver learns both of the sender’s inputs in all sessions \(i \in \Gamma \); hence, the receiver learns one input for every session in which it behaved honestly and two inputs for every session in which it deviated or that is included in \(\Gamma \). Proving that the event in which the adversary deviates in more than n OT executions happens only with negligible probability implies that the adversary learns fewer than 20n shares in total. Therefore, at least one of the shared secrets is completely hidden due to the 10n-out-of-18n secret sharing scheme. To complete the simulation, the simulator identifies which of the two values \(v_0\) and \(v_1\) is learned by the receiver (by counting how many shares that party obtains) and fixes that to be the receiver’s input. Finally, indistinguishability follows from the defensible privacy of the OT protocol with respect to the receiver.
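The counting argument above can be checked mechanically. The following sketch is illustrative only; `max_shares_learned` is a hypothetical helper using the set sizes from the text (\(|\Delta |=18n\), \(|\Gamma |=n\)), and it verifies that whenever \(|\Phi |<n\) the receiver obtains fewer than 20n shares overall, so at most one secret can reach the 10n reconstruction threshold.

```python
# Illustrative counting check; max_shares_learned is a hypothetical helper,
# not part of the protocol. One share is learned per honest session in
# Delta - Gamma - Phi, and both shares in at most |Gamma| + |Phi| sessions.
def max_shares_learned(n, phi, gamma=None):
    gamma = n if gamma is None else gamma   # |Gamma| = n in the protocol
    delta = 18 * n                          # |Delta| = 20n - 2n
    honest = delta - gamma - phi            # one share per honest session
    return honest + 2 * gamma + 2 * phi     # = 18n + gamma + phi

for n in range(1, 50):
    for phi in range(n):                    # the simulator ensures |Phi| < n
        total = max_shares_learned(n, phi)
        # k0 + k1 <= total < 20n, hence min(k0, k1) < 10n and one secret
        # remains hidden under the 10n-out-of-18n threshold.
        assert total < 20 * n
```

The design point is that the bound degrades gracefully: each deviation or opened coordinate adds exactly one extra share, so the 2n slack between 18n and 20n absorbs \(|\Gamma |=n\) openings plus up to \(n-1\) deviations.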

More formally, we begin with a proof that the probability that the simulator returns \({\mathsf {fail}}\) is negligible, and then condition on this event not occurring. Namely, we prove that the simulated and real views are computationally indistinguishable conditioned on the event that the simulator did not output \({\mathsf {fail}}\).

Claim 3.1

The probability that \({\mathcal{S}}\) returns \({\mathsf {fail}}\) is negligible.

Proof

Note first that the only place where the simulator fails is in Step 4a, when \(|\Phi | \ge n\). We now show that this event occurs with negligible probability. In other words, we need to show that the probability that the corrupted receiver deviates from the honest receiver’s strategy in at least n of the OT executions while not getting caught by the sender is negligible. Formally, let \(\mathrm Bad\) denote the event in which the corrupted receiver deviates in at least n coordinates. Note first that the simulator can easily identify when event \(\mathrm Bad\) occurs, since it knows the random tapes and the inputs the receiver must use in all executions and can therefore identify the coordinates in which the receiver deviates. Next, we show that conditioned on event \(\mathrm Bad\) occurring, the probability that \(\Gamma _\mathrm{Sen}\) does not contain any of the n deviated coordinates is negligible. This implies that the probability that \({\mathcal{S}}\) returns \({\mathsf {fail}}\) is negligible.

Denote by \(\Phi \) the set of n coordinates in which the receiver deviates and define the bins \(A_j = \{20(j-1)+1,\ldots ,20j\}\) for all \(j\in [n]\). By the pigeonhole principle it holds that at least \(\lfloor n/20 \rfloor \) bins intersect with \(\Phi \). In addition, we recall that \(\Gamma _\mathrm{Sen}\) is chosen by the simulator by picking one element from each bin independently of \(\Phi \) and uniformly at random. Then the probability that \(\Gamma _\mathrm{Sen}\cap \Phi = \emptyset \) is at most \((19/20)^{\lfloor n/20 \rfloor }\) which is negligible in n. This concludes the proof of the claim. \(\square \)
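The bound above can be sanity-checked numerically. In the sketch below (illustrative only; `miss_probability` is a hypothetical helper), the probability \(\Pr [\Gamma _\mathrm{Sen}\cap \Phi = \emptyset ]\) is computed exactly as the product over bins of the per-bin miss probability, and the assertions confirm it never exceeds \((19/20)^{\lfloor n/20 \rfloor }\) for random deviation sets of size n.

```python
# Illustrative check of the cut-and-choose bound; not part of the protocol.
# Gamma_Sen picks one uniform element from each bin A_j = {20(j-1)+1,...,20j},
# so Pr[Gamma_Sen ∩ Phi = ∅] is the product over bins of (20 - |A_j ∩ Phi|)/20.
from math import prod
import random

def miss_probability(n, phi):
    bins = [set(range(20 * j + 1, 20 * j + 21)) for j in range(n)]
    return prod((20 - len(b & phi)) / 20 for b in bins)

rng = random.Random(0)
for n in (20, 40, 100):
    for _ in range(50):
        phi = set(rng.sample(range(1, 20 * n + 1), n))  # n deviations
        # Every bin hit by Phi contributes a factor <= 19/20, and at least
        # floor(n/20) bins are hit, matching the (19/20)^floor(n/20) bound.
        assert miss_probability(n, phi) <= (19 / 20) ** (n // 20)
```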

Next, we prove that the receiver’s view in both executions is computationally indistinguishable, assuming that the simulator did not abort the execution. More formally, denoting the simulated execution by \(\pi _{\scriptscriptstyle \mathrm {IDEAL}}\), we prove the following statement.

Claim 3.2

The following two distribution ensembles are computationally indistinguishable,

$$\begin{aligned} \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}}_{\pi _{\scriptscriptstyle \mathrm {OT}}^{\scriptscriptstyle \mathrm {ML}}, \mathcal{A}, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}} {\mathop {\approx }\limits ^\mathrm{c}}\big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}. \end{aligned}$$

Proof

The security argument proceeds in a sequence of hybrid games starting from the simulated execution toward the real execution. We denote by \(\pi _{{\scriptscriptstyle \mathrm {HYBRID}}_i}\) the receiver’s view in the ith hybrid game.

Hybrid 1 :

In this game, we define a simulator \({\mathcal{S}}_1\) that is identical to \({\mathcal{S}}\) except for the way the sender’s message is generated in Stage 4. More precisely, the simulator modifies the way \(\beta _i^{1\oplus u}\) is computed for all \(i \in \Delta - \Gamma -\Phi \). Recall first that \({\mathcal{S}}\) sets

$$\begin{aligned} \beta _i^{1\oplus u} = \rho _i^{1 \oplus u} \oplus s_i^{(1\oplus u)\oplus \alpha _i} = \rho _i^{1 \oplus u} \oplus s_i^{1 \oplus c_i}. \end{aligned}$$

Instead, \({\mathcal{S}}_1\) will choose \(\beta _i^{1\oplus u}\) at random, which can be viewed as using a masking element that is independent of \(s_i^{1 \oplus c_i}\). Intuitively, we claim that the simulated view and the view generated in hybrid 1 are computationally indistinguishable because for every \(i \in \Delta - \Gamma -\Phi \) the receiver generates the OT messages in session i of Stage 2 honestly (i.e., using the input \(c_i\) and random tape \(\tau ^i_\mathrm{Rec}\)), and by the defensible privacy of the OT protocol the receiver cannot distinguish the input \(s_i^{1 \oplus c_i}\) from a random string.

Proof Intuition The goal in this hybrid is to remove the sender’s second input from the receiver’s view. The idea is that in all parallel OT executions in which the receiver does not cheat, identified by the set \(\Delta - \Gamma - \Phi \), defensible privacy guarantees that the receiver cannot learn the input \(s_i^{1\oplus c_i}\). In order to carry out the security reduction, we need to reduce a cheating receiver to an adversary that violates the defensible privacy of the underlying OT protocol. In the main sequence of hybrids \(H_1^1,H_1^2,\ldots \), we change the masking values in the executions corresponding to \(\Delta - \Gamma - \Phi \) one at a time. To argue indistinguishability between the eth and \((e+1)\)st hybrids, we need to do two things. First, we need to decouple the actions in these executions (which are simulated) from the coin-tossing stage. We can rely on the hiding of the commitment for this; however, to carry out this reduction we need to guess which coordinate is the eth index in \(\Delta - \Gamma - \Phi \). Toward this, we consider variants of the hybrids, \(\bar{H}_1^e\) and \(\bar{H}_1^{e+1}\), where we make a guess and isolate the indistinguishability on the guessed coordinate. Next, we consider nested hybrids \(\tilde{H}_1^e\) and \(\tilde{H}_1^{e+1}\) where we rely on the hiding of the commitment scheme by having the sender commit to the all-zeros string in the coin-tossing stage. Formally, we prove this in the following claim.

Claim 3.3

The following two distribution ensembles are computationally indistinguishable,

$$\begin{aligned} \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}} \equiv \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}}_{\pi _{{\scriptscriptstyle \mathrm {HYBRID}}_1},{\mathcal{S}}_1, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}. \end{aligned}$$

Proof

Toward proving this claim, we introduce a sequence of intermediate hybrid experiments \(H^e_1\) for \(e = 0,\ldots , 18n\), where in hybrid \(H^e_1\) we consider a simulator \({\mathcal{S}}_1^e\) that proceeds identically to \({\mathcal{S}}\) with the exception that it follows \({\mathcal{S}}_1\)’s strategy for the first e indices in \(\Delta - \Gamma -\Phi \) regarding the generation of \(\beta _i^{1\oplus u}\) (i.e., for the first e sessions where the receiver proceeded honestly). By definition, we have that experiment \(H_1^0\) proceeds identically to the ideal simulation and \(H_1^{18n}\) proceeds identically to hybrid 1. Denote the view output in hybrid \(H_1^e\) by \(\mathsf{hyb}_e(n)\) and assume by contradiction that there exists an adversary \(\mathcal{A}\) (controlled by \(\mathcal{Z}\)), a distinguisher D, a polynomial \(p(\cdot )\) and infinitely many n’s such that

$$\begin{aligned} \Big |\Pr [D(\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}, \mathcal{Z}}(n)) = 1] - \Pr [D(\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}}_{\pi _{{\scriptscriptstyle \mathrm {HYBRID}}_1},{\mathcal{S}}_1, \mathcal{Z}}(n)) = 1]\Big | \ge \frac{1}{p(n)}~. \end{aligned}$$

Using a standard hybrid argument it follows that there exists an \(e \in [18n]\) such that

$$\begin{aligned} \Big |\Pr [D(\mathsf{hyb}_{e}(n)) = 1] - \Pr [D(\mathsf{hyb}_{e-1}(n)) = 1]\Big | \ge \frac{1}{18np(n)}~. \end{aligned}$$

Next, we plan to exploit the above observation in order to construct a defensible adversary \(\mathcal{A}'\) that violates the receiver’s defensible privacy relative to \(\widehat{\pi }_{\scriptscriptstyle \mathrm {OT}}\) in the sense of Definition 2.6. At a high level, \(\mathcal{A}'\) picks a random \(j\in [20n]\) and externally forwards \(\mathcal{A}\)’s messages within the jth execution of the OT protocol, where j serves as the guess for the eth execution in \(\Delta - \Gamma - \Phi \). \(\mathcal{A}'\) then emulates the rest of the OT executions, playing the role of the sender. In order to simplify the analysis and allow \(\mathcal{A}'\) to carry out the reduction properly (where the generated randomness within the coin-tossing phase is disassociated from the OT executions), we consider the following additional hybrid executions.

First, we consider a slight variation of \(H_1^{e-1}\) (resp., \(H_1^{e}\)), denoted by \({\overline{H}}_{e-1}\) (resp., \({\overline{H}}_e\)), and a random variable J that denotes a randomly chosen index from [20n], picked at the onset of the hybrid execution. Moreover, the experiment is aborted if the chosen index does not correspond to the eth execution in \(\Delta - \Gamma - \Phi \). We say that index J is \(\mathrm Bad\) if the experiment aborts. Note that experiments \({\overline{H}}_{e-1}\) and \({\overline{H}}_e\) proceed identically to \(H_1^{e-1}\) and \(H_1^{e}\), respectively, conditioned on J not being \(\mathrm Bad\). This is due to the fact that J is chosen independently of the experiments. Moreover, relying on the fact that the index of the eth execution can take at most 20n values, we have that

$$\begin{aligned} \Pr [J \text{ is } \text{ not } \mathrm Bad] = \frac{1}{20n}. \end{aligned}$$

Therefore, if \({\overline{\mathsf{hyb}}}_{e-1}(n)\) and \({\overline{\mathsf{hyb}}}_{e}(n)\), respectively, correspond to \(\mathcal{A}\)’s views in \({\overline{H}}_{e-1}\) and \({\overline{H}}_e\), then

$$\begin{aligned} \Pr [D({\overline{\mathsf{hyb}}}_{e}(n)) = 1]&= \Pr [D({\overline{\mathsf{hyb}}}_{e}(n)) = 1 \wedge \ J \text{ not } \mathrm Bad] \\&\quad + \Pr [D({\overline{\mathsf{hyb}}}_{e}(n)) = 1 \wedge \ J \text{ is } \mathrm Bad]\\&= \left( \frac{1}{20n}\right) \Pr [D({\overline{\mathsf{hyb}}}_{e}(n)) = 1\ |\ J \text{ not } \mathrm Bad] \\&\quad + \left( 1-\frac{1}{20n}\right) \Pr [D({\overline{\mathsf{hyb}}}_{e}(n)) = 1\ |\ J \text{ is } \mathrm Bad]\\&= \left( \frac{1}{20n}\right) \Pr [D(\mathsf{hyb}_{e}(n)) = 1]+ \left( 1-\frac{1}{20n}\right) \Pr [D(\bot ) = 1]. \end{aligned}$$

Similarly,

$$\begin{aligned} \Pr [D({\overline{\mathsf{hyb}}}_{e-1}(n)) = 1] {=} \left( \frac{1}{20n}\right) \Pr [D(\mathsf{hyb}_{e-1}(n)) {=} 1]{+} \left( 1-\frac{1}{20n}\right) \Pr [D(\bot ) = 1]. \end{aligned}$$

Therefore,

$$\begin{aligned}&\Big |\Pr [D({\overline{\mathsf{hyb}}}_{e}(n)) = 1] - \Pr [D({\overline{\mathsf{hyb}}}_{e-1}(n)) = 1]\Big | \nonumber \\&\quad = \left( \frac{1}{20n}\right) \Big |\Pr [D(\mathsf{hyb}_{e}(n)) = 1] - \Pr [D(\mathsf{hyb}_{e-1}(n)) = 1]\Big | \nonumber \\&\quad \ge \left( \frac{1}{20n}\right) \frac{1}{18np(n)}. \end{aligned}$$
(1)
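The algebra behind Eq. 1 is the observation that the \(J\)-is-\(\mathrm Bad\) term \(\Pr [D(\bot ) = 1]\) is identical in both hybrids and cancels in the difference. A quick numerical check (purely illustrative, with arbitrary probabilities):

```python
# Illustrative check that conditioning on the 1/(20n) guess scales the
# distinguishing gap by exactly 1/(20n), since the J-is-Bad term cancels.
def bar(p_good, p_hyb, p_bot):
    # Pr[D = 1] in the guessed variant of the hybrid: with probability
    # p_good the run equals the original hybrid, otherwise it aborts.
    return p_good * p_hyb + (1 - p_good) * p_bot

n = 10
p_good = 1 / (20 * n)
p_e, p_e1, p_bot = 0.73, 0.41, 0.5   # arbitrary illustrative probabilities
gap = abs(bar(p_good, p_e, p_bot) - bar(p_good, p_e1, p_bot))
assert abs(gap - p_good * abs(p_e - p_e1)) < 1e-12
```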

Before we provide the description of \(\mathcal{A}'\), we consider our second modification and define hybrids \({\widetilde{H}}_{e-1}\) and \({\widetilde{H}}_e\) as follows. Namely, in these new experiments we slightly modify the sender’s messages in the coin-tossing phase and ask the sender to commit to the all-zeros string of the appropriate length instead of committing to a uniform string \(a^J_\mathrm{Sen}\). Recalling that \(a^J_\mathrm{Sen}\) and \(b^J_\mathrm{Sen}\) determine the sender’s input in the Jth execution of the OT protocol, we instruct the sender to commit to 0 so that \(\mathcal{A}'\) can forward the Jth execution’s messages to an external OT sender in the reduction described next. More precisely, \({\widetilde{H}}_{e-1}\) (resp., \({\widetilde{H}}_e\)) proceeds exactly as \({\overline{H}}_{e-1}\) (resp., \({\overline{H}}_e\)) with the exception that we modify the honest sender’s message in the coin-tossing stage, where it commits to the all-zeros string instead of \(a^J_\mathrm{Sen}\). Observe that this change does not affect the cut-and-choose phase, where the sender is required to reveal randomness for indices in \(\Gamma _\mathrm{Rec}\), because if \(J \in \Gamma _\mathrm{Rec}\cup \Gamma \) then the experiment is aborted by definition. Denote by \({\widetilde{\mathsf{hyb}}}_{e-1}(n)\) and \({\widetilde{\mathsf{hyb}}}_{e}(n)\) the views of adversary \(\mathcal{A}\) in \({\widetilde{H}}_{e-1}\) and \({\widetilde{H}}_e\), respectively. Then from the computational hiding property of the commitment scheme used by the sender in the coin-tossing stage it follows that there exists a negligible function \(\nu (\cdot )\) such that for all sufficiently large n’s,

$$\begin{aligned} \Big |\Pr [D({\overline{\mathsf{hyb}}}_{e}(n)) = 1] - \Pr [D({\widetilde{\mathsf{hyb}}}_{e}(n)) = 1]\Big |\le & {} \nu (n)~ \end{aligned}$$
(2)
$$\begin{aligned} \Big |\Pr [D({\overline{\mathsf{hyb}}}_{e-1}(n)) = 1] - \Pr [D({\widetilde{\mathsf{hyb}}}_{e-1}(n)) = 1]\Big |\le & {} \nu (n)~. \end{aligned}$$
(3)

Using Eq. 1 we obtain that for all sufficiently large n’s,

$$\begin{aligned} \Big |\Pr [D({\widetilde{\mathsf{hyb}}}_{e}(n)) = 1] - \Pr [D({\widetilde{\mathsf{hyb}}}_{e-1}(n)) = 1]\Big |&\ge \frac{1}{(18n)20np(n)} - 2\nu (n)\ge \frac{1}{q(n)},~ \end{aligned}$$
(4)

where \(q(\cdot )\) is the polynomial \(q(n) = 2\cdot 18\cdot 20\cdot n^2p(n) = 720n^2p(n)\). Fix an n for which this holds. We now show how to define \(\mathcal{A}'\) and a distinguisher \(D'\) that violate the defensible privacy of \(\widehat{\pi }_{\scriptscriptstyle \mathrm {OT}}\) with respect to a corrupted receiver. More specifically, \(\mathcal{A}'\) internally emulates experiment \({\widetilde{H}}_{e-1}\) by running the simulation strategy of \({\mathcal{S}}_1^{e-1}\) with the malicious receiver \(\mathcal{A}\). Let \((c_J,\tau _\mathrm{Rec}^J)\) denote the input and randomness that the honest receiver is supposed to use in the internal Jth execution. Recall that this is determined by \(a^J_\mathrm{Rec}\oplus b^J_\mathrm{Rec}\) and is known to the simulator, as it extracts the adversary’s commitments. Next, \(\mathcal{A}'\) plays the role of the sender in the executions of \(\widehat{\pi }_{\scriptscriptstyle \mathrm {OT}}\) with the exception that it externally relays the messages of the adversary (acting as the receiver) in the Jth execution of the oblivious transfer protocol from Stage 2. Following the oblivious transfer executions, \(\mathcal{A}'\) continues the internal emulation until the end of Stage 3. If the experiment aborts in the internal emulation (which happens if J is \(\mathrm Bad\)), then \(\mathcal{A}'\) aborts. Otherwise, there is a good defense for the receiver in the Jth execution, namely \((c_J,\tau _\mathrm{Rec}^J)\). Let \(\mathsf{STATE}\) be the complete view of experiment \({\widetilde{H}}_{e-1}\), which includes the input and random tape of \(\mathcal{A}\) and the simulator (playing the sender), as well as the partial transcript of the messages exchanged with \(\mathcal{A}\) until Stage 3. \(\mathcal{A}'\) outputs \((c_J,\tau _\mathrm{Rec}^J)\) as its defense and \(\mathsf{STATE}\) as its output.

Upon receiving \((\mathsf{view},s)\), where \(\mathsf{view}\) is \(\mathcal{A}'\)’s view and s is a string (as specified in Definition 2.6), the distinguisher \(D'\) proceeds as follows. It first extracts \(\mathsf{STATE}\) from the view and then completes the internal emulation of the experiment by playing the role of the sender in Stages 4 and 5. We note that \(D'\) has all the information it needs as part of \(\mathsf{STATE}\) to complete the execution, except for the sender’s inputs \((s_J^0,s_J^1)\) that are required to compute \(\beta _J^0\) and \(\beta _J^1\) in Stage 4. Note also that the distinguisher can use \(\mathcal{A}'\)’s valid defense \((c_J,\tau _\mathrm{Rec}^J)\) to compute one of the sender’s two inputs, namely \(s_J^{c_J}\). For the other input, \(D'\) uses s; i.e., it sets \(s_J^{1\oplus c_J} = s\) and completes the experiment using these inputs. Finally, \(D'\) invokes D on \(\mathcal{A}\)’s view and outputs whatever D outputs. It follows from the construction that the view on which D is invoked is distributed identically to \(\mathcal{A}\)’s view in \({\widetilde{\mathsf{hyb}}}_{e-1}(n)\) if s is the sender’s other input, namely \(s_J^{1\oplus c_J}\), whereas if s is a random string then the view is distributed identically to \({\widetilde{\mathsf{hyb}}}_e(n)\). That is,

$$\begin{aligned} D'(\Gamma (\text{ View }_{\mathcal{A}}[\mathrm{Sen}(1^n,(U^n_0,U^n_1)),\mathcal{A}(1^n)],U^n_{1-b})) = D({\widetilde{\mathsf{hyb}}}_{e-1}(n)) \end{aligned}$$

and

$$\begin{aligned} D'(\Gamma (\text{ View }_{\mathcal{A}}[\mathrm{Sen}(1^n,(U^n_0,U^n_1)),\mathcal{A}(1^n)],\bar{U}^n)) = D({\widetilde{\mathsf{hyb}}}_e(n)), \end{aligned}$$

where \(\Gamma (v,*) = (v,*)\) if v contains a valid defense for \(\mathcal{A}'\). From Equation 4, it follows that the difference is non-negligible and that \(\mathcal{A}'\) and \(D'\) contradict the defensible privacy of protocol \(\widehat{\pi }_{{\scriptscriptstyle \mathrm {OT}}}\) with respect to a corrupted receiver. This concludes the proof of the claim. \(\square \)

Hybrid 2 :

In this hybrid game, there is no trusted party that computes functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\). Instead, we define a simulator \({\mathcal{S}}_2\) that is given the sender’s real inputs \(v_0\) and \(v_1\). Furthermore, \({\mathcal{S}}_2\) uses these inputs in Stage 5 of the execution. We then claim that the receiver’s views in hybrids 1 and 2 are statistically close, because the probability that the receiver learns at least 10n shares for both \(u=0\) and \(u=1\) is negligible. More formally,

Claim 3.4

The following two distribution ensembles are statistically indistinguishable,

$$\begin{aligned} \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}}_{\pi _{{\scriptscriptstyle \mathrm {HYBRID}}_1},{\mathcal{S}}_1, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}} {\mathop {\approx }\limits ^\mathrm{s}}\big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}}_{\pi _{{\scriptscriptstyle \mathrm {HYBRID}}_2},{\mathcal{S}}_2, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}. \end{aligned}$$

Proof

This follows from the facts that \(|\Delta | = 18n\), \(|\Gamma | = n\), and \(|\Phi | \le n\) with overwhelming probability (relying on the proof of Claim 3.1), and that the masking values that are used in Stage 5 are independent of the input to the OT executions in Stage 4. Specifically, the overall number of shares that the receiver learns is bounded by \(|\Delta - \Gamma - \Phi | + 2|\Gamma | + 2|\Phi | \le 20n\), and the remaining shares are perfectly hidden (as their masking strings are not used elsewhere in the protocol). \(\square \)

Hybrid 3 :

In this game, we define a simulator \({\mathcal{S}}_3\) that is identical to \({\mathcal{S}}_2\) except for the way the sender’s message is generated in Stage 4. More precisely, for all \(i \in \Delta - \Gamma -\Phi \) it modifies the way \(\beta _i^{1\oplus u}\) is computed. Recall that \({\mathcal{S}}_2\) sets it to be a random string; \({\mathcal{S}}_3\) will instead set

$$\begin{aligned} \beta _i^{1\oplus u} = \rho _i^{1 \oplus u} \oplus s_i^{1 \oplus c_i}. \end{aligned}$$

Indistinguishability of hybrids 2 and 3 follows using the same proof as in Claim 3.3. Therefore, we have that the following ensembles are computationally indistinguishable.

$$\begin{aligned} \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}}_{\pi _{{\scriptscriptstyle \mathrm {HYBRID}}_2},{\mathcal{S}}_2, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}} {\mathop {\approx }\limits ^\mathrm{c}}\big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}}_{\pi _{{\scriptscriptstyle \mathrm {HYBRID}}_3},{\mathcal{S}}_3, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}. \end{aligned}$$

Observe that hybrid 3 is identical to the real execution. This concludes the proof of Claim 3.2. \(\square \)

Simulating the corrupted sender In this case, \({\mathcal{S}}\) proceeds as follows:

  1.

    \({\mathcal{S}}\) picks 20n random strings \(a^1_\mathrm{Rec},\ldots ,a^{20n}_\mathrm{Rec}\) and emulates the ideal functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) by invoking 20n times the commitment phase of \(\omega _R\) with inputs \(a^1_\mathrm{Rec},\ldots ,a^{20n}_\mathrm{Rec}\), against \(\mathcal{A}\) that plays the role of receiver for the commitment scheme. At the end of this phase, \({\mathcal{S}}\) obtains the output \(((\widetilde{{\mathsf {trans}}_1},\gamma _1),\ldots ,(\widetilde{{\mathsf {trans}}_{20n}},\gamma _{20n}))\) and receives from the adversary 20n random strings \(b^1_\mathrm{Rec},\ldots , b^{20n}_\mathrm{Rec}\).

    Next, \({\mathcal{S}}\) emulates functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) in Stage 1 and invokes 20n times the commitment scheme \(\omega _L\) with \(\mathcal{A}\) (that plays the role of the committer), obtaining \(((\widetilde{{\mathsf {trans}}'_1},a^1_\mathrm{Sen}),\ldots ,(\widetilde{{\mathsf {trans}}'_{20n}},a^{20n}_\mathrm{Sen}))\). It internally records \(a^1_\mathrm{Sen},\ldots ,a^{20n}_\mathrm{Sen}\) and further picks 20n random strings \(b^1_\mathrm{Sen},\ldots ,b^{20n}_\mathrm{Sen}\), forwarding them to the adversary. The simulator also computes \(r^i_\mathrm{Sen}= a^i_\mathrm{Sen}\oplus b^i_\mathrm{Sen}\) and then views \(r^i_\mathrm{Sen}= s_i^0 || s_i^1|| \tau ^i_\mathrm{Sen}\); \((s_i^0,s_i^1)\) is the input an honest sender must use in the ith OT protocol execution in Stage 3, together with randomness \(\tau ^i_\mathrm{Sen}\).

  2.

    In Stage 2, the simulator participates with the adversary in 20n executions of the OT protocol \(\widehat{\pi }_{\scriptscriptstyle \mathrm {OT}}\), playing the role of the honest receiver. Since the simulator knows the input and randomness that the honest sender must use in each of the OT executions, it can identify the coordinates in which the sender deviates. We denote this set of coordinates by \(\Phi \).

  3.

    In Stage 3, \({\mathcal{S}}\) receives \((q^1_\mathrm{Sen},\ldots ,q^n_\mathrm{Sen})\) from the sender and decommits the subset of values that corresponds to the coordinates in \(\Gamma _\mathrm{Sen}\) as determined by \((q^1_\mathrm{Sen},\ldots ,q^n_\mathrm{Sen})\), playing the role of the receiver. Next, the simulator picks n random numbers \((q^1_\mathrm{Rec},\ldots ,q^n_\mathrm{Rec})\) from \(\{1,\ldots ,20\}^n\) and sends them to the sender. Upon receiving the decommitments from the sender, the simulator verifies the decommitments as the honest receiver would with respect to \((q^1_\mathrm{Rec},\ldots ,q^n_\mathrm{Rec})\) and halts in case of a mismatch, outputting the simulated transcript thus far. Finally, it samples a subset \(\Gamma \) from [20n] of size n and emulates functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}\) by invoking the commitment scheme \(\omega _R\) with \(\mathcal{A}\) (that plays the role of the receiver) n times on input \(\Gamma \), obtaining \((({\mathsf {trans}}'_1,\gamma _1),\ldots ,({\mathsf {trans}}'_{20n},\gamma _{20n}))\). Let \(\Delta = [20n] - \Gamma _\mathrm{Rec}- \Gamma _\mathrm{Sen}- \Phi \).

  4.

    In Stage 4, the simulator proceeds as the honest receiver would with input \(u=0\) and extracts the sender's inputs \(v_0,v_1\). Specifically, the simulator knows all the inputs \(\{(s^0_i,s^1_i)\}_{i\in \Delta }\) of the sender to the OT executions in Stage 2 and extracts the two sets of shares \(\{\rho _i^0\}_{i \in \Delta }\) and \(\{\rho _i^1\}_{i \in \Delta }\).

  5.

    In Stage 5, the simulator plays the role of the honest receiver and checks whether the inputs and randomness revealed by the sender are consistent with the OT sessions that correspond to \(\Delta \cap \Gamma \). In case of a mismatch, the simulator halts, outputting the simulated transcript thus far. Next, the simulator checks whether \(\tilde{\rho }_0\) and \(\tilde{\rho }_1\) agree with some respective code words \(w_0\) and \(w_1\) on 16n locations. If no such code words exist, the simulator records a default value; otherwise, it records the code words \(w_0\) and \(w_1\). It then runs a second consistency check to verify whether these code words agree with \(\beta _j^u \oplus s_j^{u \oplus \alpha _j}\) for all coordinates \(j\in \Gamma \). If not, it records a default value. Finally, the simulator sends the recorded values to \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\).
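The 16n-agreement test above is a simple counting check; the following is a sketch under the assumption that shares and code words are given as indexable mappings (the helper names and the `threshold` parameter are ours, mirroring the 16n bound in the text):

```python
def agrees_on(shares, codeword, coords):
    """Return the coordinates in `coords` on which the candidate shares
    match the code word."""
    return {i for i in coords if shares[i] == codeword[i]}

def accept(shares, codeword, coords, threshold):
    """Accept the code word iff it agrees on at least `threshold` points."""
    return len(agrees_on(shares, codeword, coords)) >= threshold
```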

We next prove the following claim.

Claim 3.5

The following two distribution ensembles are computationally indistinguishable,

$$\begin{aligned} \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {EXTCOM}}}_{\pi _{\scriptscriptstyle \mathrm {OT}}^{\scriptscriptstyle \mathrm {ML}}, \mathcal{A}, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}} {\mathop {\approx }\limits ^\mathrm{c}}\big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}. \end{aligned}$$

Proof

The security argument proceeds in a sequence of hybrid games starting from the simulated execution toward the real execution. We denote by \(\pi _{{\scriptscriptstyle \mathrm {HYBRID}}_i}\) the receiver’s view in the ith hybrid game.

Hybrid 1: :

In this hybrid game, there is no trusted party that computes functionality \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\). Instead, we define a simulator \({\mathcal{S}}_1\) that is given the receiver's real input u and proceeds identically to \({\mathcal{S}}\), except for the way it generates the receiver's message in Stage 4. More precisely, \({\mathcal{S}}_1\) uses the real input u instead of 0 in order to compute \(\alpha _i\) for all \(i \in \Delta -\Phi \). Indistinguishability of the simulation from the view in hybrid 1 follows from the receiver privacy of the OT protocol.

We follow an approach similar to the proof of Claim 3.3, considering a sequence of hybrids \(H_1^1,H_1^2,\ldots \) in which we replace the sender's input in the parallel OT executions where the receiver proceeded honestly. Then we decouple the sender's actions from the coin-tossing stage by considering nested hybrids. Finally, we need an additional step: after replacing the sender's inputs in the executions in \(\Delta -\Phi \), we rely on the secret sharing scheme to conclude that one of the sender's two inputs has been removed.

More formally, we prove the following claim.

Claim 3.6

The following two distribution ensembles are computationally indistinguishable,

$$\begin{aligned} \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}{\mathop {\approx }\limits ^\mathrm{c}}\big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}}_{\pi _{{\scriptscriptstyle \mathrm {HYBRID}}_1},{\mathcal{S}}_1, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}. \end{aligned}$$

Proof

Recall that the only difference between hybrid 1 and the simulated view is in the way that the messages in Stage 4 are generated. Specifically, in the simulated view \({\mathcal{S}}\) uses \(u=0\) in all sessions to compute \(\alpha _i\), whereas in hybrid 1 the receiver uses the real u. Clearly, if the real input equals 0 then the views are identical and the claim follows immediately. Therefore, it suffices to consider the case \(u=1\). Toward proving this claim, we introduce a sequence of intermediate hybrid experiments \(H^e_1\) for \(e = 0,\ldots , 20n\). Namely, in hybrid \(H^e_1\) we consider a simulator \({\mathcal{S}}^e\) that proceeds identically to \({\mathcal{S}}\), with the exception that it uses \(u=1\) in the first e sessions in \(\Delta \) in order to compute \(\alpha _i\). By construction, experiment \(H_1^0\) proceeds identically to the ideal simulation and \(H_1^{20n}\) proceeds identically to hybrid 1. Denote the view output in hybrid \(H_1^e\) by \(\mathsf{hyb}_e(n)\) and assume by contradiction that there exist a distinguisher D, a polynomial \(p(\cdot )\) and infinitely many n's such that

$$\begin{aligned} |\Pr [D(\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}, \mathcal{Z}}(n)) = 1] - \Pr [D(\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}}_{\pi _{{\scriptscriptstyle \mathrm {HYBRID}}_1},{\mathcal{S}}_1, \mathcal{Z}}(n))=1]| \ge \frac{1}{p(n)}~. \end{aligned}$$

Using a standard hybrid argument, it follows that there exists an \(e \in \{1,\ldots ,20n\}\) such that

$$\begin{aligned} |\Pr [D(\mathsf{hyb}_{e}(n)) = 1] - \Pr [D(\mathsf{hyb}_{e-1}(n)) = 1]| \ge \frac{1}{20np(n)}~. \end{aligned}$$

As in the proof of Claim 3.3, we consider experiments \({\overline{H}}_{e-1}\) and \({\overline{H}}_e\) in which the simulator samples a random \(J \in [20n]\) and aborts if J is not the eth session in \(\Delta \). Next, we consider modified hybrids \({\widetilde{H}}_{e-1}\) and \({\widetilde{H}}_e\), where we slightly modify the receiver's messages in the coin-tossing phase and ask the receiver to commit to the all-zeros string of appropriate length instead of committing to a uniform string \(a^J_\mathrm{Rec}\). Recalling that \(a^J_\mathrm{Rec}\) and \(b^J_\mathrm{Rec}\) determine the receiver's input in the Jth execution of the OT protocol, we instruct the receiver to commit to 0 so that, in the reduction described next, we can forward the messages of the Jth execution to an external OT receiver. More precisely, \({\widetilde{H}}_{e-1}\) (resp., \({\widetilde{H}}_e\)) proceeds exactly as \({\overline{H}}_{e-1}\) (resp., \({\overline{H}}_e\)), with the exception that the honest receiver's message in the coin-tossing stage is modified to commit to the all-zeros string instead of \(a^J_\mathrm{Rec}\). Denote by \({\widetilde{\mathsf{hyb}}}_{e-1}(n)\) and \({\widetilde{\mathsf{hyb}}}_{e}(n)\) the random variables corresponding to the views of adversary \(\mathcal{A}\) in \({\widetilde{H}}_{e-1}\) and \({\widetilde{H}}_e\), respectively. Then, following the same proof as in Claim 3.3, we can conclude that there exists a polynomial \(q(\cdot )\) such that

$$\begin{aligned} \Big |\Pr [D({\widetilde{\mathsf{hyb}}}_{e}(n)) = 1] - \Pr [D({\widetilde{\mathsf{hyb}}}_{e-1}(n)) = 1]\Big |&\ge \frac{1}{q(n)}. \end{aligned}$$
(5)

Without loss of generality, assume that

$$\begin{aligned} \Pr [D({\widetilde{\mathsf{hyb}}}_{e}(n)) = 1] - \Pr [D({\widetilde{\mathsf{hyb}}}_{e-1}(n)) = 1] \ge \frac{1}{q(n)}~. \end{aligned}$$

We use the above to construct a malicious sender \(\mathcal{A}'\) that violates the receiver's privacy in the oblivious transfer protocol \(\widehat{\pi }_{\scriptscriptstyle \mathrm {OT}}\). Specifically, \(\mathcal{A}'\) internally emulates the experiment \({\widetilde{H}}_{e-1}\) by running the simulation strategy of \({\mathcal{S}}^{e-1}\) with the malicious sender \(\mathcal{A}\), except for the following difference. \(\mathcal{A}'\) relays the messages of the sender in the Jth execution of the oblivious transfer protocol from Stage 2 to an external receiver with input c. Following the oblivious transfer executions, it continues the internal emulation until Stage 4. If J is not the eth session in \(\Delta \), then \(\mathcal{A}'\) follows hybrid \({\widetilde{H}}_{e-1}\) and sets the view to \(\bot \). Otherwise, it sets \(\alpha _J\) to a random bit and continues the internal emulation to completion. It then invokes the distinguisher D on \(\mathcal{A}\)'s internally generated view; denote by b the bit output by D. Finally, \(\mathcal{A}'\) outputs \(\alpha _J \oplus b\).

We proceed by analyzing the probability that \(\mathcal{A}'\) correctly guesses c. Conditioned on not aborting and on \(\alpha _J \ne c\), the experiment emulated internally by \(\mathcal{A}'\) is identical to \({\widetilde{H}}_{e}\). Analogously, conditioned on not aborting and on \(\alpha _J = c\), the experiment emulated internally by \(\mathcal{A}'\) is identical to \({\widetilde{H}}_{e-1}\), where the probability that \(\alpha _J = c\) (resp., \(\alpha _J \ne c\)) is \(\frac{1}{2}\). Therefore,

$$\begin{aligned} \Pr [\ \mathcal{A}'&\text{ guesses } c \text{ correctly } ] \\&= \frac{1}{2}\Pr [D({\widetilde{\mathsf{hyb}}}_e(n)) \oplus \alpha _J = c~|\ \alpha _J \ne c\ ]\\&\ \quad + \frac{1}{2}\Pr [D({\widetilde{\mathsf{hyb}}}_{e-1}(n)) \oplus \alpha _J = c~|\ \alpha _J = c\ ]\\&= \frac{1}{2}\Pr [D({\widetilde{\mathsf{hyb}}}_e(n)) =1] + \frac{1}{2}\Pr [D({\widetilde{\mathsf{hyb}}}_{e-1}(n))=0]\\&= \frac{1}{2} + \frac{1}{2}\left( \Pr [D({\widetilde{\mathsf{hyb}}}_e(n)) =1] - \Pr [D({\widetilde{\mathsf{hyb}}}_{e-1}(n))=1]\right) \\&\ge \frac{1}{2} + \frac{1}{2}\left( \frac{1}{q(n)}\right) . \end{aligned}$$

Thus, we arrive at a contradiction. \(\square \)

Note that the only difference between the real execution and hybrid 1 is in the way that the receiver outputs \(v_u\). Specifically, in hybrid 1 simulator \({\mathcal{S}}_1\) extracts \(v_0,v_1\) and then outputs \(v_u\), while in the real execution the receiver outputs the value that corresponds to its strategy in Stage 5. We now prove that the receiver's outputs in the two experiments are statistically close. In more detail, the difference between the simulator's strategy and the honest receiver's strategy is that the simulator extracts both of the sender's inputs for all \(i \in \Delta -\Phi \) and then finds code words that are 16n-close to the extracted values, whereas the honest receiver finds a code word that is 17n-close based on the inputs it received in Stages 2 and 5, and returns it.

Observe that the sender's views in hybrid 1 and the real execution are identical. It therefore suffices to show that the value \(v_u\) extracted by the simulator and fed to \(\mathcal{F}_{\scriptscriptstyle \mathrm {OT}}\) is identical to the reconstructed output of the honest receiver. Let v denote the value the honest receiver outputs and \(v_u\) denote the value extracted by the simulator. These values are obtained in two steps:

  • The honest receiver obtains shares of v by computing \(\tilde{\rho }_i = \beta _i^u \oplus \tilde{s}_i\) for \(i \in \Delta \), where \(\tilde{s}_i\) is its output from the ith OT session in Stage 2. The simulator, on the other hand, computes \(\overline{\rho }^u_i = \beta _i^u \oplus s_i^{u \oplus \alpha _i}\), where the \(s_i^b\)'s are the inputs that the simulator extracted in Stage 1. (Note that these are the inputs that the sender was supposed to use in the OT sessions.)

  • Next, the closest code word is computed from the shares. The honest receiver picks the code word \(\tilde{w}\) that is 17n-close to \((\tilde{\rho }_i)_{i \in \Delta }\). The simulator, on the other hand, picks a code word \(\overline{w}^u\) that is 16n-close to \((\overline{\rho }^u_i)_{i \in \Delta }\).

We now show that \(v \ne v_u\) holds only with negligible probability, due to the final cut-and-choose stage. We consider two cases:

  • Case 1: The honest receiver extracts a valid v from \((\tilde{\rho }_i)_{i \in \Delta }\): In this case, we know that there is a code word w that is 17n-close to \((\tilde{\rho }_i)_{i \in \Delta }\). Now, for every \(i \in \Delta -\Phi \), we have that \(\tilde{\rho }_i = \overline{\rho }^u_i\), since the sender proceeded honestly in those sessions. Following the same proof as in Claim 3.6, we can show that \(|\Phi | \ge n\) holds only with negligible probability. Therefore, \(|\Delta -\Phi | \ge 17n\), and \(\tilde{\rho }_i\) and \(\overline{\rho }^u_i\) agree on at least 17n locations in \(\Delta -\Phi \). Now, since w is 17n-close to \((\tilde{\rho }_i)_{i \in \Delta }\), it follows that w is 16n-close to \((\overline{\rho }^u_i)_{i \in [20n]}\) (because \(|\Delta | = 18n\)). Therefore, the simulator recovers the same code word and extracts the same value.

  • Case 2: The honest receiver does not extract a valid v from \((\tilde{\rho }_i)_{i \in \Delta }\): This happens when \((\tilde{\rho }_i)_{i \in \Delta }\) is not 17n-close to any code word. In this case, the receiver uses a default value for v. We need to show that in this case the simulator also sets \(v_u\) to a default value. Suppose that there exists w that is 16n-close to \((\overline{\rho }^u_i)_{i \in [20n]}\). We argue that the simulator still sets \(v_u\) to a default value.

    Let \(\psi \) be the set of locations where w and \((\tilde{\rho }_i)_{i \in \Delta }\) differ. By our hypothesis that \((\tilde{\rho }_i)_{i \in \Delta }\) is not 17n-close to any code word, we have that \(|\psi | > n\). Nevertheless, since \(\Gamma \) is a randomly chosen subset of size n, and based on the proof of Claim 3.6, we can show that \(\psi \cap \Gamma \ne \emptyset \) except with negligible probability. In this case, there exists an index \(j \in \psi \cap \Gamma \) such that the sender must reveal values \(s_j^0,s_j^1\) that are consistent with the OT protocol in session j of Stage 2. Therefore, for such a \(j \in \psi \cap \Gamma \) it holds that

    $$\begin{aligned} \tilde{\rho }_j = \beta _j^u \oplus \tilde{s}_j = \beta _j^u \oplus s^{u \oplus \alpha _j}_j \ne w_j. \end{aligned}$$

    This implies that the simulator notices that \(\beta _j^u \oplus s^{u \oplus \alpha _j}_j \ne w_j\). In this case, the sender fails the second consistency check and the simulator records the default value for \(v_u\).
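The probability that the random check set \(\Gamma \) misses all of \(\psi \) admits a direct hypergeometric calculation; the following sketch (our helper, not part of the proof) computes it:

```python
from math import comb

def miss_probability(total, bad, sample):
    """Pr[a uniform `sample`-subset of `total` coordinates avoids all
    `bad` coordinates] = C(total - bad, sample) / C(total, sample)."""
    return comb(total - bad, sample) / comb(total, sample)
```

For \(|\psi | > n\) and \(\Gamma \) of size n out of 20n coordinates, this quantity is at most \((1 - 1/20)^n\), i.e., negligible in n.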

This concludes the proof of the claim. \(\square \)

4 One-Sided Adaptive UC Secure Computation

In the two-party one-sided adaptive setting, at most one of the parties is adaptively corrupted [30, 38]. In this section, we provide a simple transformation of our static UC protocol from Sect. 3 into a two-party UC protocol that is secure against one-sided adaptive corruption. Our first observation is that in Protocol 1 the parties use their real inputs to the OT protocol only in Phase 4. Therefore, simulation of the first three phases can be easily carried out by simply following the honest strategy. On the other hand, simulating the messages in Phase 4 requires some form of equivocation, since if corruption takes place after this phase is concluded then the simulator needs to explain this message with respect to the real input of the corrupted party. It is important to note that while in the plain model any statically secure protocol can be compiled into a one-sided secure protocol by encrypting its entire communication using non-committing encryption (NCE) [5, 9, 17], the same transformation does not hold in the UC setting due to the additional setup, e.g., a CRS, which may depend on the identity of the corrupted party. Nevertheless, in Phase 4 the parties only run a combiner, whose computation does not involve any usage of the CRS (the CRS is needed only for the extractable commitment). Therefore, the proof follows directly.

Our second observation is that in the context of one-sided adaptive security, it is sufficient to rely on a weaker variant of NCE, namely one that is secure against only a single adaptive corruption [30, 38]. In particular, we take advantage of a construction presented in [9], and later refined in [17], that achieves equivocation with respect to only one party under the assumption of semi-honest OT with receiver equivocation (namely, OT in which the receiver's messages can be explained with respect to both potential inputs \(u=0\) and \(u=1\) and some random string). We briefly describe it now. Recall that in the fully adaptive case, the high-level idea is for the sender and receiver to mutually agree on a random bit. This process requires simulatable PKE schemes, which imply the ability to obliviously sample a public key without knowledge of the secret key, as well as to obliviously sample a ciphertext without knowledge of the corresponding plaintext. In the simpler one-sided scenario, Canetti et al. [9] observed that an oblivious transfer protocol can replace the oblivious generation of the public key. Specifically, the NCE receiver sends two public keys to the sender, and then the parties invoke an OT protocol where the NCE receiver plays the role of the OT sender and enters the corresponding secret keys. To allow equivocation for the NCE sender, the OT must enable equivocation with respect to the OT receiver. The OT protocol of [22] is an example of such a protocol: here, the OT receiver can pick the two ciphertexts so that it knows both plaintexts, and equivocation is carried out by declaring that the corresponding ciphertext was obliviously sampled.

The advantage of this approach is that it removes the requirement of generating the public key obliviously, as the randomness for its generation is now split between the parties, of whom only one is corrupted anyway. This implies that the simulator can equivocate the outcome of the protocol execution without giving the adversary the ability to verify it. To conclude, it is possible to strengthen the security of Protocol 1 to the one-sided setting by simply encrypting the communication within the combiner phase using one-sided NCE, which in turn can be constructed based on PKE with oblivious ciphertext generation. This implies the following theorem, which further implies black-box one-sided UC secure computation from enhanced trapdoor permutations.

Theorem 4.1

Assume the existence of PKE with oblivious ciphertext generation. Then for any two-party well-formed functionality \(\mathcal{F}\), there exists a protocol that UC realizes \(\mathcal{F}\) in the presence of one-sided adaptive, malicious adversaries in the CRS model using black-box access to the PKE.

5 Adaptive UC Secure Computation

In this section, we demonstrate the feasibility of UC commitment schemes based on PKE with oblivious ciphertext generation (namely, where it is possible to obliviously sample a ciphertext without knowing the plaintext). Our construction is secure even in the presence of adaptive corruptions and is the first to achieve the stronger notion of adaptive security based on this hardness assumption. As stated in the introduction, plugging our UC commitment protocol into the transformation of [6], which generates adaptive malicious oblivious transfer given adaptive semi-honest oblivious transfer and UC commitments, implies malicious adaptive UC oblivious transfer based on semi-honest adaptive oblivious transfer and PKE with oblivious ciphertext generation, using only black-box access to these underlying primitives. Stated formally,

Theorem 5.1

Assume the existence of adaptive semi-honest oblivious transfer and PKE with oblivious ciphertext generation. Then for any multi-party well-formed functionality \(\mathcal{F}\), there exists a protocol that UC realizes \(\mathcal{F}\) in the presence of adaptive, malicious adversaries in the CRS model using black-box access to the oblivious transfer protocol and the PKE.

Noting that simulatable PKE implies both semi-honest adaptive OT [5, 12] and PKE with oblivious ciphertext generation (simulatable PKE supports oblivious sampling of both public keys and ciphertexts), we derive the following corollary,

Corollary 5.2

Assume the existence of simulatable PKE. Then for any multi-party well-formed functionality \(\mathcal{F}\), there exists a protocol that UC realizes \(\mathcal{F}\) in the presence of adaptive, malicious adversaries in the CRS model using black-box access to the simulatable PKE.

This in particular improves the result from [16], which relies on simulatable PKE in a non-black-box manner. Note also that our UC commitment can be constructed from a weaker notion than simulatable PKE, in which the inverting algorithms may require a trapdoor. This notion is denoted trapdoor simulatable PKE [5] and can additionally be realized based on the hardness of factoring Blum integers. This assumption, however, requires that we modify our commitment scheme so that the CRS includes \(3n+1\) public keys of the underlying PKE instead of just one, as otherwise the reduction to the security of the PKE does not follow for multiple ciphertexts. Specifically, at the cost of a linear blowup (in the security parameter) of the CRS, we obtain adaptively secure UC commitments under a weaker assumption. Now, since trapdoor simulatable PKE implies adaptive semi-honest OT [5], the following holds,

Corollary 5.3

Assume the existence of trapdoor simulatable PKE. Then for any multi-party well-formed functionality \(\mathcal{F}\), there exists a protocol that UC realizes \(\mathcal{F}\) in the presence of adaptive, malicious adversaries in the CRS model using black-box access to the trapdoor simulatable PKE.

Note that, since the best-known general assumption for realizing adaptive semi-honest OT is trapdoor simulatable PKE, this corollary gives evidence that the assumptions for adaptive semi-honest OT are sufficient for adaptive UC security, and it makes a step toward identifying the minimal assumptions for achieving UC security in the adaptive setting. To conclude, we note that enhanced trapdoor permutations, which imply PKE with oblivious ciphertext generation, imply the following theorem,

Theorem 5.4

Assume the existence of enhanced trapdoor permutation. Then \(\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}\) (cf. Fig. 2) can be UC realized in the CRS model in the presence of adaptive malicious adversaries.

5.1 UC Commitments from PKE with Oblivious Ciphertext Generation

In this section, we demonstrate the feasibility of adaptively secure UC commitments for the message space \(m\in \{0,1\}\) from any public-key encryption scheme \(\Pi =(\mathsf {Gen},\mathsf {Enc},\mathsf {Dec},\widetilde{\mathsf {Enc}}, \widetilde{\mathsf {Enc}}^{-1})\) with oblivious ciphertext generation (cf. Definition 2.4) in the common reference string (CRS) model.

Protocol Overview At a high level, the CRS contains two public keys of the encryption scheme, one used by the sender and the other by the receiver. The protocol proceeds in two phases: an input encoding phase, where the sender encodes its input via an \((n+1)\)-out-of-\((3n+1)\) Shamir secret sharing scheme and commits to the shares in a specific way, followed by a cut-and-choose phase where the receiver asks the sender to reveal n of the shares. In slightly more detail, in the input encoding phase, the sender encodes its message m via an n-degree polynomial \(p(\cdot )\) such that \(p(0)=m\) and commits to \(p(1),\ldots ,p(3n+1)\) as follows: For each i, it sends two strings: one is a ciphertext containing an encryption of p(i) under the public key in the CRS meant for the sender, and the other is a random string. Furthermore, the sender randomly decides which of the two strings is the encryption of p(i). In the cut-and-choose phase, the parties engage in a coin toss where the receiver first encrypts its share for the coin-tossing using the receiver public key from the CRS, followed by the sender providing its share of the coin-tossing. Then the receiver opens its share, and the result of the coin-tossing, determined by the XOR of the shares, fixes a subset \(\Gamma \) of \([3n+1]\) of size n. The sender reveals p(i) for every \(i \in \Gamma \), together with the randomness used for generating the corresponding ciphertext. In the decommitment phase, the sender reveals the entire randomness used for the encoding phase.
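The encoding-phase arithmetic is standard Shamir secret sharing; the following is a minimal sketch of the \((n+1)\)-out-of-\((3n+1)\) share generation and reconstruction, with the encryption layer omitted (the prime modulus and helper names are illustrative choices of ours):

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus for the share arithmetic

def share(m, n):
    """Sample a degree-n polynomial p with p(0) = m and return the
    shares p(1), ..., p(3n+1) as a dict {i: p(i)}."""
    coeffs = [m % P] + [secrets.randbelow(P) for _ in range(n)]
    def p(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return {i: p(i) for i in range(1, 3 * n + 2)}

def reconstruct(points):
    """Lagrange interpolation at 0 from any n+1 shares {i: p(i)}."""
    value = 0
    for i, y in points.items():
        num, den = 1, 1
        for j in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        value = (value + y * num * pow(den, P - 2, P)) % P
    return value
```

Any n+1 of the 3n+1 shares determine the message, while n shares (the ones opened in the cut-and-choose) reveal nothing about it.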

Straight-line equivocation is achieved by considering encodings of both 0 and 1, via polynomials \(p(\cdot )\) and \(q(\cdot )\), such that they agree on n points randomly chosen from \(\{1,\ldots ,3n+1\}\); call this set \(\Gamma ^*\). Then, for each i, the simulator encodes by letting one ciphertext encrypt the value p(i) and the other q(i). Finally, the simulator biases the coin-tossing so that its outcome is the set \(\Gamma ^*\). This can be achieved in a straight-line manner, as the simulator possesses the secret key corresponding to the receiver public key in the CRS. Since the receiver encrypts its share first in the coin-tossing, the simulator can extract this value and choose the sender's share so as to bias the outcome. Finally, since p and q agree on the set \(\Gamma ^*\), in the decommitment phase the sender is able to open either \(p(\cdot )\) or \(q(\cdot )\), depending on what the message is. Recall that we require one of the two strings in each coordinate to be random; this can be faked because the encryption scheme has pseudorandom ciphertexts.
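The simulator's two polynomials can be built by Lagrange interpolation through the shared points; a minimal sketch follows (the field modulus and function names are illustrative, not from the paper):

```python
import secrets

def equivocal_pair(n, gamma_star, p=2**61 - 1):
    """Return two degree-n polynomials over GF(p) that agree on the n
    points in gamma_star, with p0(0) = 0 and p1(0) = 1."""
    assert len(gamma_star) == n and 0 not in gamma_star
    common = {i: secrets.randbelow(p) for i in gamma_star}

    def interpolate(points):
        # Evaluate the unique degree-n polynomial through `points`
        # (n+1 of them) via Lagrange interpolation.
        def f(x):
            acc = 0
            for i, y in points.items():
                num, den = 1, 1
                for j in points:
                    if j != i:
                        num = num * ((x - j) % p) % p
                        den = den * ((i - j) % p) % p
                acc = (acc + y * num * pow(den, p - 2, p)) % p
            return acc
        return f

    return interpolate({0: 0, **common}), interpolate({0: 1, **common})
```

Since two distinct degree-n polynomials can agree on at most n points, the pair necessarily differs outside \(\Gamma ^*\), which is exactly why the coin-toss outcome must be biased to \(\Gamma ^*\) for equivocation to succeed.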

Straight-line extraction, on the other hand, requires an information-theoretic lemma which states that, after the encoding phase, there exists a unique set \(\Gamma ^*\) that the sender needs as the outcome of the coin-tossing in the cut-and-choose phase in order to equivocate. First, using the semantic security of the receiver public key, we show that the probability that the sender can bias the coin-tossing is negligible. Then we show that the simulator can extract the message of the sender by using the secret key corresponding to the sender public key used in the encoding phase. This is accomplished by using the n values that have been revealed and finding a polynomial consistent with the remaining shares. (See [31] for more details.)

Finally, we remark that security against adaptive corruptions essentially follows from the pseudorandomness of the ciphertexts.

Our complete construction is shown in Fig. 4. Next, we prove the following theorem.

Fig. 4: UC adaptively secure commitment scheme

Theorem 5.5

Assume that \(\Pi =(\mathsf {Gen},\mathsf {Enc},\mathsf {Dec},\widetilde{\mathsf {Enc}}, \widetilde{\mathsf {Enc}}^{-1})\) is a PKE with oblivious ciphertext generation. Then protocol \(\pi _{\scriptscriptstyle \mathrm {COM}}\) (cf. Fig. 4) UC realizes \(\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}\) in the CRS model in the presence of adaptive malicious adversaries.

Proof Overview Intuitively, security requires proving both hiding and binding in the presence of static and adaptive corruptions. The hiding property follows from the IND-CPA security of the encryption scheme, combined with the fact that the receiver only sees n shares in an \((n+1)\)-out-of-\((3n+1)\) secret sharing of the message in the commit phase. On the other hand, proving binding is much more challenging and reduces to the fact that a corrupted sender cannot successfully predict exactly the n indices from \(\{1,\ldots ,3n+1\}\) that will be chosen in the coin-tossing protocol. In fact, if it could identify these n indices, the adversary could break binding: it could create two different polynomials that intersect on these n points, yet encode two different messages. An important information-theoretic argument that we prove here is that, for a fixed encoding phase, no adversary can equivocate on two continuations of the encoding phase with different outcomes of the coin-tossing phase. Said differently, for any given encoding phase there is exactly one outcome of the coin-tossing phase that allows equivocation. Given this claim, binding now follows from the IND-CPA security of the encryption scheme used in the coin-tossing phase.

In addition, recall that in the UC setting the scheme must also support a simulation that allows straight-line extraction and equivocation. At a high level, the simulator sets the CRS to public keys for which it knows the corresponding secret keys. This allows the simulator to extract all the values encrypted by the adversary. We observe that the simulator can fix the outcome of the coin-tossing phase to any n indices of its choice by extracting the random string \(\sigma _0\) encrypted by the receiver and choosing a random string \(\sigma _1\) so that \(\sigma _0 \oplus \sigma _1\) is a particular string. Next, the simulator generates secret sharings of both 0 and 1 that overlap on a particular set of n shares. To commit, the simulator encrypts the n common shares within the n indices to be revealed (which it knows in advance), and for the rest of the indices, it encrypts two shares: one that corresponds to the sharing of 0 and the other that corresponds to the sharing of 1. Finally, in the decommit phase, the simulator reveals the shares that correspond to the real message m and exploits the invertible sampling algorithm to claim that the other ciphertexts were obliviously generated.
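The coin-toss biasing step in the simulation above is a one-line XOR computation; a sketch (the helper name is ours):

```python
def biased_share(sigma0: bytes, target: bytes) -> bytes:
    """Given the receiver's extracted share sigma0 and the desired
    coin-toss outcome, return the sender share sigma1 satisfying
    sigma0 XOR sigma1 = target."""
    assert len(sigma0) == len(target)
    return bytes(x ^ y for x, y in zip(sigma0, target))
```

Because \(\sigma _1\) is the XOR of a fixed target with a uniformly distributed \(\sigma _0\), it is itself uniformly distributed, so the biased share is indistinguishable from an honestly sampled one.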

5.2 Proof of Theorem 5.5

Let \(\mathcal{A}\) be a malicious probabilistic polynomial-time real adversary running the above protocol in the \(\mathcal{F}_{\scriptscriptstyle \mathrm {CRS}}\)-hybrid model. We construct an ideal model adversary \({\mathcal{S}}\) with access to \(\mathcal{F}_{{\scriptscriptstyle \mathrm {COM}}}\) which simulates a real execution of protocol \(\pi _{\scriptscriptstyle \mathrm {COM}}\) with \(\mathcal{A}\) such that no environment \(\mathcal{Z}\) can distinguish the ideal process with \({\mathcal{S}}\) and \(\mathcal{F}_{{\scriptscriptstyle \mathrm {COM}}}\) from a real execution of \(\pi _{\scriptscriptstyle \mathrm {COM}}\) with \(\mathcal{A}\). \({\mathcal{S}}\) starts by invoking a copy of \(\mathcal{A}\) and running a simulated interaction of \(\mathcal{A}\) with environment \(\mathcal{Z}\), emulating the honest party. We separately describe the actions of \({\mathcal{S}}\) for every corruption case.

  • Initialization: The common reference string (CRS) is chosen by \({\mathcal{S}}\) in the following way. It generates \((\textsc {PK},\textsc {SK})\leftarrow \mathsf {Gen}(1^n)\) and \(({\widetilde{\textsc {PK}}},{\widetilde{\textsc {SK}}})\leftarrow \mathsf {Gen}(1^n)\), and places \((\textsc {PK},{\widetilde{\textsc {PK}}})\) in the CRS. The simulator further records \((\textsc {SK},{\widetilde{\textsc {SK}}})\).

  • Simulating the communication with \(\mathcal{Z}\): Every message that \({\mathcal{S}}\) receives from \(\mathcal{Z}\) is internally fed to \(\mathcal{A}\), and every output written by \(\mathcal{A}\) is relayed back to \(\mathcal{Z}\).

  • Simulating the commitment phase when the receiver is statically corrupted: In this case \({\mathcal{S}}\) proceeds as follows:

    1.

      Encoding phase: Upon receiving message \((sid,\mathrm{Sen},\mathrm{Rec})\) from \(\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}\), the simulator picks a random subset \(S^*\subset [3n+1]\) of size n and two random n-degree polynomials \(p_0(\cdot )\) and \(p_1(\cdot )\) such that:

      $$\begin{aligned} p_0(i)= & {} p_1(i) \ \ \ \ \ \ \ \ \ \ \ \forall i \in S^*\\ p_0(0) = 0&\text{ and }&p_1(0) = 1. \end{aligned}$$

      Note that the simulator can define these polynomials via interpolation, where a unique n-degree polynomial can be constructed given \(n+1\) points. Let \(T^* = [3n+1] - S^*\); then, the simulator defines the commitment as follows:

      • For every \(i \in S^*\), the simulator proceeds as the honest sender would with polynomial \(p_0(\cdot )\). Namely, it first picks \(b_i\leftarrow \{0,1\}\) at random and then sets the following pairs,

        $$\begin{aligned}&\text{ If } b_i = 0 \text{ then } \left. \begin{array}{lll} c_i^{0} &{}=&{}\mathsf {Enc}_\textsc {PK}(p_0(i);t_i) \\ c_i^{1} &{}=&{} r_i \end{array} \right. \text{ else, } \\&\quad \text{ if } b_i = 1 \text{ then } \left. \begin{array}{lll} c_i^{0} &{}= &{} r_i \\ c_i^{1} &{}=&{} \mathsf {Enc}_\textsc {PK}(p_0(i);t_i),\end{array} \right. \end{aligned}$$

        where \(t_i \leftarrow \{0,1\}^n\) and \(r_i\leftarrow \widetilde{\mathsf {Enc}}_\textsc {PK}(\cdot )\) is obliviously sampled (recall that \(p_0(i) = p_1(i)\) for all \(i \in S^*\)).

      • For every \(i \in T^*\), the simulator picks \(b_i\leftarrow \{0,1\}\) at random and then uses the points on both polynomials \(p_0(\cdot )\) and \(p_1(\cdot )\) to calculate the following pairs,

        $$\begin{aligned}&\text{ if } b_i = 0 \text{ then } c_i^{0} = \mathsf {Enc}_\textsc {PK}(p_0(i);t_i) \ \text{ and } \ c_i^{1} = \mathsf {Enc}_\textsc {PK}(p_1(i);\tilde{t}_i); \\&\quad \text{ if } b_i = 1 \text{ then } c_i^{0} = \mathsf {Enc}_\textsc {PK}(p_1(i);\tilde{t}_i) \ \text{ and } \ c_i^{1} = \mathsf {Enc}_\textsc {PK}(p_0(i);t_i), \end{aligned}$$

        where \(t_i,\tilde{t}_i \leftarrow \{0,1\}^n\) are chosen uniformly at random.

      Finally, the simulator sends the pairs \((c_1^0,c_1^1),\ldots ,(c_{3n+1}^0,c_{3n+1}^1)\) to the receiver.

    2.

      Coin-tossing phase: The simulator biases the coin-tossing result so that the set S that is chosen in this phase is identical to \(S^*\). More precisely, the simulator extracts \(\sigma _0\) from the receiver’s ciphertext and then sets \(\sigma _1\) so that \(\sigma =\sigma _0\oplus \sigma _1\) yields the set \(S^*\).

    3.

      Cut-and-choose phase: The simulator opens all the ciphertexts within \(\{c_i^{b_i}\}_{i\in S^*}\), following the honest sender’s strategy.
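The simulator’s commit-phase strategy above (interpolating \(p_0(\cdot ),p_1(\cdot )\) that agree on \(S^*\), and biasing the coin toss via \(\sigma _1 = \sigma _0\oplus \sigma \)) can be sketched in code. This is an illustrative toy over a small prime field: the field, the helper names, and the bit-string encoding of \(S^*\) are our assumptions, and the encryptions themselves are omitted.

```python
import random

P = 2**31 - 1  # toy prime field standing in for the plaintext space (assumption)

def interpolate(points, x):
    """Evaluate at x the unique degree-(len(points)-1) polynomial through
    the distinct points [(x_i, y_i)], working modulo the prime P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^{-1} = den^{P-2}
    return total

def simulate_encoding(n):
    """Pick S* of size n and degree-n polynomials p0, p1 with p0(0) = 0,
    p1(0) = 1 and p0(i) = p1(i) for every i in S*: the n shared points plus
    the constant term give the n+1 points that fix each polynomial."""
    indices = list(range(1, 3 * n + 2))
    s_star = sorted(random.sample(indices, n))
    shared = [(i, random.randrange(P)) for i in s_star]
    p0 = lambda x: interpolate([(0, 0)] + shared, x)
    p1 = lambda x: interpolate([(0, 1)] + shared, x)
    return s_star, p0, p1

def bias_coin_toss(sigma0, target):
    """Having extracted the receiver's share sigma0, choose sigma1 so that
    sigma0 XOR sigma1 equals the string encoding S*."""
    return [a ^ b for a, b in zip(sigma0, target)]
```

Since two distinct degree-n polynomials agree on at most n points, \(p_0\) and \(p_1\) above agree exactly on \(S^*\) and differ at every index of \(T^*\), which is what lets the simulator later open the commitment either way.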

  • Simulating the decommitment phase where the receiver is statically corrupted: Upon receiving a message \((\mathsf{reveal},sid,m)\) from \(\mathcal{F}_{{\scriptscriptstyle \mathrm {COM}}}\), \({\mathcal{S}}\) generates a simulated decommitment message as follows. Recall first that the simulator needs to reveal points on a polynomial \(p(\cdot )\) and pairs \(\{(b_i,t_i)\}_{i \in [3n+1]}\) such that \(p(0)=m\) and \( c_i^{b_i} = \mathsf {Enc}_\textsc {PK}(p(i);t_i). \) Let \(\hat{b}_i = b_i\oplus m\) for all \(i \in T^*\), then \({\mathcal{S}}\) reveals \(p_m(\cdot )\), \(\{\hat{b}_i, t_i^{\hat{b}_i},r_i = \widetilde{\mathsf {Enc}}^{-1}_\textsc {PK}(c_i^{1-\hat{b}_i})\}_{i\in T^*}\).

Next, we prove that \(\mathcal{Z}\) cannot distinguish an interaction of protocol \(\pi _{\scriptscriptstyle \mathrm {COM}}\) with \(\mathcal{A}\), corrupting the receiver, from an interaction of \({\mathcal{S}}\) with \(\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}\). Formally,

Claim 5.1

The following two distribution ensembles are computationally indistinguishable,

$$\begin{aligned} \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {CRS}}}_{\pi _{\scriptscriptstyle \mathrm {COM}}, \mathcal{A}, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}} {\mathop {\approx }\limits ^\mathrm{c}}\big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}. \end{aligned}$$

Proof

We prove this claim using a sequence of hybrid games.

Hybrid 0::

This is the real interaction of \(\mathcal{Z}\) with \(\mathcal{A}\) and Protocol \(\pi _{\scriptscriptstyle \mathrm {COM}}\).

Hybrid 1::

In this experiment, we define a simulator \({\mathcal{S}}_1\) that proceeds as follows. \({\mathcal{S}}_1\) uses \({\mathcal{S}}\)’s strategy in the coin-tossing phase when simulating the corrupted receiver. Specifically, \({\mathcal{S}}_1\) emulates \(\mathcal{F}_{\scriptscriptstyle \mathrm {CRS}}\) and generates \((\textsc {PK},\textsc {SK})\) and \(({\widetilde{\textsc {PK}}},{\widetilde{\textsc {SK}}})\) as in the simulation. Next, at the beginning of the commit phase, it picks a random subset \(S^*\) toward which it wishes to bias the outcome of the coin-tossing phase. It then extracts the value \(\sigma _0\) encrypted by the receiver in the coin-tossing phase using \({\widetilde{\textsc {SK}}}\) and sets \(\sigma _1\) so that \(\sigma _0\oplus \sigma _1\) results in \(S^*\). For the rest of the execution, \({\mathcal{S}}_1\) follows the honest sender’s role with input m. We claim that the adversary’s view in this hybrid game is identically distributed to its view in the prior hybrid. This is because \(S^*\) is chosen uniformly at random and independently of the rest of the execution. Therefore, given any particular \(\sigma _0\) extracted from the adversary’s commitment in the coin-tossing stage, \(\sigma _1\) is uniformly random (which is exactly how it is distributed in hybrid 0). Therefore, we have that the following distributions are identical,

$$\begin{aligned} \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {CRS}}}_{\pi _{\scriptscriptstyle \mathrm {COM}}, \mathcal{A}, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}} \equiv \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}_1, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}. \end{aligned}$$
Hybrid 2::

In this experiment, we define a simulator \({\mathcal{S}}_2\) that is given the sender’s message m, yet carries out \({\mathcal{S}}\)’s strategy in the encoding phase instead of playing the role of the honest sender. More precisely, \({\mathcal{S}}_2\) proceeds identically to \({\mathcal{S}}_1\) with the exception that in the encoding phase, it defines polynomials \(p_m(\cdot )\) and \(p_{1-m}(\cdot )\) exactly as \({\mathcal{S}}\) does in the simulation using the set \(S^*\). Observe first that the outcome of the coin-tossing phase has already been fixed to \(S^*\) in hybrid 1. Moreover, \({\mathcal{S}}_2\) executes the decommitment phase exactly as the honest sender does by providing polynomial \(p_m(\cdot )\). The differences between the receiver’s views in hybrids 1 and 2 thus concern only the non-opened ciphertexts, namely the ciphertexts in positions \(1-b_i\), denoted by \(\{c_i^{1-b_i}\}_{i\in [3n+1]}\), which encode the polynomial \(p_{1-m}(\cdot )\). These ciphertexts are obliviously sampled in hybrid 1, yet computed using algorithm \(\mathsf {Enc}\) in hybrid 2. We now prove that the receiver’s views in these two hybrid executions are computationally indistinguishable, due to the indistinguishability of ciphertexts generated by \(\mathsf {Enc}\) and \(\widetilde{\mathsf {Enc}}\). More precisely, we show the following claim:

Claim 5.2

The following distributions are computationally indistinguishable.

$$\begin{aligned} \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}_1, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}} {\mathop {\approx }\limits ^\mathrm{c}}\big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}_2, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}. \end{aligned}$$

Proof

Assume by contradiction that there exist a PPT adversary \(\mathcal{A}\), a distinguisher D and a polynomial \(p(\cdot )\) such that D distinguishes the two distributions stated in the claim with probability \(\frac{1}{p(n)}\) for infinitely many n’s. Fix an n for which this happens. Then, using \(\mathcal{A}\), we construct an adversary \(\mathcal{A}'\) that violates the indistinguishability of real and obliviously generated ciphertexts (cf. Definition 2.4). Toward this, we consider a sequence of intermediate hybrid games \(H_2^0,\ldots ,H_2^{3n+1}\), where in hybrid \(H_2^j\) we define a simulator \({\mathcal{S}}_2^j\) that proceeds identically to \({\mathcal{S}}_2\) when generating \(\{c_i^{1-b_i}\}_{i \in [j]}\); namely, it picks a polynomial \(p_{1-m}(\cdot )\) and sets \(c_i^{1-b_i} = \mathsf {Enc}_\textsc {PK}(p_{1-m}(i))\) for every \(i \le j\), whereas the ciphertexts \(\{c_i^{1-b_i}\}_{i > j}\) are obliviously generated as in the real sender’s strategy. Note that by our construction, \(H_2^0\) and \(H_2^{3n+1}\) proceed identically to Hybrids 1 and 2, respectively. Denoting the output of the execution in hybrid \(H_2^j\) by \(\mathsf{hyb}_j(n)\) and using a standard hybrid argument, it follows that there exists j such that

$$\begin{aligned} \Big |\Pr [D(\mathsf{hyb}_j(n)) = 1] - \Pr [D(\mathsf{hyb}_{j-1}(n))=1] \Big | \ge \frac{1}{(3n+1)p(n)}. \end{aligned}$$
(6)
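The step from the overall distinguishing gap to Equation 6 is the standard triangle-inequality argument: if the endpoints of a chain of \(3n+2\) hybrid distributions differ by \(\frac{1}{p(n)}\), some adjacent pair must differ by at least \(\frac{1}{(3n+1)p(n)}\). A minimal numeric illustration (the probability values below are made up):

```python
def max_adjacent_gap(values):
    """Largest gap between consecutive entries of a chain of quantities,
    e.g. the acceptance probabilities Pr[D(hyb_j) = 1]."""
    return max(abs(a - b) for a, b in zip(values, values[1:]))

# hypothetical Pr[D(hyb_j) = 1] for a chain of 4 hybrids (3 adjacent pairs)
probs = [0.10, 0.18, 0.20, 0.45]
endpoint_gap = abs(probs[-1] - probs[0])
# the pigeonhole / triangle inequality guarantees some adjacent pair
# carries at least a 1/(number of pairs) fraction of the endpoint gap:
assert max_adjacent_gap(probs) >= endpoint_gap / (len(probs) - 1)
```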

We now construct an adversary \(\mathcal{A}'\) that violates the indistinguishability of obliviously generated ciphertexts and real ciphertexts. Specifically, recall that \(\mathcal{A}'\) needs to distinguish \((\textsc {PK},r_1,c_1)\) from \((\textsc {PK},r'_1,c_2)\), where \(c_1 \leftarrow \widetilde{\mathsf {Enc}}_{\textsc {PK}}(r_1)\) and \(c_2 \leftarrow \mathsf {Enc}_{\textsc {PK}}(m;r_2), r'_1 \leftarrow \widetilde{\mathsf {Enc}}^{-1}_\textsc {PK}(c_2)\). Upon receiving \((\textsc {PK},r,c)\), \(\mathcal{A}'\) proceeds as follows. It first emulates the execution as in hybrid 1 by setting the CRS to be \((\textsc {PK},{\widetilde{\textsc {PK}}})\) for \(({\widetilde{\textsc {PK}}},{\widetilde{\textsc {SK}}})\leftarrow \mathsf {Gen}(1^n)\). It then emulates the internal execution by following the strategy of \({\mathcal{S}}_2^{j-1}\) with the exception that \(c_j^{1-b_j}\) is set to c. Later, when \(\mathcal{A}'\) needs to reveal \(c_j^{1-b_j}\), it returns r as the randomness used to obliviously generate c. Finally, \(\mathcal{A}'\) invokes D on \(\mathcal{A}\)’s view and outputs whatever D outputs. We recall that the ciphertexts in positions \(\{1-b_i\}_{i\in [3n+1]}\) are always revealed as obliviously generated ciphertexts, regardless of the way they were generated. It must also be noted that \(\mathcal{A}'\) does not need to know \(\textsc {SK}\) in order to complete the simulation of the sender’s messages, since it never extracts here. Nevertheless, \(\mathcal{A}'\) does need access to \(\widetilde{\mathsf {Enc}}^{-1}\) in order to generate the randomness of the first \(j-1\) ciphertexts, which by the definition of the encryption scheme requires only \(\textsc {PK}\).
To conclude, the internal emulation of \(\mathcal{A}'\) upon receiving \((\textsc {PK},r_1,c_1)\) with \(c_1 \leftarrow \widetilde{\mathsf {Enc}}_{\textsc {PK}}(r_1)\) is identically distributed to \(H_2^{j-1}\), whereas when \((\textsc {PK},r'_1,c_2)\) is generated so that \(c_2 \leftarrow \mathsf {Enc}_{\textsc {PK}}(p_{1-m}(j);r_2)\) and \(r'_1 \leftarrow \widetilde{\mathsf {Enc}}^{-1}_\textsc {PK}(c_2)\), \(\mathcal{A}\)’s view is distributed identically to \(H_2^j\). Therefore, it follows from Equation 6 that

$$\begin{aligned} \Big |\Pr [\mathcal{A}'(\textsc {PK},r_1,c_1) = 1] - \Pr [\mathcal{A}'(\textsc {PK},r'_1,c_2)=1] \Big | \ge \frac{1}{(3n+1)p(n)}. \end{aligned}$$

This implies a contradiction relative to the indistinguishability property of real and obliviously generated ciphertexts. \(\square \)

Hybrid 3::

This hybrid is the actual simulation with \({\mathcal{S}}\). Namely, here \({\mathcal{S}}_3\) does not have the honest sender’s actual input m, and it computes two polynomials \(p_0(\cdot )\) and \(p_1(\cdot )\) as defined above. Furthermore, \({\mathcal{S}}_3\) reveals one of the polynomials \(p_0(\cdot )\) or \(p_1(\cdot )\) in the decommitment phase, depending on the value of m. Observe that the distributions of the messages sent by \({\mathcal{S}}_2\) and \({\mathcal{S}}_3\) are identical. We use the facts that at most n shares are revealed in the commitment phase and that \(p(\cdot )\) is an n-degree polynomial. Therefore, revealing these n shares keeps \(p_{1-m}(0)\) completely hidden and we have that

$$\begin{aligned} \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}_2, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}\equiv \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}_3, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}. \end{aligned}$$

\(\square \)

  • Simulating the commit phase when the sender is statically corrupted: Simulating the sender involves extracting the committed value as follows:

    1.

      Encoding phase: The simulator proceeds honestly following the honest receiver’s strategy, receiving pairs \((c_i^0,c_i^1)\) for all \(i\in [3n+1]\). The simulator exploits the fact that it knows the secret key \(\textsc {SK}\) and decrypts all ciphertexts. Let \(\beta _i^b = \mathsf {Dec}_\textsc {SK}(c_i^b)\).

    2.

      Coin-tossing phase: The simulator proceeds honestly following the honest receiver’s strategy. Let \(S'\) be the outcome of the coin-tossing phase.

    3.

      Cut-and-choose phase: The simulator proceeds as the honest receiver and verifies whether the openings are consistent with the ciphertexts sent in the encoding phase. Note that none of the revealed values should differ from what the simulator decrypted using \(\textsc {SK}\) due to the fact that \(\Pr [\mathsf {Dec}_\textsc {SK}(\mathsf {Enc}_\textsc {PK}(m)) = m] = 1\).

    4.

      Input extraction: Finally, the simulator extracts the sender’s input as follows. \({\mathcal{S}}\) chooses an arbitrary index \(j \in [3n+1] - S'\) and reconstructs two polynomials \(q(\cdot )\) and \({\widetilde{q}}(\cdot )\) such that

      $$\begin{aligned}&q(i) = {\widetilde{q}}(i) = \beta _i^{b_i} \ \ \ \ \ \ \forall i\in S',\\&q(j) = \beta _j^0, \ \ {\widetilde{q}}(j) = \beta _j^1 \ \ \text{ and } \ \ q(0), {\widetilde{q}}(0) \in \{0,1\}. \end{aligned}$$

      It then verifies whether for all \(i\in [3n+1]\), \(q(i) \in \{\beta _i^0,\beta _i^1\}\) and \({\widetilde{q}}(i) \in \{\beta _i^0,\beta _i^1\}\). The following cases arise:

      • Case 1: Both \(q(\cdot )\) and \({\widetilde{q}}(\cdot )\) satisfy the condition and \({\widetilde{q}}(0) \ne q(0)\). Then \({\mathcal{S}}\) halts returning \({\mathsf {fail}}\). Below we prove that the simulator outputs \({\mathsf {fail}}\) with negligible probability.

      • Case 2: Exactly one of \(q(\cdot )\) and \({\widetilde{q}}(\cdot )\) satisfies the condition, or both satisfy it and \({\widetilde{q}}(0) = q(0)\). Then \({\mathcal{S}}\) sends \((\mathsf{commit},sid,b)\) to the \(\mathcal{F}_{{\scriptscriptstyle \mathrm {COM}}}\) functionality, where b is the constant term of a satisfying polynomial, and stores the committed bit b.

      • Case 3: Neither \(q(\cdot )\) nor \({\widetilde{q}}(\cdot )\) satisfies the condition. \({\mathcal{S}}\) sends a default value to the ideal functionality and need not store the committed bit, since it will never be decommitted correctly.
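The extraction step can be sketched as follows. This is a toy over a small prime field, with \(\mathsf {Dec}\) already applied (the dictionaries `beta0`, `beta1` hold the decrypted values) and with the arbitrary index j taken as the smallest index outside \(S'\); all names are ours, and the check \(q(0),{\widetilde{q}}(0)\in \{0,1\}\) is left out for brevity.

```python
P = 2**31 - 1  # toy prime field (assumption)

def interpolate(points, x):
    """Lagrange interpolation modulo the prime P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def extract(s_prime, b, beta0, beta1):
    """Reconstruct q and q~ from the n opened shares on S' plus the two
    candidate shares at the first index j outside S', then test which of
    them is consistent with every decrypted pair {beta_i^0, beta_i^1}."""
    opened = [(i, (beta0 if b[i] == 0 else beta1)[i]) for i in sorted(s_prime)]
    j = next(i for i in beta0 if i not in s_prime)
    q = lambda x: interpolate(opened + [(j, beta0[j])], x)
    qt = lambda x: interpolate(opened + [(j, beta1[j])], x)
    ok_q = all(q(i) in (beta0[i], beta1[i]) for i in beta0)
    ok_qt = all(qt(i) in (beta0[i], beta1[i]) for i in beta0)
    return q(0), qt(0), ok_q, ok_qt
```

Against an honest sender committing to m, the polynomial reconstructed from the honestly encrypted slots passes the consistency test and its constant term is exactly the extracted bit.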

Claim 5.3

Conditioned on case 1 not occurring, the sender can decommit to b if and only if \({\mathcal{S}}\) sends b to \(\mathcal{F}_{{\scriptscriptstyle \mathrm {COM}}}\).

Proof

By the assumption in the claim, either case 2 or 3 occurs. We now show that if \(\mathcal{A}\) decommits successfully, then it must be either with polynomial \(q(\cdot )\) or \({\widetilde{q}}(\cdot )\) if both satisfy the conditions, or with the single satisfying polynomial. That implies that the adversary can only decommit to whatever was sent by the simulator to \(\mathcal{F}_{{\scriptscriptstyle \mathrm {COM}}}\). We demonstrate our argument for the case that both polynomials satisfy the condition; the case of a single polynomial follows similarly. More formally, suppose that \(q(\cdot )\) and \({\widetilde{q}}(\cdot )\) are as required by the above condition. Then the polynomial \(q^*(\cdot )\) that is revealed by \(\mathcal{A}\) in the decommitment phase must take the same values as \(q(\cdot )\) and \({\widetilde{q}}(\cdot )\) for all \(i \in S'\). Focusing on the jth index specified above, it holds that either \(q^*(j) = q(j)\) or \(q^*(j) = {\widetilde{q}}(j)\) (because \(c_j^0\) and \(c_j^1\) can only be opened to the plaintexts \(q(j)\) and \({\widetilde{q}}(j)\), respectively). This implies that either \(q^*(\cdot )\) and \(q(\cdot )\) share \(n+1\) points or \(q^*(\cdot )\) and \({\widetilde{q}}(\cdot )\) share \(n+1\) points. Consequently, \(q^*(\cdot )\) is identical to either \(q(\cdot )\) or \({\widetilde{q}}(\cdot )\) since it is an n-degree polynomial, and \(\mathcal{A}\) can only decommit to q(0) or \({\widetilde{q}}(0)\). \(\square \)
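The last step of the argument, that an n-degree polynomial sharing \(n+1\) points with \(q(\cdot )\) must equal \(q(\cdot )\) everywhere, is just uniqueness of polynomial interpolation. A quick sanity check over a small prime field (the field size and helper names are our assumptions):

```python
import random

P = 101  # small prime field for the demo (assumption)

def poly_eval(coeffs, x):
    """Horner evaluation of sum(coeffs[k] * x^k) mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def interpolate(points, x):
    """Lagrange interpolation modulo P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

random.seed(3)
n = 4
coeffs = [random.randrange(P) for _ in range(n + 1)]   # a degree-<=n polynomial
points = [(x, poly_eval(coeffs, x)) for x in range(1, n + 2)]  # any n+1 points
# the degree-n polynomial through these n+1 points coincides with the original
# at every point of the field, so agreement on n+1 points forces identity:
assert all(interpolate(points, x) == poly_eval(coeffs, x) for x in range(P))
```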

Claim 5.4

The probability that \({\mathcal{S}}\) outputs \({\mathsf {fail}}\) in case 1 is negligible.

Proof

Assume for contradiction that there exist an adversary \(\mathcal{A}\) and a polynomial \(p(\cdot )\) such that, for infinitely many n’s, \(\mathcal{A}\) generates ciphertexts for the encoding phase for which \({\mathcal{S}}\) obtains valid \(q_0(\cdot )\) and \(q_1(\cdot )\) that both satisfy the conditions at the end of the commit phase with probability at least \(\frac{1}{p(n)}\). Observe that in such a case, the transcript can be equivocated to both 0 and 1 using \(q_0(\cdot )\) and \(q_1(\cdot )\), respectively. We show how to construct an adversary \(\mathcal{B}\) that violates the privacy of the underlying encryption scheme. At a high level, we prove that \(\mathcal{A}\) can successfully equivocate only if it biases the coin-tossing outcome, and this can be achieved only by breaking the privacy of the encryption scheme.

We first consider an alternative simulator \({\widetilde{\mathcal{S}}}\) that proceeds exactly as the real simulator \({\mathcal{S}}\) does, with the exception that it receives as input a public key \(\textsc {PK}^*\) that it internally sets as \({\widetilde{\textsc {PK}}}\) in the CRS. Observe that \({\mathcal{S}}\) does not use \({\widetilde{\textsc {SK}}}\) when simulating the corrupted sender. Hence, the view generated by \({\widetilde{\mathcal{S}}}\) is identical to that generated by \({\mathcal{S}}\). This implies that the transcript obtained in the simulated commit phase of \({\widetilde{\mathcal{S}}}\) can be equivocated with probability \(\frac{1}{p(n)}\), i.e., there are valid decommitments to both 0 and 1 relative to polynomials \(q_0(\cdot )\) and \(q_1(\cdot )\). Then, by a standard averaging argument, for at least a \(\frac{1}{2p(n)}\) fraction of the partial transcripts \(\tau \), a random continuation of \(\tau \) can be equivocated with probability at least \(\frac{1}{2p(n)}\), where \(\tau \) is a partial transcript of protocol \(\pi _{\scriptscriptstyle \mathrm {COM}}\) that ends right after the encoding phase and the probability is taken over the adversary’s and honest receiver’s randomness. Using this observation, we will construct an adversary \(\mathcal{B}\) that wins the IND-CPA game for the scheme \(\Pi \).

Our proof relies heavily on the following claim.

Claim 5.5

Let \(\tau \) be a fixed partial transcript as above. Then there exist no transcripts \({\mathsf {trans}}_1,{\mathsf {trans}}_2\) that satisfy the following conditions:

  1.

    \({\mathsf {trans}}_1\) and \({\mathsf {trans}}_2\) are complete and accepting transcripts of \(\pi _{\scriptscriptstyle \mathrm {COM}}\) with \(\tau \) being their prefix.

  2.

    There exist two distinct sets \(S_1,S_2\) such that \(S_1\) and \(S_2\) are the respective outcomes of the coin-tossing phase within \({\mathsf {trans}}_1\) and \({\mathsf {trans}}_2\).

  3.

    There are valid decommitments to values 0 and 1 in \({\mathsf {trans}}_1\) and \({\mathsf {trans}}_2\).

An important observation that follows from Claim 5.5 is that the sets chosen in the coin-tossing phase must be identical for any two complete, equivocable transcripts \({\mathsf {trans}}_1,{\mathsf {trans}}_2\) that extend a fixed partial transcript \(\tau \). Clearly, given that the receiver’s random string \(\sigma _0\) in the coin-tossing phase is hidden from the sender, the probability that the same set is chosen in two independent coin-tossing executions is exponentially small. On the other hand, with non-negligible probability over partial transcripts \(\tau \), there are decommitments to both 0 and 1, and from Claim 5.5, we know that a successful equivocation implies a fixed joint set \(S^*\) of size n. This intuitively means that the adversary violates the IND-CPA security of the encryption scheme \(\Pi \). Formally, we construct an adversary \(\mathcal{B}\) that internally incorporates \(\mathcal{A}\) and proceeds as in the IND-CPA game:

  1.

    \(\mathcal{B}\) externally receives a public key \(\textsc {PK}^*\). It follows \({\widetilde{\mathcal{S}}}\)’s strategy and sets \({\widetilde{\textsc {PK}}}\) in the CRS as this input.

  2.

    \(\mathcal{B}\) emulates an execution with \(\mathcal{A}\) following \({\widetilde{\mathcal{S}}}\)’s strategy until the completion of the encoding phase. Denote the partial transcript obtained so far by \(\tau \).

  3.

    \(\mathcal{B}\) samples \(M_1= np(n)\) transcripts with prefix \(\tau \) as follows. It invokes \(\mathcal{A}\), \(M_1\) times, each time with independent randomness for \({\widetilde{\mathcal{S}}}\) (which specifically implies independent randomness in the coin-tossing phase). For each such execution, \(\mathcal{B}\) checks whether there are two valid decommitments for 0 and 1. If there exists one such transcript, \(\mathcal{B}\) stores the outcome \(S^*\) of the coin-tossing phase on that transcript. If no such transcript is encountered, \(\mathcal{B}\) outputs a random bit and halts.

  4.

    \(\mathcal{B}\) samples two random strings \(\sigma ^0_0\) and \(\sigma ^1_0\) independently at random from \(\{0,1\}^N\) and outputs these strings. Upon receiving a ciphertext c from its oracle, \(\mathcal{B}\) feeds c internally as the receiver’s message in the coin-tossing phase within the partial transcript \(\tau \). It then invokes \(\mathcal{A}\) on \((\tau ,c)\) and completes the execution as follows. If \(\mathcal{A}\) aborts, then \(\mathcal{B}\) outputs a random bit and halts. Otherwise, let \(\sigma \) be the string revealed by \(\mathcal{A}\) in the coin-tossing phase. If there is no \(b'\in \{0,1\}\) for which \(\sigma \oplus \sigma _0^{b'}\) results in \(S^*\) as the outcome of the coin-tossing, then \(\mathcal{B}\) outputs a random bit. Otherwise, \(\mathcal{B}\) outputs the corresponding \(b'\) and halts. (Note that in any case \(\mathcal{B}\) aborts the execution right before it needs to decrypt c, since it cannot do that.)
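\(\mathcal{B}\)’s final decision rule in Step 4 can be sketched as follows; the bit-string encoding of the coin-tossing outcome and the helper names are our assumptions.

```python
import random

def guess_bit(sigma, sigma0_pair, s_star_encoding):
    """B's decision: if sigma XOR sigma0^{b'} equals the stored outcome S*
    for some b' in {0,1}, output that b'; otherwise output a random bit."""
    for b_prime, sigma0 in enumerate(sigma0_pair):
        if [a ^ b for a, b in zip(sigma, sigma0)] == s_star_encoding:
            return b_prime
    return random.randrange(2)

# toy run: the adversary's reveal is consistent with sigma0^1, so B outputs 1
sigma0_pair = ([0, 0, 1, 1], [1, 0, 1, 0])
s_star = [1, 1, 0, 0]
sigma = [a ^ b for a, b in zip(sigma0_pair[1], s_star)]
assert guess_bit(sigma, sigma0_pair, s_star) == 1
```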

We will now prove that \(\mathcal{B}\) successfully identifies whether c is an encryption of \(\sigma _0^0\) or of \(\sigma _0^1\) with probability non-negligibly greater than \(\frac{1}{2}\). Toward proving this, we consider the following events, conditioned on c being an encryption of \(\sigma _0^{b}\):

\(\mathrm E_1\)::

There are decommitments to 0 and 1 conditioned on transcript \(\tau \) that is generated in Step 2. We already argued above that the probability that \(\mathrm E_1\) occurs is at least \(\frac{1}{2p(n)}\).

\(\mathrm E_2\)::

Here we consider the event that \(\mathcal{B}\) successfully computes \(S^*\) in Step 3. Note first that the probability that a single transcript generated in Step 3 fails to reveal \(S^*\) is at most \(1 - \frac{1}{2p(n)}\). This is due to the fact that the set \(S^*\) can be efficiently extracted whenever there are two valid decommitments. Therefore,

$$\begin{aligned} \Pr [\mathrm{E}_2\ |\ \mathrm{E}_1] \ge 1 - \left( 1 - \frac{1}{2p(n)}\right) ^{np(n)} \ge 1 - e^{-n/2}. \end{aligned}$$
\(\mathrm E_3\)::

Finally, we consider the event that the coin-tossing phase results in \(S^*\). Note that by Claim 5.5, whenever the transcript can be equivocated, the coin-tossing result must be \(S^*\); this implies that

$$\begin{aligned} \Pr [\mathrm E_3\ |\ \mathrm E_2\ \wedge \ \mathrm E_1]&\ge \Pr [ \text{ transcript } \text{ with } \text{ prefix } \tau \ \text{ can } \text{ be } \text{ equivocated } |\ \mathrm E_2\ \wedge \ \mathrm E_1]\\&\ge \frac{1}{2p(n)}. \end{aligned}$$
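The bound used for the sampling step, namely \(\left( 1-\frac{1}{2p(n)}\right) ^{np(n)} \le e^{-n/2}\), follows from the elementary inequality \(1-x \le e^{-x}\). A numeric check over a few hypothetical choices of n and p(n):

```python
import math

def miss_probability(n, p):
    """Probability that all M1 = n*p sampled transcripts fail to produce an
    equivocation, when each fails with probability at most 1 - 1/(2p)."""
    return (1 - 1 / (2 * p)) ** (n * p)

# since 1 - x <= e^{-x}, the miss probability is at most e^{-n/2} for any p
for n in (8, 32, 128):
    for p in (2, 10, 100):
        assert miss_probability(n, p) <= math.exp(-n / 2)
```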

From the definition of the events it follows that if \(\mathrm E_1\wedge \mathrm E_2\wedge \mathrm E_3\) occurs, then \(\mathcal{B}\) wins the IND-CPA game with probability 1. Denote the joint events \(\mathrm E_1\wedge \mathrm E_2\wedge \mathrm E_3\) by \(\text {Comp}\) and note that if \(\text {Comp}\) does not occur then \(\mathcal{B}\)’s guess is correct with probability \(\frac{1}{2}\). Then from the calculation above it holds that

$$\begin{aligned} \Pr [\text {Comp}]&= \Pr [\mathrm E_3\wedge \mathrm E_2\wedge \mathrm E_1] = \Pr [\mathrm E_3\ |\ \mathrm E_2\wedge \mathrm E_1]\cdot \Pr [\mathrm E_2\ |\ \mathrm E_1]\cdot \Pr [\mathrm E_1]\\&\ge \frac{1}{2p(n)}\times (1 - e^{-n/2})\times \frac{1}{2p(n)}\\&\ge \frac{1}{8(p(n))^2}. \end{aligned}$$

Next, we compute the probability that \(\mathcal{B}\) succeeds in its guess.

$$\begin{aligned} \textsc {Adv}_{\Pi ,\mathcal{B}}(n)&= \Pr [\mathcal{B} \text{ succeeds } |\text {Comp}]\cdot \Pr [\text {Comp}] \\&\quad + \Pr [\mathcal{B} \text{ succeeds } |\lnot \text {Comp}]\cdot \Pr [\lnot \text {Comp}]\\&= 1\cdot \Pr [\text {Comp}] + \frac{1}{2}\Pr [\lnot \text {Comp}]\\&= 1\cdot \Pr [\text {Comp}] + \frac{1}{2}(1 - \Pr [ \text {Comp}])\\&= \frac{1}{2} + \frac{1}{2}\Pr [\text {Comp}]\\&\ge \frac{1}{2} + \frac{1}{16(p(n))^2}. \end{aligned}$$

This concludes the proof of Claim 5.4. It remains to prove Claim 5.5.

Proof of Claim 5.5

Assume for contradiction that there exist a partial transcript \(\tau \) of the encoding phase, complete transcripts \({\mathsf {trans}}_1\) and \({\mathsf {trans}}_2\) and sets \(S_1 \ne S_2\) as in Claim 5.5. Let \(\textsc {PK}\) be the public key in the CRS and \(\textsc {SK}\) be the corresponding secret key. We first define some notation.

  • Let \(T_1 = [3n+1] - S_1\) and \(T_2 = [3n+1] - S_2\).

  • Recall that transcripts \({\mathsf {trans}}_1\) and \({\mathsf {trans}}_2\) include valid decommitments to both 0 and 1. Moreover, since the prefix \(\tau \) is common to both transcripts and the decryption is perfect, a ciphertext \(c_i^b\) that is revealed in either \({\mathsf {trans}}_1\) or \({\mathsf {trans}}_2\) can be correctly opened to exactly one plaintext, which is determined by \(\beta _i^b = \mathsf {Dec}_{\textsc {SK}}(c_i^b)\).

  • By the assumption above, transcript \({\mathsf {trans}}_1\) induces two valid decommitments to both 0 and 1. We denote the \(b_i\) values within the decommitment to 0 by \((b_1^0,\ldots ,b_{3n+1}^0)\) and the values within the decommitment to 1 by \((b_1^1,\ldots ,b_{3n+1}^1)\).

  • Similarly, transcript \({\mathsf {trans}}_2\) induces two valid such decommitments. Then, let the \(b_i\) values within the decommitment to 0 be denoted by \(({\widehat{b}}_1^0,\ldots ,{\widehat{b}}_{3n+1}^0)\) and the values within the decommitment to 1 denoted by \(({\widehat{b}}_1^1,\ldots ,{\widehat{b}}_{3n+1}^1)\).

Consider transcript \({\mathsf {trans}}_1\). Then the shares that correspond to the indices in \(S_1\) and are revealed during the commitment phase imply that

$$\begin{aligned} \beta _i^0 = \beta _i^1\ \ \ \ \ \forall \ i\in S_1. \end{aligned}$$

This further implies that

$$\begin{aligned} b_i^0 = b_i^1\ \ \ \ \ \forall \ i\in S_1. \end{aligned}$$
(7)

The rest of the shares are revealed in the decommitment phase. Now, since we use an \((n+1)\)-out-of-\((3n+1)\) secret sharing scheme, the n shares that correspond to the indices in \(S_1\), together with any additional revealed share \(i \in T_1\), constitute \(n+1\) shares from which a unique polynomial can be reconstructed. Specifically, the reconstructed polynomials for decommitting to 0 and to 1 must be different (since the secrets are different), so the revealed plaintexts must also be different for every \(i \in T_1\), i.e.,

$$\begin{aligned} \beta _i^0 \ne \beta _i^1\ \ \ \ \ \forall \ i\in T_1. \end{aligned}$$

This means that, for \(i\in T_1\), the pair \(\{c_i^0,c_i^1\}\) must encrypt both plaintexts \(\beta _i^0\) and \(\beta _i^1\). Hence, \((c_i^0,c_i^1)\) must either be an encryption of \((\beta _i^0,\beta _i^1)\) or of \((\beta _i^1,\beta _i^0)\). In either case, we have that

$$\begin{aligned} b_i^0 = 1 - b_i^1\ \ \ \ \ \forall \ i\in T_1. \end{aligned}$$
(8)

Next, we consider transcript \({\mathsf {trans}}_2\) and recall that it shares the same encoding phase with \({\mathsf {trans}}_1\). It thus must be the case that for every \(i \in [3n+1]\) the revealed shares for both transcripts correspond to either \(\beta _i^0\) or \(\beta _i^1\). From Equation 8 we know that for every \(i\in T_1\), \(b_i^0 \ne b_i^1\). Hence, for every \(i \in T_1\), either \({\widehat{b}}_i^0 = b_i^0\) or \({\widehat{b}}_i^0 = b_i^1\). Relying on the fact that \(|T_1| = 2n+1\) and the pigeonhole principle, there must be \(n+1\) indices in \(T_1\) for which either \({\widehat{b}}_i^0= b_i^0\) for all these indices, or \({\widehat{b}}_i^0= b_i^1\) for all these indices. This implies two cases:

  • \({\widehat{b}}_i^0 = b_i^0\) on \(n+1\) locations: In this case, the revealed shares for these \(n+1\) locations must be the same for both \({\mathsf {trans}}_1\) and \({\mathsf {trans}}_2\) when decommitting to 0. Note that if \(n+1\) shares are identical, then the polynomials revealed in both transcripts must be identical as well, and therefore, the revealed shares for every other index must be identical. We additionally know that the plaintext shares satisfy \(\beta _i^0 \ne \beta _i^1\) for every index \(i \in T_1\). Combining this with the fact that \({\widehat{b}}_i^0\) must correspond to the same share as the one corresponding to \(b_i^0\), it follows that \({\widehat{b}}_i^0 = b_i^0\) for all \(i \in T_1\).

  • \({\widehat{b}}_i^0 = b_i^1\) on \(n+1\) locations: In this case, we conclude, analogously to the previous case, that the polynomial revealed when decommitting to 0 in \({\mathsf {trans}}_2\) must be the same as the one revealed when decommitting to 1 in \({\mathsf {trans}}_1\). However, such a decommitment within \({\mathsf {trans}}_2\) is invalid, because the secret, which is the value of this polynomial at 0, equals 1.

Therefore, we can conclude that it must be the first case, where the revealed polynomials are identical so that \({\widehat{b}}_i^0 = b_i^0\) for all \(i \in T_1\) (and not just for the \(n+1\) locations). Applying the same argument, we can prove that \({\widehat{b}}_i^1 = b_i^1\) for all \(i \in T_1\). Now, since for every \(i \in T_1\) we have that \(b_i^0 = 1- b_i^1\), it follows that:

$$\begin{aligned} {\widehat{b}}_i^0 = 1-{\widehat{b}}_i^1 \ \ \ \forall i \in T_1. \end{aligned}$$

Next, we observe that \(T_1 \cap S_2\) is non-empty, since \(S_1\ne S_2\) and both sets have size n. We conclude with a contradiction by considering \(i^* \in T_1 \cap S_2\). Specifically, for any such \(i^*\), the fact that \(i^* \in T_1\) implies that

$$\begin{aligned} {\widehat{b}}_{i^*}^0 = 1 - {\widehat{b}}_{i^*}^1. \end{aligned}$$

On the other hand, \(i^* \in S_2\) and the values for all the indices in \(S_2\) are already revealed in the commitment phase of \({\mathsf {trans}}_2\); thus, we have that

$$\begin{aligned} {\widehat{b}}_{i^*}^0 = {\widehat{b}}_{i^*}^1 \end{aligned}$$

which is a contradiction. \(\square \)

  • Simulating the decommit phase when the sender is statically corrupted: \({\mathcal{S}}\) first checks whether the decommitted value is the value stored during the commit phase and whether the decommitment is valid. If these two conditions are met, then \({\mathcal{S}}\) sends \((\mathsf{reveal},sid)\) to \(\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}\). Otherwise, \({\mathcal{S}}\) ignores the message. Next, we prove that \(\mathcal{Z}\) cannot distinguish an interaction of protocol \(\pi _{\scriptscriptstyle \mathrm {COM}}\) with \(\mathcal{A}\), corrupting the sender, from an interaction of \({\mathcal{S}}\) with \(\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}\). Formally,

Claim 5.6

The following two distribution ensembles are computationally indistinguishable,

$$\begin{aligned} \big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {CRS}}}_{\pi _{\scriptscriptstyle \mathrm {COM}}, \mathcal{A}, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}} {\mathop {\approx }\limits ^\mathrm{c}}\big \{\text{ View }^{\mathcal{F}_{\scriptscriptstyle \mathrm {COM}}}_{\pi _{{\scriptscriptstyle \mathrm {IDEAL}}},{\mathcal{S}}, \mathcal{Z}}(n)\big \}_{n \in \mathbb {N}}. \end{aligned}$$

Proof

Here we need to argue that the simulator outputs \({\mathsf {fail}}\) with negligible probability and that the receiver outputs the same message in both executions. Recall first that the simulator fails when it computes the polynomials \(q_0\) and \(q_1\) and finds out that both of them satisfy the condition in case 1. In this case, \(\mathcal{A}\) can equivocate the committed message in the decommit phase. We thus prove that the probability that \(\mathcal{A}\) can generate such polynomials is negligible, which implies that the probability that \({\mathcal{S}}\) fails is negligible. More precisely, Claim 5.4 establishes that the probability that \(\mathcal{A}\) breaks the binding property in this way is negligible. A key point in the proof is that a malicious sender cannot bias the coin-tossing outcome (as opposed to the simulator). \(\square \)

  • Simulating the commit phase when the sender is corrupted after the encoding phase: Upon corruption, \({\mathcal{S}}\) receives the sender’s input m. It then reveals the sender’s randomness just as it would do when simulating a decommitment for an uncorrupted sender. Computational indistinguishability follows similarly to the case that the receiver is statically corrupted.

  • Simulating the commit phase when the receiver is corrupted anywhere during the commit phase: Recall that \({\mathcal{S}}\) honestly simulates the receiver messages, and thus, it can simply reveal the randomness of the receiver.