Introduction

Quantum states play an important role in quantum information processing1. Quantum devices that create quantum states are building blocks for quantum technology. Being able to verify these quantum states reliably and efficiently is an essential step towards practical applications of quantum devices2. Typically, a quantum device is designed to output some desired state ρ, but imperfections in the device's construction and noise in its operations may cause the actual output to deviate to some random and unknown states σi. A standard way to distinguish these two cases is quantum-state tomography3,4,5,6,7. However, this method is both time-consuming and computationally challenging8,9. Non-tomographic approaches have also been proposed to accomplish the task10,11,12,13,14,15,16,17, yet these methods make assumptions either on the quantum states or on the available operations. It is then natural to ask whether there exists an efficient non-tomographic approach to accomplish the task.

The answer is affirmative. Quantum-state verification protocols check a device's quality efficiently. Various strategies using local measurements have been explored14,16,18,19. Some earlier works considered the verification of maximally entangled states20,21,22,23. In the context of hypothesis testing, optimal verification of the maximally entangled state was proposed in ref. 20. Under the independent and identically distributed setting, Hayashi et al.23 discussed hypothesis testing of entangled pure states. In a recent work, Pallister et al.24 proposed an optimal strategy to verify non-maximally entangled two-qubit pure states under locally projective and nonadaptive measurements; the locality constraint induces only a constant-factor penalty over nonlocal strategies. Since then, numerous works along this line of research have targeted different states and measurements25,26,27,28,29,30,31. In particular, optimal verification strategies under local operations and classical communication were proposed recently27,28,29, which exhibit better efficiency. We also remark on related works by Dimić and Dakić32 and Saggio et al.33, who developed a generic protocol for efficient entanglement detection using local measurements, with a confidence that grows exponentially with the number of copies of the quantum state.

In this work, we report an experimental two-qubit-state verification procedure using both the optimal nonadaptive strategy (local measurements) and the optimal adaptive strategy (active feed-forward operations) in an optical setup. Compared with previous works that focus merely on minimizing the number of measurement settings34,35,36, we also minimize the number of copies (i.e., coincidence counts (CCs) in our experiment) required to verify the quantum state generated by the quantum device. We perform two tasks: Task A and Task B. With Task A, we obtain a fitted infidelity and the number of copies required to achieve 99% confidence in verifying the quantum state. Task B estimates the confidence parameter δ and the infidelity parameter ϵ vs. the number of copies N. We experimentally compare the δ-N and ϵ-N scalings of the nonadaptive strategy24 and the adaptive strategy27,28,29 applied to two-qubit states. With these methods, we obtain a comprehensive judgment about the quantum state generated by a quantum device. The present experimental and data-analysis workflow may be regarded as a standard procedure for quantum-state verification.

Results

Quantum-state verification

Consider a quantum device \({\cal{D}}\) designed to produce the two-qubit pure state

$$\left| {\Psi} \right\rangle = \sin \theta \left| {HH} \right\rangle + \cos \theta \left| {VV} \right\rangle ,$$
(1)

where θ ∈ [0, π/4]. However, it might work incorrectly and actually output independent two-qubit fake states σ1, σ2, …, σN in N runs. The goal of the verifier is to determine, with a certain confidence, whether the fidelity of these fake states to the target state is above a threshold. We remark that the state with θ = π/4 is the maximally entangled state and the state with θ = 0 is a product state. As special cases of the general state in Eq. (1), all the analysis methods presented in the following can be applied to the verification of the maximally entangled state and the product state. The details of the verification strategies for the maximally entangled state and the product state are given in Supplementary Notes 1.C and 1.D. Previously, theoretical20,23,37 and experimental21 works have studied the verification of the maximally entangled state. Here we focus mainly on the verification of non-maximally entangled states, which are more advantageous than maximally entangled states in certain experiments. For instance, in the context of a loophole-free Bell test, non-maximally entangled states require lower detection efficiency than maximally entangled states38,39,40,41. The details and experimental results for the verification of the maximally entangled state and the product state are shown in Supplementary Notes 2 and 4. To realize the verification of our quantum device, we perform the following two tasks in our experiment (see Fig. 1):

Fig. 1: Illustration of quantum-state verification strategy.

a Consider a quantum device \({\cal{D}}\) designed to produce the two-qubit pure state |ψ〉. However, it might work incorrectly and actually output two-qubit fake states σ1, σ2, …, σN in N runs. For each copy σi, projective measurements {M1, M2, M3, …} are randomly performed by the verifier according to the corresponding probabilities {p1, p2, p3, …}. Each measurement yields a binary outcome: 1 for pass and 0 for fail. The verifier performs two tasks based on these measurement outcomes. b Task A compiles statistics on the number of copies required before the first fail event is found. From these statistics, the verifier obtains the confidence δA that the device outputs the state |ψ〉. c Task B performs a fixed number (N) of measurements and compiles statistics on the number of copies (mpass) passing the test. From these statistics, the verifier can judge with confidence δB1/δB2 that the device belongs to Case 1 or Case 2.

Task A: Performing measurements on the fake states copy-by-copy according to the verification strategy and compiling statistics on the number of copies required before we find the first fail event. The concept of Task A is shown in Fig. 1b.

Task B: Performing a fixed number (N) of measurements according to the verification strategy and compiling statistics on the number of copies that pass the verification tests. The concept of Task B is shown in Fig. 1c.

Task A is based on the assumption that there exists some ϵ > 0 for which the fidelity 〈Ψ|σi|Ψ〉 either equals 1 or satisfies 〈Ψ|σi|Ψ〉 ≤ 1 − ϵ for all i ∈ {1, …, N} (see Fig. 1b). Our task is to determine which is the case for the quantum device. To achieve Task A, we perform binary-outcome measurements from a set of available projectors to test the state. Each binary-outcome measurement {Ml, 1 − Ml} (l = 1, 2, 3, …) is specified by an operator Ml corresponding to passing the test; for simplicity, we use Ml to denote the corresponding binary measurement. Measurement Ml is performed with probability pl. We require that the target state |Ψ〉 always pass the test, i.e., Ml|Ψ〉 = |Ψ〉. In the bad case (〈Ψ|σi|Ψ〉 ≤ 1 − ϵ), the maximal probability that σi can pass a test is given by24,25

$$\mathop{\max}\limits_{\left\langle \Psi \right|\sigma_i\left| \Psi \right\rangle \le 1 - \epsilon} \mathrm{Tr}(\Omega\sigma_i) = 1 - [1 - \lambda_2(\Omega)]\epsilon \equiv 1 - \Delta_\epsilon,$$
(2)

where Ω = \(\mathop {\sum}\nolimits_l {p_l} M_l\) is called a strategy, ∆ϵ is the probability that σi fails a test, and λ2(Ω) is the second largest eigenvalue of Ω. Whenever σi fails a test, we know immediately that the device works incorrectly. After N runs, a σi in the incorrect case passes all the tests with probability at most [1 − [1 − λ2(Ω)]ϵ]N. Hence, to achieve confidence 1 − δ, it suffices to conduct a number of measurements N satisfying24

$$N \ge \frac{\ln \delta}{\ln\left[1 - [1 - \lambda_2(\Omega)]\epsilon\right]} \approx \frac{1}{[1 - \lambda_2(\Omega)]\epsilon}\ln\frac{1}{\delta}.$$
(3)

From Eq. (3), we can see that an optimal strategy is obtained by minimizing the second largest eigenvalue λ2(Ω) over the set of available measurements. Pallister et al.24 proposed an optimal strategy for Task A using only locally projective measurements. As no classical communication is involved, this strategy (hereafter labeled Ωopt) is nonadaptive. Later, Wang et al.27, Yu et al.28, and Li et al.29 independently proposed the optimal strategy using one-way local operations and classical communication (hereafter labeled \({\Omega} _{{\mathrm{opt}}}^ \to\)) for two-qubit pure states. Furthermore, Wang et al.27 also derived the optimal strategy for two-way classical communication. The adaptive strategy allows general measurements based on local operations and classical communication and is shown to be more efficient than strategies based on local measurements alone. Thus, it is important to realize the adaptive strategy in experiment. We refer to Supplementary Notes 1 and 2 for more details on these strategies.
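To make Eq. (3) concrete, the following minimal Python sketch (our own illustration) evaluates the required number of copies for a given λ2(Ω), ϵ, and δ, using the eigenvalue expressions quoted elsewhere in the text: 1 − λ2(Ωopt) = 1/(2 + sin θ cos θ) for the nonadaptive strategy and λ2 = cos²θ/(1 + cos²θ) for the one-way adaptive strategy.

```python
import numpy as np

def copies_required(lambda2, epsilon, delta):
    # Eq. (3): smallest N with [1 - (1 - lambda2) * epsilon]**N <= delta
    return int(np.ceil(np.log(delta) / np.log(1 - (1 - lambda2) * epsilon)))

theta = 0.6283  # the k = 2 state verified later in the text
lam2_non = 1 - 1 / (2 + np.sin(theta) * np.cos(theta))  # nonadaptive optimal strategy
lam2_adp = np.cos(theta)**2 / (1 + np.cos(theta)**2)    # one-way adaptive optimal strategy

for name, lam2 in [("nonadaptive", lam2_non), ("adaptive", lam2_adp)]:
    print(name, copies_required(lam2, epsilon=0.01, delta=0.01))
```

For ϵ = δ = 0.01, this gives about 1138 copies for the nonadaptive strategy and 760 for the adaptive one, illustrating the efficiency gain from classical communication.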

In reality, quantum devices are never perfect. Another practical scenario is therefore to conclude with high confidence that the fidelity of the output states is above or below a certain threshold. Specifically, we want to distinguish the following two cases:

Case 1: \({\cal{D}}\) works correctly, i.e., ∀i, 〈ψ|σi|ψ〉 > 1 − ϵ. In this case, we regard the device as "good".

Case 2: \({\cal{D}}\) works incorrectly, i.e., ∀i, 〈ψ|σi|ψ〉 ≤ 1 − ϵ. In this case, we regard the device as "bad".

We call this Task B (see Fig. 1c). It differs from Task A in that the condition for "\({\cal{D}}\) works correctly" is less restrictive. It turns out that the verification strategies proposed for Task A are readily applicable to Task B. Concretely, we perform the nonadaptive verification strategy Ωopt sequentially in N runs and count the number of passing events mpass. Let Xi be a binary variable corresponding to the event that σi passes the test (Xi = 1) or not (Xi = 0), so that mpass = \(\mathop {\sum}\nolimits_{i = 1}^N {X_i}\). If the device is "good", then from Eq. (2) the passing probability of the generated states is no smaller than 1 − [1 − λ2(Ωopt)]ϵ (see Lemma 3 in Supplementary Note 3.A for the proof). Thus, the expectation of Xi satisfies \({\Bbb E}\)[Xi] ≥ 1 − (1 − λ2(Ωopt))ϵ ≡ µ. The independence assumption together with the law of large numbers then guarantees mpass ≥ Nµ when N is sufficiently large. We follow the statistical analysis methods using the Chernoff bound in the context of state verification28,32,33,42, which are related to the security analysis of quantum key distribution43,44. We then upper bound the probability that the device works incorrectly as

$$\delta \equiv e^{ - N{\mathop{\rm{D}}\nolimits} \left( {\frac{{m_{{\mathrm{pass}}}}}{N}\parallel \mu } \right)},$$
(4)

where \({\mathop{\rm{D}}\nolimits} \left( {x\parallel y} \right): = x\log _2\frac{x}{y} + (1 - x)\log _2\frac{{1 - x}}{{1 - y}}\) is the Kullback–Leibler divergence. That is to say, we can conclude with confidence δB1 = 1 − δ that \({\cal{D}}\) belongs to Case 1. Conversely, if the device is “bad”, then using the same argument we can conclude with confidence δB2 = 1 − δ that \({\cal{D}}\) belongs to Case 2. Please refer to the Supplementary Note 3 for rigorous proofs and arguments on how to evaluate the performance of the quantum device for these two cases.
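As an illustration of Eq. (4), the sketch below (our own; D is implemented with natural logarithms so that the bound takes the form e^{−N·D}, which differs from the base-2 form only through the base of the exponential) evaluates δ from an observed passing fraction. The numbers are purely illustrative.

```python
import numpy as np

def kl_divergence(x, y):
    # Binary relative entropy D(x || y), here in nats
    return x * np.log(x / y) + (1 - x) * np.log((1 - x) / (1 - y))

def confidence_delta(m_pass, N, mu):
    # Eq. (4): bound on the probability that a device on the wrong side
    # of the threshold mu produces the observed passing fraction m_pass / N
    return np.exp(-N * kl_divergence(m_pass / N, mu))

print(confidence_delta(m_pass=5991, N=6000, mu=0.9975))  # illustrative values
```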

To perform Task B with the adaptive strategy \({\Omega} _{{\mathrm{opt}}}^ \to\), we again record the number of passing events mpass = \(\mathop {\sum}\nolimits_{i = 1}^N {X_i}\). If the device is "good", the passing probability of the generated states is no smaller than µs ≡ 1 − [1 − λ4(\({\Omega} _{{\mathrm{opt}}}^ \to\))]ϵ, where λ4(\({\Omega} _{{\mathrm{opt}}}^ \to\)) = sin²θ/(1 + cos²θ) is the smallest eigenvalue of \({\Omega} _{{\mathrm{opt}}}^ \to\), as proved by Lemma 5 in Supplementary Note 3.B. The independence assumption along with the law of large numbers guarantees that mpass ≥ Nµs when N is sufficiently large. On the other hand, if the device is "bad", we can prove that the passing probability of the generated states is no larger than µl ≡ 1 − [1 − λ2(\({\Omega} _{{\mathrm{opt}}}^ \to\))]ϵ, where λ2(\({\Omega} _{{\mathrm{opt}}}^ \to\)) = cos²θ/(1 + cos²θ), by Lemma 4 in Supplementary Note 3.B. Again, the independence assumption and the law of large numbers guarantee that mpass ≤ Nµl when N is large enough. Therefore, we consider two regions for the value of mpass in the adaptive strategy: in the region mpass ≥ Nµl, we conclude with confidence δB1 = 1 − δl that the device belongs to Case 1, whereas in the region mpass ≤ Nµs, we conclude with confidence δB2 = 1 − δs that it belongs to Case 2. The expressions for δl and δs and all the details of applying the adaptive strategy to Task B can be found in Supplementary Note 3.B.
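The two thresholds of the adaptive test follow directly from the eigenvalue expressions above; a minimal sketch with illustrative parameters:

```python
import numpy as np

theta, epsilon = 0.6283, 0.01  # illustrative parameters
lam4 = np.sin(theta)**2 / (1 + np.cos(theta)**2)  # smallest eigenvalue (Lemma 5)
lam2 = np.cos(theta)**2 / (1 + np.cos(theta)**2)  # second largest eigenvalue (Lemma 4)

mu_s = 1 - (1 - lam4) * epsilon  # "good" devices: m_pass / N >= mu_s
mu_l = 1 - (1 - lam2) * epsilon  # "bad" devices:  m_pass / N <= mu_l
print(mu_s, mu_l)  # mu_s < mu_l, leaving a gap between the two decision regions
```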

Experimental setup and verification procedure

Our two-qubit entangled state is generated via type-II spontaneous parametric down-conversion in a 20-mm-long periodically poled potassium titanyl phosphate crystal embedded in a Sagnac interferometer45,46 (see Fig. 2). A continuous-wave external-cavity ultraviolet diode laser at 405 nm is used as the pump. A half-wave plate (HWP1) and a quarter-wave plate (QWP1) transform the linearly polarized light into appropriately elliptically polarized light, providing power balance and phase control of the pump field. With an input pump power of 30 mW, we typically obtain a CC rate of 120 kHz.

Fig. 2: Experimental setup for optimal verification of two-qubit quantum state.

We use a photon-pair source based on a Sagnac interferometer to generate various two-qubit quantum states. QWP1 and HWP1 are used to adjust the relative amplitude of the two counter-propagating pump beams. For the nonadaptive strategy, the measurement is realized with a QWP, an HWP, and a polarizing beam splitter (PBS) at both Alice's and Bob's sites. The adaptive measurement is implemented by real-time feed-forward operation of electro-optic modulators (EOMs), which are triggered by the detection signals recorded with a field-programmable gate array (FPGA). The optical fiber delay is used to compensate for the electronic delay from Alice's single-photon detector (SPD) to the two EOMs. DM: dichroic mirror; dHWP: dual-wavelength half-wave plate; dPBS: dual-wavelength polarizing beam splitter; FPC: fiber polarization controller; HWP: half-wave plate; IF: 3 nm interference filter centered at 810 nm; PBS: polarizing beam splitter; PPKTP: periodically poled KTiOPO4; QWP: quarter-wave plate.

The target state has the following form

$$\left| \psi \right\rangle = \sin \theta \left| {HV} \right\rangle + e^{i\phi }\cos \theta \left| {VH} \right\rangle ,$$
(5)

where θ and ϕ represent the amplitude and phase, respectively. This state is locally equivalent to |Ψ〉 in Eq. (1) via \({\Bbb U} = \left( {\begin{array}{*{20}{c}} 1 & 0 \\ 0 & 1 \end{array}} \right) \otimes \left( {\begin{array}{*{20}{c}} 0 & {e^{i\phi }} \\ 1 & 0 \end{array}} \right)\). By Lemma 1 in Supplementary Note 1, the optimal strategy for verifying |ψ〉 is \({\Omega} _{{\mathrm{opt}}}^\prime = {\Bbb U}{\Omega} _{{\mathrm{opt}}}{\Bbb U}^\dagger\), where Ωopt is the optimal strategy for verifying |Ψ〉 in Eq. (1). In Supplementary Note 2, we explicitly write down the optimal nonadaptive strategy24 and adaptive strategy27,28,29 for verifying |ψ〉.

In our experiment, we implement both nonadaptive and adaptive measurements to realize the verification strategies. There are four settings {P0, P1, P2, P3} for the nonadaptive measurements24, while only three settings {\(\tilde T_0\), \(\tilde T_1\), \(\tilde T_2\)} are required for the adaptive measurements27,28,29. The exact form of these projectors is given in Supplementary Note 2. It is noteworthy that the measurements \(P_0 = \tilde T_0 = \left| H \right\rangle \left\langle H \right| \otimes \left| V \right\rangle \left\langle V \right| + \left| V \right\rangle \left\langle V \right| \otimes \left| H \right\rangle \left\langle H \right|\) are determined by the standard σz basis for both the nonadaptive and adaptive strategies; they are orthogonal and can be realized with a combination of QWP, HWP, and polarizing beam splitter. For the adaptive measurements, the measurement bases \(\tilde v_ + = e^{i\phi }{\mathrm{cos}}\theta \left| H \right\rangle + {\mathrm{sin}}\theta \left| V \right\rangle\)/\(\tilde w_ + = e^{i\phi }{\mathrm{cos}}\theta \left| H \right\rangle - i\,{\mathrm{sin}}\theta \left| V \right\rangle\) and \(\tilde v_ - = e^{i\phi }{\mathrm{cos}}\theta \left| H \right\rangle - {\mathrm{sin}}\theta \left| V \right\rangle\)/\(\tilde w_ - = e^{i\phi }{\mathrm{cos}}\theta \left| H \right\rangle + i\,{\mathrm{sin}}\theta \left| V \right\rangle\) at Bob's site are not orthogonal.

We only implement the one-way adaptive strategy in our experiment; the two-way adaptive strategy is also derived in ref. 27. Compared with the nonadaptive and one-way adaptive strategies, the two-way adaptive strategy improves the verification efficiency by exploiting more classical-communication resources. Its implementation requires the following: first, Alice performs her measurement and sends her results to Bob; then, Bob performs his measurement according to Alice's outcomes; finally, Alice performs another measurement conditioned on Bob's measurement outcomes. This procedure requires real-time communication both from Alice to Bob and from Bob to Alice. Besides, the two-way adaptive strategy requires a quantum nondemolition measurement at Alice's site, which is difficult to implement in the current setup.

To realize the one-way adaptive strategy, we transmit the results of Alice's measurements to Bob through a classical communication channel, implemented by real-time feed-forward operation of the electro-optic modulators (EOMs). As shown in Fig. 2, we trigger two EOMs at Bob's site to realize the adaptive measurements based on the results of Alice's measurement. If Alice's outcome is \(\left| + \right\rangle = \left( {\left| V \right\rangle + \left| H \right\rangle } \right){\mathrm{/}}\sqrt 2\) or \(\left| R \right\rangle = \left( {\left| V \right\rangle + i\left| H \right\rangle } \right){\mathrm{/}}\sqrt 2\), EOM1 implements the required rotation and EOM2 applies the identity operation. Conversely, if Alice's outcome is \(\left| - \right\rangle = \left( {\left| V \right\rangle - \left| H \right\rangle } \right){\mathrm{/}}\sqrt 2\) or \(\left| L \right\rangle = \left( {\left| V \right\rangle - i\left| H \right\rangle } \right){\mathrm{/}}\sqrt 2\), EOM2 implements the required rotation and EOM1 applies the identity operation.
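The feed-forward routing just described reduces to a branch on Alice's heralded outcome. The sketch below is a hypothetical software rendering of the trigger logic that the FPGA implements in hardware; the outcome labels and the function name are ours.

```python
def route_feedforward(alice_outcome):
    """Decide which EOM applies the basis rotation at Bob's site.

    Hypothetical rendering of the FPGA trigger logic: outcomes |+> and |R>
    fire EOM1; outcomes |-> and |L> fire EOM2; the other EOM stays idle.
    """
    if alice_outcome in ("+", "R"):
        return {"EOM1": "rotate", "EOM2": "identity"}
    if alice_outcome in ("-", "L"):
        return {"EOM1": "identity", "EOM2": "rotate"}
    raise ValueError(f"unexpected outcome: {alice_outcome!r}")
```

Our verification procedure is as follows.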

(1) Specification of the quantum device. We adjust HWP1 and QWP1 of our Sagnac source to generate the desired quantum state.

(2) Verification using the optimal strategy. In this stage, we generate many copies of the quantum state sequentially with our Sagnac source. These copies are termed fake states {σi, i = 1, 2, …, N}. Then, we apply the optimal nonadaptive verification strategy to σi. From the parameters θ and ϕ of the target state, we can compute the angles of the wave plates QWP2/HWP2 and QWP3/HWP3 that realize the projectors {P0, P1, P2, P3} required by the nonadaptive strategy. To implement the adaptive strategy, we employ two EOMs to realize the \(\tilde v_ + {\mathrm{/}}\tilde v_ -\) and \(\tilde w_ + {\mathrm{/}}\tilde w_ -\) measurements upon receiving Alice's results (see Supplementary Note 2.B for details). Finally, we obtain the timetag data of the photon detections from the field-programmable gate array and extract individual CCs, each of which is regarded as one copy of our target state. We use the timetag technique to record the channel and arrival time of each detected photon for data processing47; the time is stored in multiples of the internal time resolution (156 ps). The first timetag entry is recorded as the starting time ti0. Moving forward in time, we search for a CC between different channels within a fixed coincidence window (0.4 ns). Once a single CC is obtained, we record the time of the last timetag entry as tf0. Then, we move to the next time slice ti1 − tf1 to search for the next CC. This process is repeated until we find the N-th CC in the time slice tiN−1 − tfN−1 (a simplified sketch of this coincidence search is given after this list). This measurement can be viewed as a single-shot measurement of the bipartite state with post-selection. The time interval of each slice is about 100 µs in our experiment, consistent with 1/CR, where CR is the coincidence rate. By doing so, we can precisely obtain the number of copies N satisfying the verification requirements. We believe this procedure is suitable in the context of verification protocols, because one wants to verify the quantum state with the minimum number of copies.

(3) Data processing. From the measured timetag data, the results for different measurement settings can be obtained. For the nonadaptive strategy, {P0, P1, P2, P3} are chosen randomly with probabilities {µ0, µ1, µ2, µ3} (µ0 = α(θ) and µi = (1 − α(θ))/3 for i = 1, 2, 3), where α(θ) = (2 − sin(2θ))/(4 + sin(2θ)). For the adaptive strategy, the projectors {\(\tilde T_0\), \(\tilde T_1\), \(\tilde T_2\)} are chosen randomly with probabilities {β(θ), (1 − β(θ))/2, (1 − β(θ))/2}, where β(θ) = cos²θ/(1 + cos²θ); a minimal sketch of this random sampling is given after this list. For Task A, we use the CC to decide whether the outcome of each measurement is pass or fail for each σi. The passing probabilities for the nonadaptive strategy can be, respectively, expressed as

    $$\displaystyle\begin{array}{l}P_0:\frac{{CC_{HV} + CC_{VH}}}{{CC_{HH} + CC_{HV} + CC_{VH} + CC_{VV}}},\\ P_i:\frac{{CC_{\tilde u_i\tilde v_i^ \bot } + CC_{\tilde u_i^ \bot \tilde v_i} + CC_{\tilde u_i^ \bot \tilde v_i^ \bot }}}{{CC_{\tilde u_i\tilde v_i} + CC_{\tilde u_i\tilde v_i^ \bot } + CC_{\tilde u_i^ \bot \tilde v_i} + CC_{\tilde u_i^ \bot \tilde v_i^ \bot }}}.\end{array}$$
    (6)

    where i = 1, 2, 3, and \(\tilde u_i{\mathrm{/}}\tilde u_i^ \bot\) and \(\tilde v_i{\mathrm{/}}\tilde v_i^ \bot\) are the orthogonal bases for each photon and their expressions are given in the Supplementary Note 2.A. For P0, if the individual CC is in CCHV or CCVH, it indicates that σi passes the test and we set Xi = 1; otherwise, it fails to pass the test and we set Xi = 0. For Pi, i = 1, 2, 3, if the individual CC is in \({\mathrm{CC}}_{\tilde u_i\tilde v_i^ \bot }\), \({\mathrm{CC}}_{\tilde u_i^ \bot \tilde v_i}\), or \({\mathrm{CC}}_{\tilde u_i^ \bot \tilde v_i^ \bot }\), it indicates that σi passes the test and we set Xi = 1; otherwise, it fails to pass the test and we set Xi = 0. For the adaptive strategy, we set the value of the random variables Xi in a similar way.
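The coincidence extraction in step (2) can be summarized as a scan over the timetag stream. The following is a simplified sketch of our own, assuming the stream is a time-ordered list of (channel, tag) pairs with tags in units of the 156 ps resolution; detector dead time and multi-pair events are ignored.

```python
def find_coincidences(timetags, n_required, window_ps=400, resolution_ps=156):
    """Collect the first n_required cross-channel coincidences.

    timetags: time-ordered iterable of (channel, tag) pairs, with tags in
    units of the internal resolution (156 ps); window_ps is the 0.4 ns window.
    """
    coincidences = []
    last = {}  # most recent tag seen on each channel
    for channel, tag in timetags:
        for other_channel, other_tag in last.items():
            if other_channel != channel and \
                    (tag - other_tag) * resolution_ps <= window_ps:
                coincidences.append((other_tag, tag))
                break
        last[channel] = tag
        if len(coincidences) == n_required:
            break
    return coincidences
```

The random choice of measurement settings in step (3) amounts to sampling from the stated distributions. A minimal sketch, with setting labels of our own:

```python
import numpy as np

def sample_setting(theta, adaptive=False, rng=None):
    """Draw one measurement setting with the probabilities quoted in the text."""
    if rng is None:
        rng = np.random.default_rng()
    if adaptive:
        beta = np.cos(theta)**2 / (1 + np.cos(theta)**2)
        return rng.choice(["T0", "T1", "T2"],
                          p=[beta, (1 - beta) / 2, (1 - beta) / 2])
    alpha = (2 - np.sin(2 * theta)) / (4 + np.sin(2 * theta))
    return rng.choice(["P0", "P1", "P2", "P3"],
                      p=[alpha] + [(1 - alpha) / 3] * 3)
```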

We increase the number of copies N to determine the occurrence of the first failure for Task A and the frequency of passing events for Task B. From these data, we obtain the relationship between the confidence parameter δ, the infidelity parameter ϵ, and the number of copies N. There is a certain probability that the verifier fails in each measurement. In the worst case, the probability that the verifier fails to detect a faulty σi is 1 − ∆ϵ, where ∆ϵ = ϵ/(2 + sin θ cos θ) for the nonadaptive strategy24 and ∆ϵ = ϵ/(2 − sin²θ) for the adaptive strategy27,28,29.

Results and analysis of two-qubit optimal verification

The target state to be verified is the general two-qubit state in Eq. (5), where the parameter θ = kπ/10 and ϕ is optimized with the maximum-likelihood estimation method. In this section, we present the results for the k = 2 state (termed k2; see Supplementary Note 2) as an example. The verification results for other states, such as the maximally entangled state and the product state, are presented in Supplementary Note 4. Our theoretical non-maximally entangled target state is specified by θ = 0.6283 (k = 2). In the experiment, we obtain \(\left| \psi \right\rangle = {\mathrm{0}}{\mathrm{.5987}}\left| {HV} \right\rangle + {\mathrm{0}}{\mathrm{.8010}}e^{{\mathrm{3}}{\mathrm{.2034}}i}\left| {VH} \right\rangle\) (θ = 0.6419, ϕ = 3.2034) as the target state to be verified. To realize the verification strategy, the projective measurements are performed sequentially with randomly chosen projectors. We take 10,000 rounds, each with a fixed number of 6000 copies.

Task A: In this verification task, we perform a statistical analysis of the number of measurements required for the first occurrence of failure. According to the geometric distribution, the probability that the first failure occurs at the n-th measurement is

$${\mathrm{Pr}}(N_{{\mathrm{first}}} = n) = (1 - {\Delta} _\epsilon )^{n - 1} \cdot {\Delta} _\epsilon$$
(7)

where n = 1, 2, 3, …. We then obtain the cumulative probability

$$\delta _{\mathrm{A}} = \mathop {\sum}\limits_{N_{{\mathrm{first}}} = 1}^{n_{{\mathrm{exp}}}} {{\mathrm{Pr}}} (N_{{\mathrm{first}}})$$
(8)

which is the confidence that the device generates the target state |ψ〉. In Fig. 3a, we show the distribution of the number of copies Nfirst required before the first failure for the nonadaptive (Non) strategy. From the figure, we can see that Nfirst obeys a geometric distribution. We fit the distribution with the function in Eq. (7) and obtain an experimental infidelity \({\it{\epsilon }}_{{\mathrm{exp}}}^{{\mathrm{Non}}}\) = 0.0034(15), which is a quantitative estimate of the infidelity of the generated state. From the experimental statistics, we obtain the number \(n_{{\mathrm{exp}}}^{{\mathrm{Non}}}\) = 3283 of copies required to achieve 99% confidence (i.e., 99% cumulative probability for Nfirst ≤ \(n_{{\mathrm{exp}}}^{{\mathrm{Non}}}\)) that the generated states are the target state under the nonadaptive strategy.
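The fit behind these numbers can be reproduced with a least-squares fit of Eq. (7) to the histogram of Nfirst. The sketch below is our own and uses synthetic stand-in data in place of the measured histogram.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import geom

def first_fail_pmf(n, delta_eps):
    # Eq. (7): geometric distribution of the position of the first failure
    return (1 - delta_eps) ** (n - 1) * delta_eps

# Stand-in for the measured histogram of N_first over 10,000 repetitions
n_first = geom.rvs(p=0.0015, size=10_000, random_state=0)
values, counts = np.unique(n_first, return_counts=True)
(delta_eps,), _ = curve_fit(first_fail_pmf, values, counts / counts.sum(), p0=[1e-3])

# Infidelity via Delta_eps = eps / (2 + sin(theta) cos(theta)), nonadaptive case
theta = 0.6419
epsilon = delta_eps * (2 + np.sin(theta) * np.cos(theta))

# Eq. (8): copies needed for the cumulative failure probability to reach 99%
n_99 = int(np.ceil(np.log(0.01) / np.log(1 - delta_eps)))
print(epsilon, n_99)
```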

Fig. 3: The distribution of the number required before the first failure.

a For the nonadaptive strategy. b For the adaptive strategy. From the statistics, we obtain the fitting infidelity of \({\it{\epsilon }}_{{\mathrm{exp}}}^{{\mathrm{Non}}}\) = 0.0034(15) and \({\it{\epsilon }}_{{\mathrm{exp}}}^{{\mathrm{Adp}}}\) = 0.0121(6). The numbers required to achieve a 99% confidence are \(n_{{\mathrm{exp}}}^{{\mathrm{Non}}}\) = 3283 and \(n_{{\mathrm{exp}}}^{{\mathrm{Adp}}}\) = 536, respectively.

The results for the adaptive (Adp) verification of Task A are shown in Fig. 3b. The experimental fitted infidelity for this distribution is \({\it{\epsilon }}_{{\mathrm{exp}}}^{{\mathrm{Adp}}}\) = 0.0121(6). The number required to achieve the same 99% confidence as the nonadaptive strategy is \(n_{{\mathrm{exp}}}^{{\mathrm{Adp}}}\) = 536. This nearly sixfold difference (i.e., \(n_{{\mathrm{exp}}}^{{\mathrm{Non}}}{\mathrm{/}}n_{{\mathrm{exp}}}^{{\mathrm{Adp}}}\) ≈ 6) in the experimental number required to reach 99% confidence arises partially because the infidelity with the adaptive strategy is approximately four times larger than with the nonadaptive strategy. However, the number of copies required to achieve the same confidence using the adaptive strategy is still about two times smaller than with the nonadaptive strategy even when the infidelity of the generated states is the same (see the analysis in Supplementary Note 5). This indicates that the adaptive strategy requires significantly fewer copies than the nonadaptive one to conclude with 99% confidence that the device outputs the state |ψ〉.

Task B: We emphasize that Task B is considered under the assumption that the quantum device is either in Case 1 or in Case 2, as described above. These two cases are complementary, and the confidence to assert whether the device belongs to Case 1 or Case 2 can be obtained according to different values of mpass. We refer to Supplementary Note 3 for detailed information on judging between these two cases. For each case, we can reduce the parameter δ by increasing the number of copies of the quantum state, thereby obtaining the confidence δB = 1 − δ that the device belongs to Case 1/Case 2. For the nonadaptive strategy, the passing probability mpass/N reaches a stable value of 0.9986 ± 0.0002 after about 1000 copies (see Supplementary Note 6). This value is smaller than the desired passing probability µ when we choose the infidelity ϵmin = 0.001; in this situation, we conclude that the state belongs to Case 2. Conversely, the stable value is larger than the desired passing probability µ when we choose the infidelity ϵmax = 0.006; in this situation, we conclude that the state belongs to Case 1. In Fig. 4, we present the results for the verification of Task B. First, we show the confidence parameter δ vs. the number of copies for the nonadaptive strategy in Fig. 4a, b. With about 6000 copies of the quantum state, δ reaches 0.01 for Case 2. This indicates that the device belongs to Case 1 with probability at most 0.01; in other words, we can say with at least 99% confidence that the device is "bad" after about 6000 measurements. In general, more copies of quantum states are required to reach the same level δ = 0.01 for Case 1, because the passing events mpass are confined to the narrower range from µN to N. From Fig. 4b, we can see that it takes about 17,905 copies of the quantum state to reduce δ below 0.01. At that point, the device belongs to Case 2 with probability at most 0.01; that is, we can say with at least 99% confidence that the device is "good" after about 17,905 measurements.

Fig. 4: Experimental results for the verification of Task B.

a, b Nonadaptive strategy. The confidence parameter δ decreases as the number of copies increases. After about 6000 copies, δ goes below 0.01 for Case 2 (see inset of a). For Case 1 (see inset of b), it takes about 17,905 copies to reduce δ below 0.01. c, d Adaptive strategy. The numbers of copies required to reduce δs and δl to 0.01 for the two cases are about 10,429 and 23,645, respectively. In general, fewer copies are needed to verify Case 2, because more room is allowed for the states in the 0−µN region. The blue symbols show the experimental error bars (Exp.), obtained from 100 rounds of measurements for each coincidence number. The insets show log-scale plots, which indicate that δ reaches a value below 0.01 with about thousands to tens of thousands of copies.

Figure 4c, d show the results of the adaptive strategy. For the adaptive strategy, the passing probability mpass/N reaches a stable value of 0.9914 ± 0.0005 (see Supplementary Note 6), which is smaller than for the nonadaptive measurement owing to the limited fidelity of the EOMs' modulation. Correspondingly, the infidelity parameters for the two cases are chosen as ϵmin = 0.008 and ϵmax = 0.017, respectively. We can see from the figure that it takes about 10,429 copies for δs to decrease to 0.01 when choosing ϵmin, which indicates that the device belongs to Case 2 with at least 99% confidence after about 10,429 measurements. On the other hand, about 23,645 copies are needed for δl to decrease to 0.01 when choosing ϵmax, which indicates that the device belongs to Case 1 with at least 99% confidence after about 23,645 measurements. It is noteworthy that the difference between the adaptive and nonadaptive results comes from the different descent speeds of δ vs. the number of copies N, which result from the differences in passing probabilities and infidelity parameters. See Supplementary Note 6 for detailed explanations.

From another perspective, we can fix δ and observe how the parameter ϵ changes as the number of copies increases. Figure 5 presents the variation of ϵ vs. the number of copies on a log–log scale for δ = 0.10. For a small number of copies, the infidelity is large and drops quickly to a low level as the number of copies grows to ~100; the decline slows when the number of copies exceeds 100. It should be noted that ϵ asymptotically tends to a value of 0.0036 (calculated from 1 − ∆ϵ = 0.9986) and 0.012 (calculated from 1 − ∆ϵ = 0.9914) for the nonadaptive and adaptive strategies, respectively; therefore, we remain in the region mpass/N ≥ µ. We can also see that the scaling of ϵ vs. N is linear in the small-copy-number region. We fit the data in the linear region with ϵ ~ Nr and obtain slopes r ≈ −0.88 ± 0.03 for the nonadaptive strategy and r ≈ −0.78 ± 0.07 for the adaptive strategy. This scaling exceeds the standard-quantum-limit scaling ϵ ~ N−0.5 for physical parameter estimation42,48; thus, our method estimates the infidelity parameter ϵ better than classical metrology. It is noteworthy that mpass/N is a good estimate of the state fidelity. If the state fidelity increases, the slope of the linear region will decrease towards the Heisenberg limit ϵ ~ N−1 of quantum metrology (see Supplementary Note 6).
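The quoted slopes follow from a straight-line fit of log ϵ against log N in the linear region. A minimal sketch with stand-in data:

```python
import numpy as np

def scaling_exponent(N, eps):
    # Slope r of log(eps) vs log(N), i.e. the exponent in eps ~ N**r
    r, _intercept = np.polyfit(np.log(N), np.log(eps), 1)
    return r

N = np.logspace(1, 2, 20)        # stand-in for the linear-region copy numbers
eps = 0.5 * N ** -0.88           # stand-in data with the reported nonadaptive slope
print(scaling_exponent(N, eps))  # recovers approximately -0.88
```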

Fig. 5: The variation of infidelity parameter vs. the number of copies.

a Nonadaptive strategy and b adaptive strategy. The data are plotted on a log–log scale. The confidence parameter δ is chosen to be 0.10. The parameter ϵ decays quickly to a low value that asymptotically approaches the infidelity of the generated quantum state, 0.0036 (nonadaptive) and 0.012 (adaptive), as the number of copies increases. The fitted slopes of the linear scaling region are −0.88 ± 0.03 and −0.78 ± 0.07 for the nonadaptive and adaptive strategies, respectively. The blue symbols are the experimental data with error bars (Exp.), obtained from 100 rounds of measurements for each coincidence number.

Comparison with standard quantum-state tomography

The advantage of the optimal verification strategy is that it requires fewer measurement settings and, more importantly, fewer copies to assess the quantum states generated by a quantum device. In standard quantum-state tomography49, the minimum number of settings required for a complete reconstruction of the density matrix is 3^n, where n is the number of qubits. For a two-qubit system, standard tomography thus costs nine settings, whereas the present verification strategy needs only four and three measurement settings for the nonadaptive and adaptive strategies, respectively. To quantitatively compare the verification strategy with standard tomography, we show the scaling of the parameters δ and ϵ vs. the number of copies N in Fig. 6. For each number of copies, the fidelity estimate F ± ∆F can be obtained by standard quantum-state tomography. The δ of standard tomography is calculated as the confidence assuming a normal distribution of the fidelity with mean F and standard deviation ∆F; the ϵ of standard tomography is calculated as ϵ = 1 − F. The results of the verification strategy are taken from the nonadaptive-strategy data in Figs. 4 and 5. For δ vs. N, we fit the curve with δ = e^{g·N}, where g is the scaling of ln δ with N. We obtain gtomo = −6.84 × 10−5 for standard tomography and gverif = −7.35 × 10−4 for the verification strategy. This indicates that the present verification strategy achieves better confidence than standard quantum-state tomography for the same number of copies. For ϵ vs. N, as shown in Fig. 6b, standard tomography eventually reaches a saturation value as the number of copies increases. For the same number of copies N, the verification strategy obtains a smaller ϵ, indicating that it gives a better estimate of the state fidelity than standard quantum-state tomography when only a small number of quantum states are available from a quantum device.
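The decay constants gtomo and gverif are obtained by fitting δ = e^{g·N}, i.e., a straight-line fit of ln δ against N. A minimal sketch with stand-in data:

```python
import numpy as np

def fit_decay_constant(N, delta):
    # delta = exp(g * N)  =>  g is the slope of ln(delta) against N
    g, _intercept = np.polyfit(N, np.log(delta), 1)
    return g

N = np.arange(1000, 20000, 1000)     # stand-in copy numbers
delta = np.exp(-7.35e-4 * N)         # stand-in decay at the reported g_verif
print(fit_decay_constant(N, delta))  # recovers approximately -7.35e-4
```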

Fig. 6: Comparison of standard quantum-state tomography and present verification strategy.

In the figure, we show the variation of a δ and b ϵ vs. the number of copies N using standard quantum-state tomography (tomo) and the present verification strategy (verif). For standard tomography, the fidelity F ± ∆F is first obtained from the density matrix reconstructed at each copy number N. The confidence parameter δ is then estimated by assuming a normal distribution of the fidelity with mean F and standard deviation ∆F; the infidelity parameter is estimated as ϵ = 1 − F. It is noteworthy that the experimental data symbols in a look like lines owing to the dense data points.

Discussion

Our work, including the experiment and the data-processing and analysis framework, can be used as a standardized procedure for verifying quantum states. In Task A, we estimate the infidelity parameter ϵexp of the generated states and the confidence δA that the device produces the target quantum state by detecting a certain number of copies. With the ϵexp obtained from Task A, we can choose the ϵmax or ϵmin that assigns our device to Case 1 or Case 2. Task B is then performed based on the chosen ϵmin and ϵmax. Based on the analysis method of Task B, we can estimate the scaling of the confidence parameter δ vs. the number of copies N; with a chosen δ, we can likewise estimate the scaling of the infidelity parameter ϵ vs. N. With these steps, we obtain a comprehensive judgment about how well our device really works.

In summary, we report experimental demonstrations of the optimal two-qubit pure-state verification strategy with and without adaptive measurements. We give a clear discrimination and comprehensive analysis of the quantum states generated by a quantum device. Two tasks are proposed for practical applications of the verification strategy. The variations of the confidence and infidelity parameters with the number of copies of the generated quantum states are presented. The experimental results are in good agreement with the theoretical predictions. Furthermore, our experimental framework offers a precise estimate of the reliability and stability of quantum devices, enabling it to serve as a standard tool for analyzing quantum devices. Our experimental framework can also be extended to other platforms.