Introduction

The acquisition of a high-resolution image over a large field of view is essential in most optical imaging applications. In this respect, the widespread development of digital cameras comprising millions of pixels has strongly contributed to the creation of imaging systems with large space-bandwidth products. In classical imaging, it is therefore mainly systems with strong spatial constraints, such as lensless on-chip microscopes1,2, that suffer from pixelation and undersampling. However, this effect is very common in quantum imaging schemes, where it severely hinders the progress of these techniques towards practical applications.

Quantum imaging systems harness quantum properties of light and their interaction with the environment to go beyond the limits of classical imaging or to implement unique imaging modalities3. Of the many approaches, imaging schemes based on entangled photon pairs are the most common and among the most promising. Proof-of-principle demonstrations range from improving optical resolution4,5 and imaging sensitivity6,7,8,9 to the creation of new imaging protocols, such as ghost imaging10,11, quantum illumination12,13,14,15 and quantum holography16,17. Contrary to classical imaging, photon-correlation-based imaging systems operate by measuring photon coincidences between many spatial positions of the image plane in parallel (except for induced-coherence imaging approaches18,19,20). In practice, this process is much more delicate than forming an intensity image by photon accumulation and therefore requires specific photodetection devices. Although this task was originally performed with raster-scanning single-photon detectors10, today most implementations use single-photon-sensitive cameras such as electron-multiplying charge-coupled devices (EMCCD)21,22, intensified complementary metal-oxide-semiconductor (iCMOS) cameras23 and single-photon avalanche diode (SPAD) cameras24.

EMCCD cameras are the most widely used devices for imaging photon correlations thanks to their high quantum efficiency, low noise and high pixel resolution. However, these cameras suffer from a very low frame rate (~100 fps), because their electronic amplification process operates in series, and therefore require very long acquisition times (~10 h) to reconstruct correlation images13,14,17,25. Using a smaller sensor area or a binning technique can reduce the acquisition time, but at the cost of a loss in pixel resolution. Because they use an image intensifier, iCMOS cameras avoid such slow electronic amplification processes and can therefore reach higher frame rates (~1000 fps). However, so far these cameras have only enabled correlation images with a relatively small number of pixels (and still several hours of acquisition), mostly because of their low detection efficiency and higher noise level23,26. Finally, SPAD cameras are an emerging technology that can detect single photons at low noise while operating at very high frame rates (up to 800 kfps), although they currently have low quantum efficiency (~20%) and typically low resolution (~1000 pixels).

It is clear from the above that nearly all sensor technologies currently used in quantum imaging experiments suffer from a poor pixel resolution, either directly, when the technology does not offer cameras with enough pixels, or indirectly, when the experiment can only operate in a reasonable time with a small number of pixels. In these systems, objects are therefore often undersampled, resulting in a loss of spatial information and the creation of artefacts in the retrieved images. Here, we demonstrate a quantum image processing technique based on entangled photon pairs that increases the pixel resolution by a factor of two. We experimentally demonstrate it in three common photon-pair-based imaging schemes: two quantum illumination protocols using (i) a near-field illumination configuration with an EMCCD camera13 and (ii) a far-field configuration with a SPAD camera27, and (iii) an entanglement-enabled holography system17. In addition, we use our joint probability distribution (JPD) pixel super-resolution method in (iv) a full-field version of N00N-state quantum holography, a scheme that has so far only been demonstrated using a scanning approach6. Note that we refer to our technique as ’pixel super-resolution’ to avoid confusion with the term ’super-resolution’ describing imaging techniques capable of overcoming the classical diffraction limit.

Results

Experimental demonstration

Figure 1a describes the experimental setup used to demonstrate the principle of our technique for the widely used case of p = 2 spatially entangled photons. The spatially entangled photon pairs are produced by type-I SPDC in a thin β-barium borate (BBO) crystal and illuminate an object, t, using a near-field illumination configuration (i.e. the output surface of the crystal is imaged onto the object). The biphoton correlation width in the camera plane is estimated to be σ ≈ 13 μm. The object is imaged onto an EMCCD camera with a pixel pitch of Δ = 32 μm. In our experiment, t is a horizontal square-modulation amplitude grating. The recorded intensity image in Fig. 1b shows a region of the object composed of 10 grating periods imaged with 25 rows of pixels. The image clearly suffers from aliasing, due to a harmonic above the Nyquist frequency of the detector array, leading to a low-frequency Moiré modulation with a period of approximately 5 pixels. If we can double the sampling frequency so as to super-resolve the image, we expect to capture all the harmonics and hence remove this Moiré pattern.

Fig. 1: Experimental demonstration of JPD pixel super-resolution.
figure 1

a Experimental setup. Light emitted by a diode laser (405 nm) illuminates a β-barium borate (BBO) crystal with a thickness of 0.5 mm to produce spatially entangled pairs of photons by type-I SPDC. Long-pass and band-pass filters at 810 ± 5 nm (BPF) positioned after the crystal remove pump photons. A two-lens system f1 − f2 images the crystal surface onto an object t, which is itself imaged onto the EMCCD camera by another two-lens system f3 − f4. t is a grid-shaped amplitude object. The photon correlation width in the image plane is estimated as σ ≈ 13 μm and the camera pixel pitch is Δ = 32 μm. b Intensity image, (c) JPD diagonal image and (d) sum-coordinate projection of the JPD. e Intensity image obtained using a camera with half the pixel pitch i.e. 16 μm. All images show the same spatial region of the object containing 10 grating periods. Coordinates are in pixels. f Spectra of the intensity image (solid red), diagonal image (dashed black), JPD sum-coordinate image (dashed blue) and intensity image acquired with a 16 μm-pixel-pitch camera (dashed red), obtained by performing a discrete Fourier transform of the corresponding image and averaging over the x-axis. g System modulation transfer function (MTF) obtained using the slanted-edge technique with an intensity image (solid red), a diagonal image (dashed black), a JPD sum-coordinate image (dashed blue) and an intensity image acquired using a 16 μm-pixel-pitch camera (dashed red). All frequency values are normalized to the same reference frequency k0 = 1/Δ.

We also measure the spatially resolved JPD of the photon pairs Γijkl by identifying photon coincidences between any arbitrary pair of pixels (i, j) and (k, l) centred at spatial positions \(\overrightarrow{{r}_{1}}=({x}_{i},{y}_{j})\) and \(\overrightarrow{{r}_{2}}=({x}_{k},{y}_{l})\) using the method described in ref. 28. The information contained in the JPD can be used for various purposes. For example, it was used in pioneering works with EMCCD cameras to estimate position and momentum correlation widths of entangled photon pairs in direct imaging of the Einstein-Podolsky-Rosen paradox21,22. In the experiment here, the diagonal component of the JPD, Γijij, reconstructs an image of the object i.e. \({{{\Gamma }}}_{ijij}=| t({x}_{i},{y}_{j}){| }^{4}\). Such a diagonal image is the quantity that is conventionally measured and used in all photon-pair-based imaging schemes using a near-field illumination configuration5,13,25,29. As shown in Fig. 1c, however, measuring the diagonal image in our experiment does not improve the image quality i.e. the Moiré pattern is still present.

There is another way to retrieve an image of the object without using the diagonal component of the JPD. To do so, one can project the JPD along its sum-coordinate axis by summing Γ:

$${P}_{{i}^{+}{j}^{+}}^{+}=\mathop{\sum }\limits_{i=1}^{{N}_{X}}\mathop{\sum }\limits_{j=1}^{{N}_{Y}}{{{\Gamma }}}_{({i}^{+}-i)({j}^{+}-j)ij}\,,$$
(1)

where NY × NX is the number of pixels of the illuminated region of the camera sensor, P+ is defined as the sum-coordinate projection of the JPD and (i+, j+) are sum-coordinate pixel indices. Such a projection retrieves an image of the object sampled over four times the number of pixels of the sensor: \({P}_{{i}^{+}{j}^{+}}^{+}=| t({x}_{i},{y}_{j}){| }^{4}\) for even pixel indices i.e. i+ = 2i and j+ = 2j, with xi = iΔ and yj = jΔ; \({P}_{{i}^{+}{j}^{+}}^{+}=| t({x}_{i+1/2},{y}_{j+1/2}){| }^{4}\) for odd pixel indices i.e. i+ = 2i + 1 and j+ = 2j + 1, with xi+1/2 = xi + Δ/2 and yj+1/2 = yj + Δ/2. Pixel resolution is therefore increased by a factor of 2 (see section JPD pixel super-resolution principle for more details). Figure 1d shows the resulting sum-coordinate projection of the JPD measured in the experiment shown in Fig. 1a. Thanks to pixel super-resolution, the spurious low-frequency Moiré modulation has been removed and the 10 grating periods are now clearly visible. For comparison, this high-resolution image is very similar to a conventional intensity image acquired using a camera with half the pixel pitch i.e. 16 μm (Fig. 1e).
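To make Eq. (1) concrete, the following is a minimal NumPy sketch of the sum-coordinate projection (our illustration, not the authors' published code); it assumes the measured JPD is stored as a 4D array gamma[i, j, k, l] over the two pixel coordinates of each photon, with 0-based indices.

```python
import numpy as np

def sum_coordinate_projection(gamma):
    """Sum-coordinate projection of a 4D JPD (Eq. 1).

    gamma : array of shape (Nx, Ny, Nx, Ny); gamma[i, j, k, l] is the
        coincidence rate between pixels (i, j) and (k, l).
    Returns P+ of shape (2*Nx - 1, 2*Ny - 1): the object sampled on a
    grid twice as fine as the physical pixel grid.
    """
    nx, ny = gamma.shape[:2]
    p_plus = np.zeros((2 * nx - 1, 2 * ny - 1))
    for i in range(nx):
        for j in range(ny):
            # gamma[:, :, i, j] contributes to sum-coordinates (k + i, l + j)
            p_plus[i:i + nx, j:j + ny] += gamma[:, :, i, j]
    return p_plus
```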

The frequency analysis of these different images is shown in Fig. 1f and provides more quantitative information about our approach. In particular, we observe the absence of a frequency peak at 0.2 in the sum-coordinate image spectrum (dashed blue), while it is present in both the intensity (solid red) and JPD-diagonal (dashed black) image spectra. It is replaced by a peak at 0.8, which is the true frequency component of the object (harmonic), as confirmed by its presence in the spectrum of the intensity image acquired using the high-resolution, 16 μm-pixel-pitch camera (dashed red). Removal of the aliased frequency component also corresponds to the disappearance of the Moiré pattern in real space. This confirms that our approach achieves pixel super-resolution and retrieves information that was lost due to undersampling. Additional measurements provided in supplementary document section 3 are acquired using a camera of even lower resolution (48 μm pixel pitch) and show that JPD pixel super-resolution can also recover the fundamental spectral component (main peak) even when it is absent in both the intensity and diagonal images. JPD pixel super-resolution is also confirmed by simulations detailed in supplementary document section 2.8.

Figure 1g shows system modulation transfer functions (MTF) calculated using the slanted-edge technique30 with different imaging modalities. The MTF obtained using the JPD sum-coordinate projection (dashed blue) is approximately 1.7 times broader than those acquired using intensity (solid red) and JPD diagonal (dashed black) images, almost matching the MTF retrieved by intensity measurement with the high-resolution 16 μm-pixel-pitch camera (dashed red). This shows that JPD pixel super-resolution not only doubles the Nyquist frequency (1/(2Δ) → 1/Δ), but also broadens the system MTF, which results in less attenuation of higher frequencies. The broadening of the MTF is explained by the fact that the effective size of a ’pixel’ during a JPD measurement (i.e. the size of the surface over which the coincidences are integrated) is on average smaller than the real size of the pixels. This shows that images retrieved by JPD pixel super-resolution are similar to conventional intensity images obtained with a camera that has four times more pixels (higher Nyquist frequency) but also smaller pixels (broader MTF). More details about MTF measurements are provided in Methods and in supplementary document section 4.

Interestingly, the results in Fig. 1 also lead to another conclusion: contrary to a common belief in the field, conventional imaging with photon pairs (i.e. using the JPD diagonal) does not always improve image quality compared to classical intensity imaging. For example, here the spurious Moiré effect in the diagonal image (Fig. 1c) is more intense than in the direct intensity image (Fig. 1b), which is also confirmed by the higher intensity of the 0.2 frequency peak in the diagonal image spectrum (Fig. 1f).

JPD pixel super-resolution principle

We gain further insight into the underlying principle of JPD pixel super-resolution from inspection of the JPD of a spatially entangled two-photon state: this is a four-dimensional object containing much richer information than a conventional 2D intensity image. The JPD contains correlation information not only between photons detected at the same pixel, but also between photons detected at nearest-neighbour pixels.

Figure 2 illustrates this concept in the one-dimensional case. In Fig. 2a, photon pairs with a correlation width σ illuminate a one-dimensional object t(x) imaged onto an array of pixels with pitch Δ and pixel gap δ (i.e. the width of the non-active areas between neighbouring pixels). When measuring the JPD, there are two main contributions: (i) The first originates from pairs of photons detected at the same pixel (blue rays) and forms the JPD diagonal elements, Γkk. Because these pairs crossed the object around the same positions as the pixels i.e. xk, the resulting image (Fig. 2b) provides a sampling of the object \({{{\Gamma }}}_{kk} \sim | t({x}_{k}){| }^{4}\) similar to that performed by a conventional intensity measurement. (ii) The second originates from photon pairs detected by nearest-neighbour pixels and forms the JPD off-diagonal elements Γkk+1. Because these photons crossed the object around positions located between the pixels i.e. xk+1/2, the resulting image (Fig. 2c) provides a sampling of the object \({{{\Gamma }}}_{kk+1} \sim | t({x}_{k+1/2}){| }^{4}\) similar to an intensity measurement performed with a sensor shifted by Δ/2 in the transverse direction (see Methods for derivations of Γkk and Γkk+1).

Fig. 2: Principle of JPD pixel super-resolution.
figure 2

a Schematic of the optical imaging system composed of an object t illuminated by entangled photon pairs with correlation width σ and imaged onto an array of pixels. Δ and δ are the pixel pitch and gap, respectively. Positions at the centre of pixels are denoted using integer indices i.e. xk, while those between pixels are denoted using half-integer indices i.e. xk+1/2. Blue rays represent photon pairs falling on a single pixel and red rays those falling on two nearest-neighbour pixels. b, c One-dimensional images formed by the JPD diagonal elements Γkk and off-diagonal elements Γkk+1, respectively. d Performing a sum-coordinate projection of the JPD using equation (1) recombines these two low-resolution images into a high-resolution one.

Finally, projecting the JPD along the sum-coordinate (Eq. 1) retrieves a high-resolution image (Fig. 2d) by interlacing diagonal and off-diagonal elements. To understand this recombination, one can expand Eq. 1 in the one-dimensional case for even and odd pixels:

$${P}_{{i}^{+} = 2k}^{+}={{{\Gamma }}}_{kk}+2\mathop{\sum }\limits_{l=1}^{N}{{{\Gamma }}}_{(k-l)(k+l)}$$
(2)
$${P}_{{i}^{+} = 2k+1}^{+}=2{{{\Gamma }}}_{k(k+1)}+2\mathop{\sum }\limits_{l=1}^{N}{{{\Gamma }}}_{(k-l)(k+1+l)}$$
(3)

where N is the number of pixels of the sensor. In theory, when operating under the constraint σ < Δ, correlations between non-neighbouring pixels are nearly zero i.e. Γij ≈ 0 if ∣i − j∣ ≥ 2. The sum terms in Eqs. 2 and 3 then become negligible compared to Γkk and Γkk+1. This leads to \({P}_{{i}^{+} = 2k}^{+}={{{\Gamma }}}_{kk}\) and \({P}_{{i}^{+} = 2k+1}^{+}=2{{{\Gamma }}}_{kk+1}\), providing the super-resolved image. In practice, however, experimentally measured correlation values are noisy. All the noise then adds up in the sums in Eqs. 2 and 3, ultimately producing noise with a variance large enough to dominate the diagonal and off-diagonal elements in the final image. To solve this issue, the JPD is filtered before performing the projection to remove the weakest correlation values i.e. all terms except Γkk and Γkk+1 are set to zero. In doing so, we also remove the noise associated with these values, which then does not add up in the sums, and significantly improve the quality of the final image (see Methods and Fig. 5 for more details).
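As an illustration of this filtering step, here is a minimal one-dimensional NumPy sketch (ours, not the authors' code): only the diagonal Γkk and the first off-diagonals Γkk±1 are kept before projecting, so the noise of the discarded elements never enters the sums of Eqs. 2 and 3.

```python
import numpy as np

def filtered_sum_projection_1d(gamma):
    """Keep only Gamma[k, k] and Gamma[k, k +/- 1], then project along
    the sum-coordinate. Even output entries ~ Gamma[k, k]; odd entries
    ~ 2 * Gamma[k, k+1], i.e. the interlaced super-resolved image."""
    n = gamma.shape[0]
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) <= 1   # banded filter
    gamma_f = np.where(mask, gamma, 0.0)
    p_plus = np.zeros(2 * n - 1)
    for i in range(n):
        p_plus[i:i + n] += gamma_f[:, i]              # Eq. 1 in 1D
    return p_plus
```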

In addition, it should be noted that the JPD diagonal values (i.e. Γijij in the two-dimensional case) are not measured directly with single-photon cameras, as most of these devices are not photon-number resolving. To circumvent this limitation, diagonal values are estimated from correlations between either vertically (Γi(j±1)ij) or horizontally (Γ(i±1)jij) neighbouring pixels. This has the practical consequence of restricting the super-resolution effect to one dimension, along the complementary axis. In the general case, an image super-resolved in two dimensions can still be obtained by combining two images super-resolved in each direction. In the specific case of EMCCD cameras, this is, however, not possible because of charge smearing5. This effect compromises the horizontal correlation values and truly restricts the super-resolution effect to one dimension, along the vertical spatial axis (see Methods). These limitations are lifted when operating in a far-field illumination configuration14,17,27 (see section Application to quantum holography) and in quantum imaging schemes using two distinct cameras16,31.
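A minimal sketch of this diagonal estimation (our illustration of the interpolation detailed in Methods, using the horizontal-neighbour version; edge pixels are left untouched):

```python
import numpy as np

def estimate_diagonal(gamma):
    """Estimate the unmeasurable same-pixel values Gamma[i, j, i, j]
    from horizontally neighbouring correlations:
    Gamma[i, j, i, j] ~ (Gamma[i, j, i+1, j] + Gamma[i, j, i-1, j]) / 2."""
    nx, ny = gamma.shape[:2]
    for i in range(1, nx - 1):
        for j in range(ny):
            gamma[i, j, i, j] = 0.5 * (gamma[i, j, i + 1, j]
                                       + gamma[i, j, i - 1, j])
    return gamma
```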

The previous analysis also shows that two experimental conditions must be satisfied to achieve pixel super-resolution. First, the sensor fill factor must be large enough to allow photon coincidence detection between neighbouring pixels i.e. δ < σ. Second, the spatial resolution of the imaging system must be limited by the pixel size. This is true if the highest spatial frequency component of the optical field in the object plane is both smaller than the imaging system spatial frequency cut-off and larger than the sensor Nyquist frequency 1/(2Δ). In practice, one must verify at least that σ < Δ. Otherwise, a JPD sum-coordinate projection can still be retrieved, but it will not contain more information than a conventional intensity measurement. In the following, we apply our approach to real quantum imaging experiments in which the condition δ < σ < Δ holds. We note that this condition is straightforward to satisfy. Indeed, σ < Δ is the starting requirement for any form of pixel super-resolution to make sense: without this condition, there would not be any aliasing effects in the first place. The other condition, δ < σ, is always satisfied unless the pixel fill factor is extremely low.

Applications to quantum illumination

We demonstrate our technique on two different experimental schemes based on the two-photon quantum illumination protocols described in refs. 13 and 27. In the first protocol, an amplitude object t1 is illuminated by photon pairs and imaged onto an EMCCD camera using an experimental setup similar to that shown in Fig. 1a. Simultaneously, another object t2 is illuminated by a classical source and also imaged onto the camera (see Methods and supplementary document section 7.1 for more details about the experimental setup). The intensity image, Fig. 3a, shows a superposition of both the quantum and classical images. The goal of such a protocol is to segment the quantum object and therefore retrieve an image showing only the object t1 illuminated by photon pairs, effectively removing any classical objects or noise. Conventionally, this is achieved by measuring the diagonal image shown in Fig. 3b. Using the JPD pixel super-resolution processing method, we can now perform this task and simultaneously retrieve a pixel super-resolved image of t1 (Fig. 3c), in which we observe the clear removal of the Moiré effect due to aliasing.

Fig. 3: Results in quantum illumination imaging.
figure 3

Experiment performed using a setup similar to that shown in Fig. 1a, in which another object illuminated by a classical light source (LED) is inserted. Images of both objects (positive and negative resolution charts) are superimposed onto the camera. a Intensity image, (b) diagonal image and (c) JPD pixel super-resolution image acquired using a camera with a 32 μm pixel pitch. d Intensity image, (e) diagonal image and (f) JPD pixel super-resolution image acquired using a camera with a 16 μm pixel pitch. The photon correlation width in the camera plane is σ ≈ 8 μm.

In addition, we reproduced the same experiment using a higher-resolution camera i.e. with a 16 μm pixel pitch (Fig. 3d–f). In this case, we observe that JPD pixel super-resolution anti-aliasing is mainly visible at the edges and corners of the object, in particular through the attenuation of the so-called staircase effect. Thus, even in imaging situations where aliasing does not produce artefacts as clear as the Moiré effect, the JPD pixel super-resolution approach still provides clear benefits in removing aliasing and improving the overall image quality.

We also apply our technique to another quantum illumination protocol, demonstrated in ref. 27. This protocol performs the same task as in Fig. 3 i.e. distilling the quantum image from the classical one, but uses a far-field illumination configuration, operates in reflection and detects photons with a SPAD camera (see supplementary document section 7.2 for results and experimental arrangement).

Application to quantum holography

We now demonstrate JPD pixel super-resolution on two different experimental quantum holography schemes. The first approach was demonstrated in ref. 17 and its experimental configuration is described in Fig. 4a. Pairs of photons entangled in space and polarisation illuminate an SLM and a birefringent phase object t6, each positioned in one half of an optical plane, using a far-field illumination configuration. The object and the SLM are then both imaged onto two different parts of an EMCCD camera.

Fig. 4: Results in entanglement-enabled quantum holography.
figure 4

a Experimental setup. Light emitted by a laser diode at 405 nm and polarized at 45° illuminates a stacked BBO crystal pair (0.5 mm thickness each) whose optical axes are perpendicular to each other, producing pairs of photons entangled in space and polarization by type-I SPDC. After the crystals, pump photons are filtered out by a band-pass filter at 810 ± 5 nm. A lens f11 is used to Fourier-image the photon pairs onto an optical plane where an SLM and a phase object t6 are positioned. A two-lens imaging system f12 − f13 is then used to image the SLM and the object onto two different parts of an EMCCD camera. A polarizer (P) at 45° is positioned before the camera. The biphoton correlation width in the camera plane is estimated as σ ≈ 9 μm. The EMCCD camera pixel pitch is 16 μm. b Photo of the birefringent object used in the experiment (section of a bird feather). c Spatial phase of the object reconstructed by combining four anti-diagonal images Γij −i −j measured for four phase shifts {0, π/2, π, 3π/2} programmed by the SLM. d Spatial phase of the same object reconstructed from four minus-coordinate projections P−. White scale bar is 1 mm.

In such a far-field configuration, the information of the JPD is now concentrated around the anti-diagonal (i.e. Γij −i −j) because the photon pairs are spatially anti-correlated in the object plane32. This anti-diagonal is the quantity that is conventionally measured and used in all photon-pair-based imaging schemes that use a far-field illumination configuration14,17,27. In this case, the JPD pixel super-resolution technique must be adapted by using the minus-coordinate projection P− of the filtered JPD in place of the sum-coordinate projection P+ to retrieve the high-resolution image (see Methods).

The object considered here is a section of a bird feather, shown in Fig. 4b. The SLM is used to compensate for optical aberrations and to implement a four-step phase-shifting holographic process by displaying uniform phase patterns with values {0, π/2, π, 3π/2} (see Methods). In the original protocol, four different images are obtained, one for each step of the process, by measuring the anti-diagonal component Γij −i −j of the JPD. These images are then combined numerically to reconstruct the spatial phase of the birefringent object (Fig. 4c). Using the new JPD pixel super-resolution approach, the four images are now obtained from the minus-coordinate projection P− of the JPD and recombined to retrieve a phase image with improved spatial resolution (Fig. 4d). We do not observe removal of aliasing artefacts as clear as in Figs. 1d and 3c because of the relatively smooth shape of the bird feather, which is mostly composed of low frequencies below the Nyquist limit. Resolution improvements are therefore mainly located at the edges and corners, which visually translates into an overall improvement of the image quality.
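For completeness, the standard four-step phase-shifting combination reads as below; this is a generic sketch, and the exact sign and normalisation conventions used in ref. 17 may differ.

```python
import numpy as np

def phase_from_four_steps(p_0, p_pi_2, p_pi, p_3pi_2):
    """Four-step phase-shifting holography: each input is the image
    (here, a minus-coordinate projection P-) measured with the SLM
    displaying a uniform phase of 0, pi/2, pi and 3pi/2, respectively.
    Returns the reconstructed spatial phase, wrapped to [-pi, pi]."""
    return np.arctan2(p_3pi_2 - p_pi_2, p_0 - p_pi)
```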

The second approach is a full-field version of a N00N-state holographic scheme based on photon pairs (N = 2). N00N states are known to provide phase measurements with N-times better sensitivity than classical holography7. Our results show that a full-field version of N00N-state holography with photon pairs not only preserves the two-fold sensitivity enhancement, but also provides a better pixel resolution, an advantage that could only be matched by a scanning approach6 that would also increase the acquisition time by a factor of four (see supplementary document section 6 for the experimental layout and results).

Discussion

We have introduced a pixel super-resolution technique based on the measurement of the spatially resolved JPD of spatially entangled photons. This approach retrieves spatial information about an object that is lost due to pixelation, without shifting optical elements or relying on prior knowledge. We demonstrated that this JPD pixel super-resolution approach can improve the spatial resolution in already established quantum illumination and entanglement-enabled holography schemes, as well as in a full-field version of N00N-state quantum holography, using different types of illumination (near-field and far-field) and different single-photon-sensitive cameras (EMCCD and SPAD). Our approach has the advantage that it can be used immediately in quantum imaging schemes based on photon pairs, in some cases simply by reprocessing already acquired data. In addition, our approach can also be implemented in any classical imaging system limited by pixelation, after substituting the classical source with a source of correlated photons with similar properties. Indeed, contrary to classical pixel super-resolution techniques, such as shift-and-add approaches33, wavelength scanning34 and machine-learning-based algorithms35, the JPD pixel super-resolution approach does not require displacing optical elements in the system or having prior knowledge about the object being imaged. Although we experimentally demonstrated this technique for the case of p = 2 (photon pair) entanglement, we anticipate that our approach could be generalised to any p to further increase the pixel resolution. Photon-pair sources are without doubt the current experimental choice in any given lab, but recent efforts have shown promising progress towards the direct generation of spatially entangled three-photon states36. We also underline that, although the schemes shown here used spatially entangled photons, strictly speaking it is not entanglement but only spatial correlations that are used to generate the JPD. This opens the intriguing prospect for future work of investigating the potential of classical sources of light, e.g. thermal light, to achieve pixel super-resolution similar to that shown here but with ready access to p > 2 JPDs.

Methods

Experimental layouts

Experiment in Fig. 1a

The BBO crystal has dimensions of 0.5 × 5 × 5 mm and is cut for type-I SPDC at 405 nm with a half opening angle of 3 degrees (Newlight Photonics). It is slightly rotated around its horizontal axis to ensure near-collinear phase matching of photons at the output (i.e. the ring collapses into a disk). The pump is a continuous-wave laser at 405 nm (Coherent OBIS-LX) with an output power of approximately 200 mW and a beam diameter of 0.8 ± 0.1 mm. A 650 nm-cut-off long-pass filter is used to block pump photons after the crystal, together with a band-pass filter centred at 810 ± 5 nm. The camera is an EMCCD (Andor iXon Ultra 897) that operates at −60 °C, with a horizontal pixel shift readout rate of 17 MHz, a vertical pixel shift every 0.3 μs, a vertical clock amplitude voltage of +4 V above the factory setting and an amplification gain set to 1000. The camera sensor has a total of 512 × 512 pixels with a 16 μm pixel pitch and nearly unity fill factor. In Fig. 1b–d, the camera is operated with a pixel pitch of Δ = 32 μm by using 2 × 2 binning. In Fig. 1e, the camera is operated with a pixel pitch of 16 μm. The exposure time is set to 2 ms. The camera speed is about 100 frames per second (fps) using a region of interest of 100 × 100 pixels. The two-lens imaging system f1 − f2 in Fig. 1a is represented by two lenses for clarity, but is composed of a series of six lenses with focal lengths 45, 75, 50, 150, 100 and 150 mm. The first and last lenses are positioned at focal distance from the crystal and the object, respectively, and the distance between two consecutive lenses equals the sum of their focal lengths. Similarly, the second two-lens imaging system f3 − f4 in Fig. 1a is composed of a series of four consecutive lenses with focal lengths 150, 50, 75 and 100 mm arranged as in the previous case. The imaging system magnification is 3.3. The photon correlation width in the camera plane is estimated as σ ≈ 13 μm.

Experiment used to acquire images in Fig. 3

The experimental setup is the same as that shown in Fig. 1a, with some changes in the lenses used and the addition of an external arm to superimpose the classical image. It is shown in Fig. 15a of the supplementary document. The output surface of the crystal is imaged onto an object t1 using a two-lens imaging system with focal lengths f5 = 35 mm and f6 = 75 mm. The object is then imaged onto the camera using a single-lens imaging system composed of one lens with focal length f7 = 50 mm positioned at a distance of 100 mm from both the object and the camera. Another object t2 is inserted, illuminated by a spatially filtered light-emitting diode (LED) and spectrally filtered at 810 ± 5 nm. Images of both objects are superimposed on the camera using a beam splitter. t1 and t2 are negative and positive amplitude USAF resolution charts, respectively. They are shown in Figs. 15b and c of the supplementary document. The exposure time is set to 6 ms. The imaging system magnification is 2.1. The biphoton correlation width in the camera plane is estimated as σ ≈ 8 μm. 6 × 10⁶ frames were acquired to retrieve the intensity image and the JPD in approximately 20 h of acquisition. The same setup was used in ref. 13. More details are provided in section 7 of the supplementary document.

Experiment in Fig. 4a

The paired, stacked BBO crystals have dimensions of 0.5 × 5 × 5 mm each and are cut for type-I SPDC at 405 nm. They are optically contacted, with one crystal rotated by 90 degrees about the axis normal to the incidence face. Both crystals are slightly rotated around the horizontal and vertical axes to ensure near-collinear phase matching of photons at the output (i.e. the rings collapse into disks). The pump laser and camera are the same as in Fig. 1a. A 650 nm-cut-off long-pass filter is used to block pump photons after the crystals, together with a band-pass filter centred at 810 ± 5 nm. The SLM is a phase-only modulator (Holoeye Pluto-2-NIR-015) with 1920 × 1080 pixels and an 8 μm pixel pitch. For clarity, it is represented in transmission in Fig. 4a, but it was operated in reflection. The exposure time is set to 3 ms. The camera speed is about 40 frames per second using a region of interest of 200 × 200 pixels. The single-lens Fourier imaging system f11 is composed of a series of three lenses with focal lengths 45, 125 and 150 mm. The first and last lenses are positioned at focal distance from the crystal and the object-SLM plane, respectively, and the distance between each pair of lenses equals the sum of their focal lengths. The two-lens imaging system f12 − f13 is in reality composed of a series of four lenses with focal lengths 150, 75, 75 and 100 mm. The first and last lenses are positioned at focal distance from the SLM-object plane and the camera, respectively, and the distance between two consecutive lenses equals the sum of their focal lengths. The imaging system effective focal length is 36 mm. The photon correlation width in the camera plane is estimated as σ ≈ 9 μm. 2.5 × 10⁶ frames were acquired to retrieve the intensity image and the JPD in each case, in approximately 17 h of acquisition. The same setup was used in ref. 17. More details are provided in section 5 of the supplementary document.

JPD measurement with a camera

Γijkl, where (i, j) and (k, l) are two arbitrary pairs of pixels centred at positions \(\overrightarrow{{r}_{1}}=({x}_{i},{y}_{j})\) and \(\overrightarrow{{r}_{2}}=({x}_{k},{y}_{l})\), is measured by acquiring a set of M + 1 frames \({\{{I}^{(m)}\}}_{m\in [[1,M+1]]}\) using a fixed exposure time and then processing them using the formula:

$${{{\Gamma }}}_{ijkl}=\frac{1}{M}\mathop{\sum }\limits_{m=1}^{M}\left[{I}_{ij}^{(m)}{I}_{kl}^{(m)}-{I}_{ij}^{(m)}{I}_{kl}^{(m+1)}\right]$$
(4)

In all the results of our work, M was of the order of 10⁶–10⁷ frames. However, it is essential to note that not all the JPD values can be directly measured using this process. When using an EMCCD camera, (i) correlation values at the same pixel, i.e. Γijij, cannot be directly measured because Eq. 4 is only valid for different pixels (i, j) ≠ (k, l)28, and (ii) correlation values between vertical pixels, i.e. Γiji (j±q) (where q is an integer that defines the position of a pixel above or below (i, j)), cannot be measured because of charge smearing effects. To circumvent this issue, these values are interpolated from neighbouring correlation values of the JPD i.e. [Γij (i+1) j + Γij (i−1) j]/2 → Γijij and [Γij (i+1) (j±q) + Γij (i−1) (j±q)]/2 → Γiji (j±q), as detailed in ref. 5. As a result, the gain in resolution along the x-axis in the experiments using near-field imaging configurations (Figs. 1 and 3, and Figs. 15 and 10 of the supplementary document) is not optimal. However, it is important to note that the gain in pixel resolution along the y-axis is not affected by this interpolation technique. The spectral analyses performed in Fig. 1 are therefore also not impacted, because the grid objects used are horizontal (no spectral component on the x-axis) and all the resulting spectral curves are obtained by summing along the x-axis. In addition, this interpolation also does not affect the experiments using far-field illumination configurations in Fig. 4a and Fig. 16 of the supplementary document because the JPD diagonals do not contain any relevant imaging information (which is in the JPD anti-diagonals). More details are provided in ref. 28 and in section 1 of the supplementary document.
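As a sketch of Eq. 4 (our illustration; memory grows as (NxNy)², so this is only practical for small regions of interest): the product of each frame with itself accumulates genuine plus accidental coincidences, while the product with the next, statistically independent frame estimates the accidental background to be subtracted.

```python
import numpy as np

def jpd_from_frames(frames):
    """Estimate the JPD from M+1 frames using Eq. 4.

    frames : array of shape (M + 1, Nx, Ny) of single-photon frames.
    Returns gamma of shape (Nx, Ny, Nx, Ny).
    """
    a = frames[:-1].astype(np.float64)   # frames I^(m), m = 1..M
    b = frames[1:].astype(np.float64)    # shifted frames I^(m+1)
    m = len(a)
    gamma = np.einsum('mij,mkl->ijkl', a, a) / m   # raw coincidences
    gamma -= np.einsum('mij,mkl->ijkl', a, b) / m  # accidental background
    return gamma
```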

JPD pixel super-resolution in far-field illumination configuration

When an object is illuminated by photon pairs using a far-field illumination configuration (i.e. the crystal is Fourier-imaged onto the object), the JPD pixel super-resolution technique must be adapted, and the sum-coordinate projection P+ cannot be used to retrieve the high-resolution image. First, instead of the diagonal, information about the object is retrieved by displaying the anti-diagonal component of the JPD i.e. \({{{\Gamma }}}_{ij-i-j}\approx | t({x}_{i},{y}_{j}){| }^{2}| t(-{x}_{i},-{y}_{j}){| }^{2}\). Γij −i −j is the quantity that is conventionally measured and used in all photon-pair-based imaging schemes using a far-field illumination configuration14,17,27. Second, instead of using the sum-coordinate projection, a super-resolved image of the object is retrieved by integrating the JPD along its minus-coordinate axis:

$${P}_{{i}^{-}{j}^{-}}^{-}=\mathop{\sum }\limits_{i=1}^{{N}_{X}}\mathop{\sum }\limits_{j=1}^{{N}_{Y}}{{{\Gamma }}}_{({i}^{-}+i)({j}^{-}+j)ij}\,,$$
(5)

where NY × NX is the number of pixels of the illuminated region of the camera sensor, P− is defined as the minus-coordinate projection of the JPD and (i−, j−) defines the minus-coordinate pixel to which a spatial variable \(\overrightarrow{{r}^{-}}=({x}_{{i}^{-}},{y}_{{j}^{-}})\) is associated. Each value \({P}_{{i}^{-}{j}^{-}}^{-}\) is obtained by adding all the values Γijkl located on an anti-diagonal of the JPD defined by i − k = i− and j − l = j− (i.e. \(\overrightarrow{{r}_{1}}-\overrightarrow{{r}_{2}}=\overrightarrow{{r}^{-}}\)). In theory, calculating the minus-coordinate projection of the JPD can therefore achieve pixel super-resolution and potentially retrieve lost spatial information of undersampled objects. However, it is important to note that the JPD anti-diagonal Γij −i −j and minus-coordinate projection P− images are not directly proportional to t(xi, yj) as in the near-field configuration, but to t(xi, yj)t(−xi, −yj), which does not always enable t to be retrieved without ambiguity. In works using a far-field illumination configuration, this problem is solved by illuminating t with only half of the photon-pair beam (i.e. t(x, y ≤ 0) = 1). The object then appears twice in the retrieved image (the object and its symmetric counterpart), but no information is lost.
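A minimal sketch of Eq. 5, analogous to the sum-coordinate code above (ours, with 0-based indices): each JPD anti-diagonal is accumulated into the output pixel indexed by the coordinate difference, stored with an offset of N − 1 so that negative differences fit in the array.

```python
import numpy as np

def minus_coordinate_projection(gamma):
    """Minus-coordinate projection of a 4D JPD (Eq. 5).

    gamma : array of shape (Nx, Ny, Nx, Ny).
    Returns P- of shape (2*Nx - 1, 2*Ny - 1); the zero-difference
    coordinate sits at the centre of the output array.
    """
    nx, ny = gamma.shape[:2]
    p_minus = np.zeros((2 * nx - 1, 2 * ny - 1))
    for i in range(nx):
        for j in range(ny):
            # gamma[:, :, i, j] contributes to differences (k - i, l - j)
            p_minus[nx - 1 - i:2 * nx - 1 - i,
                    ny - 1 - j:2 * ny - 1 - j] += gamma[:, :, i, j]
    return p_minus
```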

JPD filtering

Figure 5a shows the sum-coordinate projection P+ calculated using equation (1) from the unfiltered JPD measured in Fig. 3. This image is very noisy and does not reveal the object. To solve this issue, a filtering method is applied, which consists in setting to 0 all values of the JPD that have a negligible weight (i.e. values close to zero) so that their associated noise does not contribute when performing the sum. When using a near-field illumination configuration (Figs. 1 and 3, and Figs. 10 and 15 of the supplementary document), filtering is applied by setting all JPD values to zero except those of the main JPD diagonal Γijij and of the eight other diagonals directly above and below, i.e. Γij (i±1) j, Γiji (j±1) and Γij (i±1) (j±1). When using a far-field illumination configuration (Figs. 4a and 16 of the supplementary document), filtering is applied by setting all JPD values to zero except those of the main JPD anti-diagonal Γij −i −j and those of the eight other anti-diagonals directly above and below, i.e. Γij (−i±1) −j, Γij −i (−j±1) and Γij (−i±1) (−j±1).

Fig. 5: Filtering and normalization.
figure 5

a Sum-coordinate projection of the JPD P+ measured in the experiment in Fig. 3 without filtering. b Sum-coordinate projection of the JPD P+ measured in the experiment in Fig. 3 after filtering and before normalisation. c Graphical representation of Eq. 6.

Normalisation

Figure 5b shows the sum-coordinate projection P+ directly calculated from the filtered JPD measured in the experiment in Fig. 3, before normalisation. We observe that this image has an artefact taking the form of inhomogeneous horizontal and vertical stripes. This artefact is very similar to that commonly observed in shift-and-add super-resolution techniques33 and is often referred to as a ’motion error artefact’. In our case, it originates from the difference in the effective detection areas of photon pairs when they are detected by the same pixel (diagonal elements) or by neighbouring pixels (off-diagonal elements), as illustrated in Fig. 2. Using an analogy with the shift-and-add technique, our problem is equivalent to a situation in which the different shifted low-resolution images were measured using cameras with different pixel widths during the first step of the process. Then, when these low-resolution images are recombined into a high-resolution one (second step of shift-and-add), the artefact appears in the resulting image because some pixels are less intense than others. In practice, the simplest way to remove this artefact is to normalise each low-resolution image by its total average intensity so that neighbouring pixels in the high-resolution image are at the same level. We use such a normalisation approach in our work by dividing all values of the non-zero diagonals in the filtered JPD by their spatial average value i.e. \({{{\Gamma }}}_{ij(i+{i}^{-})(j+{j}^{-})}\to {{{\Gamma }}}_{ij(i+{i}^{-})(j+{j}^{-})}/{\sum }_{i,j}{{{\Gamma }}}_{ij(i+{i}^{-})(j+{j}^{-})}\), where (i−, j−) identifies a specific JPD diagonal. After normalisation, Fig. 3f is obtained. In the case of far-field illumination, the same normalisation is applied to the values of the non-zero anti-diagonals in the filtered JPD i.e. \({{{\Gamma }}}_{ij(i+{i}^{+})(j+{j}^{+})}\to {{{\Gamma }}}_{ij(i+{i}^{+})(j+{j}^{+})}/{\sum }_{i,j}{{{\Gamma }}}_{ij(i+{i}^{+})(j+{j}^{+})}\), where (i+, j+) identifies a specific JPD anti-diagonal.
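The normalisation step can be sketched as follows (our illustration, assuming the near-field case where the nine retained diagonals have offsets di, dj ∈ {−1, 0, 1}); each diagonal is divided by its own sum so that the interlaced sub-images end up on the same intensity level.

```python
import numpy as np

def normalise_diagonals(gamma_f, offsets=(-1, 0, 1)):
    """Normalise each retained diagonal of a filtered 4D JPD in place:
    Gamma[i, j, i + di, j + dj] /= sum over (i, j), for each (di, dj)."""
    nx, ny = gamma_f.shape[:2]
    for di in offsets:
        for dj in offsets:
            # valid pixel indices such that (i + di, j + dj) stays on the sensor
            i = np.arange(max(0, -di), nx - max(0, di))
            j = np.arange(max(0, -dj), ny - max(0, dj))
            ii, jj = np.meshgrid(i, j, indexing='ij')
            vals = gamma_f[ii, jj, ii + di, jj + dj]
            total = vals.sum()
            if total != 0:
                gamma_f[ii, jj, ii + di, jj + dj] = vals / total
    return gamma_f
```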

In some cases, the artefact is reduced but still visible in the resulting image even after normalisation. The persistence of this artefact is due to the fact that the difference in the effective integration areas between diagonal and off-diagonal elements is too large to be accurately corrected by a simple sum normalisation. For example, in the experiment shown in Fig. 10a of the supplementary document, the poor quality of the SPAD camera sensor, in particular its very low fill factor (10.5%), is probably at the origin of the remaining artefact. To further reduce this artefact, one could use more complex normalisation algorithms, such as L1- or L2-norm minimisation approaches37 and kernel regression38, which are commonly used in shift-and-add approaches.

Slanted-edge method

MTF measurements using the slanted-edge approach were performed with a razor blade tilted by approximately 100 mrad and positioned in the object plane of the experimental configuration shown in Fig. 1, following the standard method described in ref. 30. Broadening of the curves is estimated by comparing the spatial frequency values at which the MTF is 50% of its low-frequency value (i.e. the MTF50 criterion). More details are provided in section 4 of the supplementary document.
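For orientation, the core of the slanted-edge computation can be sketched as follows (a generic sketch of the method of ref. 30, not the authors' implementation), starting from an oversampled edge-spread function (ESF) built by projecting pixel values onto the edge normal.

```python
import numpy as np

def mtf_from_esf(esf):
    """Slanted-edge core step: differentiate the oversampled edge-spread
    function into a line-spread function (LSF), then take the magnitude
    of its Fourier transform and normalise the DC value to 1."""
    lsf = np.gradient(esf)
    lsf *= np.hanning(lsf.size)           # window to limit noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

def mtf50(freqs, mtf):
    """Frequency at which the MTF first drops below 50% of its DC value."""
    below = np.flatnonzero(mtf < 0.5)
    return freqs[below[0]] if below.size else freqs[-1]
```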

Estimation of the photon correlation width σ

Near-field illumination configuration

The value of the correlation width in the image plane, σ, is obtained by calculating the position-correlation width at the output of the crystal using the formula \(\sqrt{\alpha L{\lambda }_{p}/(2\pi )}\) (L is the crystal thickness, λp is the pump beam wavelength and α is a parameter described in ref. 39) and multiplying it by the magnification of the imaging system.

Far-field configuration

The value of σ is obtained by calculating the angular-correlation width of photons at the output of the crystal using the formula λp/(2ω)39 (ω is the pump beam waist) and multiplying it by the effective focal length of the imaging system.

In our work, values of σ are estimated using theory rather than the experimental techniques described in refs. 21 and 22, because these approaches fail to provide an accurate result precisely when the correlation width is smaller than the pixel width. In addition, note also that these width values are not strict bounds but correspond to standard-deviation widths when modelling spatial correlations with a Gaussian model40. More details are provided in section 2.4 of the supplementary document.
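Both estimates can be transcribed directly as below (a sketch of the two formulas above; the parameter alpha must be taken from ref. 39 and is not reproduced here).

```python
import numpy as np

def sigma_near_field(alpha, L, lam_p, magnification):
    """Position-correlation width at the crystal output, sqrt(alpha*L*lambda_p/(2*pi)),
    scaled by the imaging magnification (alpha: dimensionless parameter of ref. 39)."""
    return np.sqrt(alpha * L * lam_p / (2 * np.pi)) * magnification

def sigma_far_field(lam_p, waist, f_eff):
    """Angular-correlation width lambda_p/(2*omega), scaled by the effective
    focal length of the imaging system."""
    return lam_p / (2 * waist) * f_eff
```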

Analytical derivation of Γkk and Γkk+1

We consider a one-dimensional object t(x) illuminated by photon pairs using a near-field illumination configuration. The photon pairs are characterised by a two-photon wavefunction Ψt(x1, x2) in the object plane. JPD values Γkl are measured using an array of pixels with pitch Δ and gap δ. Assuming that the imaging system is not limited by diffraction but only by the sensor pixel resolution, its point spread function can be approximated by a Dirac delta function and Γkl can be formally written as:

$${{{\Gamma }}}_{kl}=\int\nolimits_{{x}_{k}-\frac{{{\Delta }}-\delta }{2}}^{{x}_{k}+\frac{{{\Delta }}-\delta }{2}}\int\nolimits_{{x}_{l}-\frac{{{\Delta }}-\delta }{2}}^{{x}_{l}+\frac{{{\Delta }}-\delta }{2}}| t({x}_{1})t({x}_{2}){{{\Psi }}}_{t}({x}_{1},{x}_{2}){| }^{2}d{x}_{1}d{x}_{2},$$
(6)

where we assumed unity magnification between the object and camera planes. A graphical representation of this integral is shown in Fig. 5c. For clarity, we only represent an array of three pixels. The bivariate function \(| t({x}_{1})t({x}_{2}){| }^{2}\) is represented in green and overlaps with a grid of squares of size Δ and spacing δ. Each square represents the integration area associated with a specific JPD value. For example, the central square corresponds to the integration area of Γkk i.e. \([{x}_{k}-\frac{{{\Delta }}-\delta }{2},{x}_{k}+\frac{{{\Delta }}-\delta }{2}]\times [{x}_{k}-\frac{{{\Delta }}-\delta }{2},{x}_{k}+\frac{{{\Delta }}-\delta }{2}]\). In addition, the bivariate function \(| {{{\Psi }}}_{t}({x}_{1},{x}_{2}){| }^{2}\) is represented by two dashed black lines. These two lines delimit the most intense part of the function, which corresponds to a diagonal band of width σ using a double-Gaussian model40.
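Equation 6 can be checked numerically for a toy object; the sketch below (our illustration) assumes a double-Gaussian model for the band \(| {{{\Psi }}}_{t}{| }^{2}\) following ref. 40, unity magnification and hypothetical values of Δ, δ and σ.

```python
import numpy as np
from scipy.integrate import dblquad

pitch, gap, sigma = 32e-6, 2e-6, 13e-6          # hypothetical Delta, delta, sigma
t = lambda x: np.cos(2 * np.pi * x / 200e-6)    # toy amplitude object
# double-Gaussian band |Psi_t(x1, x2)|^2, peaked along the diagonal x1 = x2
psi2 = lambda x1, x2: np.exp(-(x1 - x2) ** 2 / (2 * sigma ** 2))

def gamma_kl(k, l):
    """Eq. 6: coincidence rate integrated over the active areas of pixels k, l."""
    half = (pitch - gap) / 2
    xk, xl = k * pitch, l * pitch
    val, _ = dblquad(lambda x2, x1: (t(x1) * t(x2)) ** 2 * psi2(x1, x2),
                     xk - half, xk + half,      # x1 over pixel k
                     xl - half, xl + half)      # x2 over pixel l
    return val

# Gamma[k, k] samples |t(x_k)|^4; Gamma[k, k+1] samples |t(x_{k+1/2})|^4
print(gamma_kl(0, 0), gamma_kl(0, 1))
```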

We seek to calculate the JPD values Γkk and Γkk+1. Graphically, these values are located at the intersection between the grid, the green area and the surface inside the dashed lines; they are represented in blue and red, respectively. For small widths σ < Δ, it is clear in Fig. 5c that the blue and red integration areas tighten around the positions (xk, xk) and (xk+1/2, xk+1/2), which results in \({{{\Gamma }}}_{kk} \sim | t({x}_{k}){| }^{4}\) and \({{{\Gamma }}}_{kk+1} \sim | t({x}_{k+1/2}){| }^{4}\). More formally, one can also apply a change of variables and perform a first-order Taylor expansion in Eq. 6 to reach the same results:

$${{{\Gamma }}}_{kk}\approx | t({x}_{k}){| }^{4}{S}_{0}$$
(7)
$${{{\Gamma }}}_{kk+1}\approx | t({x}_{k+1/2}){| }^{4}{S}_{1}$$
(8)

where \({S}_{0}\approx \sigma /2[1+2\sqrt{2}({{\Delta }}-\sigma /\sqrt{2})]\) and \({S}_{1}\approx {\sigma }^{2}/2\). Full calculations are provided in section 2.5 of the supplementary document. Note also that the difference between the integration areas S0 ≠ S1 makes the normalisation step after calculating the JPD projection necessary (see Methods section Normalisation).