1 Introduction

For space targets, it is important to identify key components, such as the cabin and solar panels, especially in the event of abnormal behavior. Since inverse synthetic aperture radar (ISAR) provides high-quality two-dimensional (2D) images of space targets [1], it becomes possible to analyze the characteristics of individual components. In space-security-related reconnaissance and surveillance, however, several issues arise. For example, the image projection plane (IPP) is not known a priori, and a 2D image is usually sensitive to the relative target motion. One solution to these problems is to form a three-dimensional (3D) ISAR image. Compared with a 2D image, a 3D image contains the geometric structure of the target and thus provides a robust characterization of its vital components, so reconstructing 3D images of space targets is of great military significance. In recent years, several techniques have been exploited to form 3D ISAR images; previous attempts can be roughly categorized into three types. The first class of methods is a direct extension of the 2D ISAR concept [2, 3]: a transmitter illuminates the target through a 2D angular aperture in both azimuth and elevation while emitting wideband waveforms, and a 3D Fourier transform (3D-FT) is applied to obtain the 3D ISAR image. However, such a 3D-FT-based method requires densely sampled radar data in three dimensions, which significantly increases the computation and storage cost. The second type of approach is based on interferometric ISAR (InISAR) [4–7], which uses the spatial information provided by multiple antennas. Such an approach has the advantage of not requiring prior knowledge of the target motion; the height information is estimated from the phase differences between the corresponding ISAR images.
However, the InISAR technique requires specific antenna arrays, which makes it difficult to apply to space targets. The third category reconstructs the target 3D geometry via factorization analysis [8–11]. The factorization method builds a measurement matrix from the ranges and cross ranges of scattering centers tracked in multiple images; the target 3D geometry is then reconstructed from a singular value decomposition of this matrix. However, this method needs cross-range scaling to obtain the cross ranges of the scattering centers; otherwise, the reconstructed 3D geometry cannot provide the true size of the target. In this paper, a new algorithm is proposed for 3D reconstruction of space targets with multistatic ISAR systems. Firstly, the observation angles of each sensor are estimated using kinematic formulas and coordinate system transformations, so that the projection matrix from the target 3D geometry to the trajectories of the scattering centers can be built. Secondly, the ranges and Doppler frequencies of the scattering centers are extracted to build the trajectory matrix. Finally, the projection equations from the target 3D geometry to the trajectory matrix of the scattering centers are solved to reconstruct the target 3D geometry. Compared with the factorization method, the proposed algorithm uses only the ranges and Doppler frequencies of the scattering centers without a cross-range scaling constraint, which can be regarded as an advantage. However, because space targets are viewed by multistatic ISAR systems, the positions of the scattering centers may vary widely across ISAR images. To tackle this problem, a new method based on projective transformation and epipolar geometry [12] is proposed to associate scattering centers between different images. In this method, an optimization cost function is developed by minimizing the assignment cost between two scattering center sets.
To solve the resulting 0–1 linear assignment problem, the Jonker-Volgenant algorithm is used to build a one-to-one correspondence between the two scattering center sets with the lowest cost.

The remainder of this paper is organized as follows. In Section 2, the imaging geometry and signal model are introduced. The proposed 3D reconstruction algorithm is presented in Section 3. In Section 4, numerical examples illustrating the efficiency of the proposed method are given. Finally, the conclusions are drawn in Section 5.

2 Geometry and signal model

In this paper, the space target is assumed to use a three-axis stabilized attitude control system, the most advanced operating mode, in which the target moves steadily relative to the radar. The orbital elements are utilized to calculate the IPP under this steady-motion model. Because a space target may also undergo complicated maneuvers that combine translational and rotational motions, we use the motion compensation method in [13] to eliminate the relative translational motion of the target. The motion compensation consists of two steps. Firstly, an adaptive joint time-frequency technique is used to parametrize the signal with basis functions. Secondly, reference points on the target are selected by a search-and-projection procedure in the time-frequency plane, and the corresponding motion parameters are obtained from the basis functions. Based on these motion parameters, the motion error can be eliminated by multiplying by a phase correction vector, interpolating to a uniform azimuth scale, and polar-reformatting the originally collected data. The target can then be transformed into a turntable model [14], in which it rotates around its center of gravity.

Multistatic ISAR systems are shown in Fig. 1. A body reference coordinate system is centered at the target's center of gravity. The orange solid line represents the line of sight (LOS), which is determined by the azimuth and elevation angles βn and φn, n=1,2,⋯,N, where N is the number of sensors. In the far-field region, each radar sensor transmits a linear frequency-modulated signal

$$ s\left(\hat{t} \right) = A\exp \left\{ {j2\pi \left({{f_{c}}\hat{t} + \frac{1}{2}\mu {\hat{t}^{2}}} \right)} \right\}, \left| \hat{t} \right| < \frac{1}{2}{T_{p}} $$
(1)
Fig. 1

Observation geometry with multistatic ISAR systems

where A denotes amplitude, fc denotes carrier frequency, μ is the chirp rate, Tp is the time length of the chirp pulse, and \(\hat {t}\) is the fast time. After dechirp and pulse compression, the returned signal of the kth scattering center can be given by

$$\begin{array}{@{}rcl@{}} {s_{k}}\left(\hat{f},{t_{m}} \right) = {\delta_{k}}{T_{p}}\operatorname{sinc} \left\{ {{T_{p}}\left(\hat{f} + \frac{{2\mu }}{c}{r_{k}} \right)} \right\}\\ \cdot\exp \left({ - j\frac{{4\pi {f_{c}}}}{c}{r_{k}}} \right), k = 1,2, \cdots,K \end{array} $$
(2)

where tm denotes slow time, c is the velocity of light, δk is the backscatter coefficient, rk denotes the range between the kth scattering center and target rotational center, and K is the number of scattering centers. As shown in Fig. 1, the range rk, n can be calculated by

$$ {r_{k,n}} = {x_{k}}\cos {\varphi_{n}}\cos {\beta_{n}} + {y_{k}}\cos {\varphi_{n}}\sin {\beta_{n}} + {z_{k}}\sin {\varphi_{n}} $$
(3)

where (xk,yk,zk) is the 3D location of the kth scattering center. The time derivative of rk, n is now given by

$$\begin{array}{@{}rcl@{}} {f_{k,n}} &=& - \left({{{\dot \varphi }_{n}}\cos {\beta_{n}}\sin {\varphi_{n}} + {{\dot \beta }_{n}}\sin {\beta_{n}}\cos {\varphi_{n}}} \right)\frac{{2{x_{k}}}}{\lambda }\\ &+& \left({{{\dot \beta }_{n}}\cos {\beta_{n}}\cos {\varphi_{n}} - {{\dot \varphi }_{n}}\sin {\beta_{n}}\sin {\varphi_{n}}} \right)\frac{{2{y_{k}}}}{\lambda }\\ &+& {{\dot \varphi }_{n}}\cos {\varphi_{n}}\frac{{2{z_{k}}}}{\lambda } \end{array} $$
(4)

where fk, n is the Doppler frequency of the kth scattering center, λ denotes the wavelength, and \(\left ({{{\dot \varphi }_{n}},{{\dot \beta }_{n}}} \right)\) is the time derivative of (φn,βn). ISAR images formed by the range-Doppler method directly offer the range rk, n and the Doppler frequency fk, n. Therefore, (3) and (4) provide the projection from the target 3D geometry to the ISAR image without a cross-range scaling constraint.
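As a numerical illustration of this projection, the sketch below evaluates (3) and (4) for a single scatterer. The function name and argument order are illustrative, not part of the paper.

```python
import numpy as np

def project_scatterer(p, beta, phi, beta_dot, phi_dot, wavelength):
    """Project a 3D scatterer p = (x, y, z) onto the (range, Doppler)
    plane of one sensor, following Eqs. (3) and (4).
    Angles in radians, angular rates in rad/s."""
    x, y, z = p
    # Eq. (3): range along the line of sight
    r = (x * np.cos(phi) * np.cos(beta)
         + y * np.cos(phi) * np.sin(beta)
         + z * np.sin(phi))
    # Eq. (4): Doppler frequency, i.e. the time derivative of r scaled by 2/lambda
    f = (2.0 / wavelength) * (
        -(phi_dot * np.cos(beta) * np.sin(phi)
          + beta_dot * np.sin(beta) * np.cos(phi)) * x
        + (beta_dot * np.cos(beta) * np.cos(phi)
           - phi_dot * np.sin(beta) * np.sin(phi)) * y
        + phi_dot * np.cos(phi) * z)
    return r, f
```

For example, with zero elevation and azimuth and a pure azimuth rotation rate, the range reduces to x and the Doppler to (2/λ)·β̇·y, as expected from a planar turntable model.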

3 3D reconstruction using 2D ISAR images

Based on the signal model in Section 2, a 3D reconstruction algorithm is proposed for space targets with multistatic ISAR systems. The algorithm uses the projection equations between the target 3D geometry and the ISAR images provided by multiple, properly spaced sensors. To express this projection from the target 3D geometry to the ISAR images mathematically, the transformation in matrix form is

$$\begin{array}{@{}rcl@{}} \begin{array}{l} \left[ {\begin{array}{c} {{\mathbf{R}_{n}}}\\ {{\mathbf{F}_{n}}} \end{array}} \right]{\mathbf{p}_{k}} = \left[ {\begin{array}{c} {{r_{k,n}}}\\ {{f_{k,n}}} \end{array}} \right]\\ {\mathbf{R}_{n}} = {\left[ {\begin{array}{c} {\cos {\varphi_{n}}\cos {\beta_{n}}}\\ {\cos {\varphi_{n}}\sin {\beta_{n}}}\\ {\sin {\varphi_{n}}} \end{array}} \right]^{\mathrm{T}}}\\ {\mathbf{F}_{n}} = \frac{2}{\lambda }\!{\left[\! {\begin{array}{c} { - {{\dot \varphi }_{n}}\cos {\beta_{n}}\sin {\varphi_{n}} \!- \!{{\dot \beta }_{n}}\sin {\beta_{n}}\cos {\varphi_{n}}}\\ {{{\dot \beta }_{n}}\cos {\beta_{n}}\cos {\varphi_{n}} \!- \!{{\dot \varphi }_{n}}\sin {\beta_{n}}\sin {\varphi_{n}}}\\ {{{\dot \varphi }_{n}}\cos {\varphi_{n}}} \end{array}} \right]^{\mathrm{T}}} \end{array} \end{array} $$
(5)

where pk(=[xk,yk,zk]T) is the 3D-reconstructed position of the kth scattering center and Dn(=[Rn,Fn]T) is the projection matrix of the nth sensor. Under multistatic ISAR systems, a set of equations can be expressed as follows:

$$ \left[ {\begin{array}{c} {{\mathbf{D}_{1}}}\\ \vdots \\ {{\mathbf{D}_{N}}} \end{array}} \right]{\mathbf{p}_{k}} = \left[ {\begin{array}{c} {{r_{k,1}}}\\ {{f_{k,1}}}\\ \vdots \\ {{r_{k,N}}}\\ {{f_{k,N}}} \end{array}} \right]. $$
(6)

Then the over-determined system of equations is solved in the least-squares sense.

$$\begin{array}{@{}rcl@{}} {\mathbf{p}_{k}} &= &{\left({{\mathbf{D}^{\mathrm{T}}}\mathbf{D}} \right)^{- 1}}{\mathbf{D}^{\mathrm{T}}}\mathbf{I}\\ \mathbf{D} &=& \left[ {\begin{array}{c} {{\mathbf{D}_{1}}}\\ \vdots \\ {{\mathbf{D}_{N}}} \end{array}} \right], \mathbf{I} = \left[ {\begin{array}{c} {{r_{k,1}}}\\ {{f_{k,1}}}\\ \vdots \\ {{r_{k,N}}}\\ {{f_{k,N}}} \end{array}} \right]. \end{array} $$
(7)

In (7), the reconstructed position pk of a scattering center is calculated from the composite projection matrix D and the trajectory matrix I. Note that the proposed algorithm does not need cross-range scaling: the trajectory matrix of the scattering centers is obtained directly from the ISAR images. The process of obtaining D and I is outlined in Fig. 2. Firstly, the observation angles of each sensor relative to the space target are estimated using kinematic formulas and coordinate system transformations; the composite projection matrix D is then built from these observation angles (azimuth and elevation). Secondly, we assume that the scattering centers are sufficiently separated that each peak in the ISAR image corresponds to a single scattering center. To extract the scattering centers, a watershed algorithm is adopted to segment the ISAR image into high-energy regions [15], and the range and Doppler frequency of each scattering center are taken from the maximum of each region. Because the projected positions of the scattering centers may vary widely across images under multistatic ISAR systems, it is necessary to associate the scattering centers between different ISAR images. In this paper, an association cost function based on projective transformation and epipolar geometry is developed. As the cost function is a 0–1 linear assignment problem, the Jonker-Volgenant algorithm is used to build a one-to-one correspondence between the two scattering center sets. Once the scattering centers from different images are associated, the ranges and Doppler frequencies of the same scattering centers in the different images are used to build the matrix I. The projection matrix Dn is discussed in Section 3.1 and scattering center association is analyzed in Section 3.2.
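The core reconstruction step of (5)–(7) can be sketched as follows, assuming the observation angles and their rates are already estimated; `projection_matrix` and `reconstruct_position` are hypothetical helper names, not from the paper.

```python
import numpy as np

def projection_matrix(beta, phi, beta_dot, phi_dot, wavelength):
    """Build the 2x3 projection matrix D_n = [R_n; F_n] of Eq. (5)."""
    R = np.array([np.cos(phi) * np.cos(beta),
                  np.cos(phi) * np.sin(beta),
                  np.sin(phi)])
    F = (2.0 / wavelength) * np.array([
        -phi_dot * np.cos(beta) * np.sin(phi)
        - beta_dot * np.sin(beta) * np.cos(phi),
        beta_dot * np.cos(beta) * np.cos(phi)
        - phi_dot * np.sin(beta) * np.sin(phi),
        phi_dot * np.cos(phi)])
    return np.vstack([R, F])

def reconstruct_position(D_list, rf_list):
    """Stack Eq. (6) and solve it in the least-squares sense of Eq. (7)."""
    D = np.vstack(D_list)            # (2N, 3) composite projection matrix
    I = np.concatenate(rf_list)      # (2N,)  trajectory vector of one scatterer
    # With N >= 2 sensors the system is over-determined; lstsq returns
    # the minimum-mean-squared-error solution (D^T D)^{-1} D^T I.
    p, *_ = np.linalg.lstsq(D, I, rcond=None)
    return p
```

In a noise-free test, projecting a known 3D point through two sensors and feeding the resulting (r, f) pairs back into `reconstruct_position` recovers the point.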

Fig. 2

The block diagram of the algorithm

3.1 Projection matrix

For space targets on a steady trajectory, we can use the orbital elements to establish the projection matrix. As shown in Fig. 3, a satellite orbits the Earth in its regular orbit, and we assume that sensors on the Earth's surface receive the returned signal during the observation time. The processing steps are as follows:

Fig. 3

Satellite orbital model

(1) Transform the location of the sensor from the Earth-centered, Earth-fixed (ECEF) reference system to the Earth-centered inertial (ECI) reference system, as shown in Fig. 3. By coordinate system transformation, the location of the sensor is given by

$$ {\bar{\mathbf{r}}_{ECI}} = {\mathbf{R}_{{Z_{I}}}}\left({{\alpha_{G}}} \right){\bar{\mathbf{r}}_{ECEF}} $$
(8)

where \({\mathbf {R}_{{Z_{I}}}}\) is the rotation matrix about the ZI-axis [16], αG is the Greenwich hour angle, and \({\bar {\mathbf {r}}_{ECEF}}\) is the location of the sensor in the ECEF reference system, defined by its longitude and latitude.

(2) Transform the location of the sensor from the ECI reference system to the orbit plane (O,Xa,Ya,Za) reference system, which is used to describe the motion of the satellite. The Xa-axis points along the flight direction of the satellite, the Za-axis points at the sub-satellite point, and the Ya-axis is normal to the orbit plane. The transformation consists of two steps: a coordinate rotation and a coordinate translation. Firstly, a temporary coordinate system, centered at the Earth's core and parallel to the orbit reference system, is built by rotating the ECI reference system

$$ {\bar{\mathbf{r'}}_{a}} = {\mathbf{R}_{{Z_{a}}}}\left(\mu \right){\mathbf{R}_{{X_{a}}}}\left(i \right){\mathbf{R}_{{Z_{a}}}}\left(\Omega \right){\bar{\mathbf{r}}_{ECI}} $$
(9)

where \({\mathbf {R}_{{Z_{a}}}}\) is the rotation matrix about the Za-axis, \({\mathbf {R}_{{X_{a}}}}\) is the rotation matrix about the Xa-axis, μ is the argument of perigee (not to be confused with the chirp rate in (1)), i is the orbit inclination, and Ω is the right ascension of the ascending node. Secondly, in the temporary coordinate system, the location of the satellite is given by Kepler's laws and the two-body kinematics equations

$$ {\bar{\mathbf{r}}_{s}} = {\left[ {\begin{array}{ccc} {\frac{{\rho \cos \gamma }}{{1 + e\cos \gamma }}}&{\frac{{\rho \sin \gamma }}{{1 + e\cos \gamma }}}&0 \end{array}} \right]^{\mathrm{T}}} $$
(10)

where ρ denotes the semi-latus rectum, e the eccentricity, and γ the true anomaly [17]. The location of the sensor in the orbit (O,Xa,Ya,Za) reference system is then given by translating the origin of the temporary coordinate system to the center of the satellite

$$ {\bar{\mathbf{r}}_{a}} = \mathbf{B}{\bar{\mathbf{r'}}_{a}} + {\left[ {\begin{array}{ccc} 0&0&{ - \left\| {{\bar{\mathbf{r}}_{s}}} \right\|} \end{array}} \right]^{\mathrm{T}}} $$
(11)

where \(\mathbf {B} = \left [ {\begin {array}{ccc} 0&1&0\\ 0&0&{ - 1}\\ { - 1}&0&0 \end {array}} \right ]\) is used to adjust the coordinate axis.
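Steps (8), (9), and (11) can be sketched as a chain of rotations followed by the axis adjustment B and the translation to the satellite center. The rotation sign conventions and function names below are assumptions of this sketch; the actual conventions follow [16].

```python
import numpy as np

def rot_z(a):
    """Rotation matrix about the Z axis by angle a (radians).
    Sign convention is an assumption of this sketch."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    """Rotation matrix about the X axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def sensor_in_orbit_frame(r_ecef, alpha_G, Omega, inc, mu_arg, r_sat_norm):
    """Chain Eqs. (8), (9), and (11): ECEF -> ECI -> temporary
    orbit-parallel frame -> orbit frame centred on the satellite."""
    r_eci = rot_z(alpha_G) @ r_ecef                            # Eq. (8)
    r_tmp = rot_z(mu_arg) @ rot_x(inc) @ rot_z(Omega) @ r_eci  # Eq. (9)
    B = np.array([[0, 1, 0], [0, 0, -1], [-1, 0, 0]], dtype=float)
    return B @ r_tmp + np.array([0.0, 0.0, -r_sat_norm])       # Eq. (11)
```

With all angles zero, the result reduces to the B-permuted ECEF vector shifted by the satellite radius, which makes the role of each step easy to check.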

(3) Transform the location of the sensor from the orbit (O,Xa,Ya,Za) reference system to the body (O,Xn,Yn,Zn) reference system. In the body reference system, the location of the sensor is given by

$$ {\bar{\mathbf{r}}_{n}} = {\mathbf{R}_{{X_{n}}}}\left({{\psi_{s}}} \right){\mathbf{R}_{{Y_{n}}}}\left({{\gamma_{s}}} \right){\mathbf{R}_{{Z_{n}}}}\left({{\phi_{s}}} \right){\bar{\mathbf{r}}_{a}} $$
(12)

where \({\bar {\mathbf {r}}_{n}}\left ({ = {{\left [ {{x_{n}}, {y_{n}}, {z_{n}}} \right ]}^{\mathrm {T}}}} \right)\) is the coordinate value of the sensor in the body reference system; \({\mathbf {R}_{{X_{n}}}}\), \({\mathbf {R}_{{Y_{n}}}}\), and \({\mathbf {R}_{{Z_{n}}}}\) are the rotation matrices about the Xn, Yn, and Zn axes; and (ψs,γs,ϕs) are the roll, pitch, and yaw angles of the satellite. When ψs=0, γs=0, and ϕs=0, the body reference system coincides with the orbit reference system. We use the method in [18] to estimate the roll, pitch, and yaw angles. The basic idea is to exploit the phase history of the strongest scatterers in different images. Firstly, the brightest spots in the different images, corresponding to the same scatterer on the target, are associated using a simple nearest-neighbor method. Secondly, since the rotation vector is embedded in the Doppler frequency of the scatterer, a Doppler-matching-based estimation scheme is used to recover the Doppler frequency of the scatterer; the yaw, pitch, and roll rotation motions are then estimated by matching the Doppler frequency. Finally, the elevation angle φn and azimuth angle βn of the nth sensor relative to the satellite are obtained by

$$\begin{array}{*{20}l} {\varphi_{n}} = &\arcsin \left({\frac{{{z_{n}}}}{{\left\| {{\bar{r}_{n}}} \right\|}}} \right)\\ {\beta_{n}} =& \arccos \left({\frac{{{x_{n}}}}{{\left\| {{\bar{r}_{n}}} \right\|\cos {\varphi_{n}}}}} \right). \end{array} $$
(13)

Generally, ISAR targets are non-cooperative with unknown motion. Space targets, however, usually move steadily, so their orbital elements can be used to estimate the observation angles of each sensor. From the elevation and azimuth angles, the projection matrix can then be constructed.
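A minimal sketch of (13), assuming the sensor coordinates in the body frame are already known; the function name is illustrative.

```python
import numpy as np

def observation_angles(r_body):
    """Eq. (13): elevation and azimuth of a sensor from its coordinates
    (x_n, y_n, z_n) in the target body reference system."""
    x, y, z = r_body
    rho = np.linalg.norm(r_body)
    phi = np.arcsin(z / rho)                   # elevation angle
    beta = np.arccos(x / (rho * np.cos(phi)))  # azimuth angle
    return phi, beta
```

Note that arccos alone cannot distinguish positive from negative y_n; in an implementation, arctan2(y_n, x_n) would recover the azimuth over the full range.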

3.2 Scattering center association

Because space targets are viewed by multistatic ISAR systems, their scattering centers are observed from different orientations, and the radar data are in general not associated: the projected positions of a scattering center may vary widely across ISAR images. To tackle the problem of associating scattering centers between different images, we propose a new association method based on projective transformation and epipolar geometry. Scattering center association can be considered an assignment procedure that assigns each unassociated scattering center to an associated one. To arrive at the minimum assignment cost, we establish a cost function between the mth and nth images as follows:

$$\begin{array}{*{20}l} \min &\sum\limits_{p = 1}^{P} {\sum\limits_{q = 1}^{Q} {{k_{m,n}}\left({p,q} \right)\left({{g_{m,n}}\left({p,q} \right) + {e_{m,n}}\left({p,q} \right)} \right)}} \\ {\mathrm{s}}{\mathrm{.t}}{\mathrm{.}}&\sum\limits_{p = 1}^{P} {{k_{m,n}}\left({p,q} \right)} = 1\\ &\sum\limits_{q = 1}^{Q} {{k_{m,n}}\left({p,q} \right)} = 1\\ &{k_{m,n}}\left({p,q} \right) \in \left\{ {0,1} \right\} \end{array} $$
(14)

where P is the number of scattering centers in the mth image and Q is the number of scattering centers in the nth image. We assume the mth image is obtained by the mth sensor and the nth image by the nth sensor. km, n(p, q) is the control variable, called the association matrix, between the mth and nth images. When the pth scattering center in the mth image is associated with the qth scattering center in the nth image, km, n(p, q) = 1; otherwise, km, n(p, q) = 0. The variables gm, n(p, q) and em, n(p, q) are the geometry coefficient and the error coefficient, analyzed below.

3.2.1 Geometry coefficient g m, n(p, q)

Epipolar geometry describes the mapping relationship between two images: the 3D structure corresponding to a scattering center in one image must project onto a line in the other image. Therefore, the possible location of the corresponding scattering center in the other imaging plane is restricted. The geometry coefficient gm, n(p, q) is the Euclidean distance between the projected epipolar segment of the pth scattering center in the nth image and the position of the qth scattering center. Its mathematical expression is derived as follows.

Firstly, the 3D position corresponding to the pth scattering center can be obtained by

$$\begin{array}{@{}rcl@{}} \begin{array}{c} \left[ {\begin{array}{c} {{{\tilde{x}}_{p}}\left(\tilde{t} \right)}\\ {{\tilde{y}_{p}}\left(\tilde{t} \right)}\\ {{\tilde{z}_{p}}\left(\tilde{t} \right)} \end{array}} \right] = \left({{\mathbf{R}_{m}} \times {\mathbf{F}_{m}}} \right)\tilde{t} + {\left[ {\begin{array}{c} {{\mathbf{R}_{m}}}\\ {{\mathbf{F}_{m}}}\\ {{\mathbf{R}_{m}} \times {\mathbf{F}_{m}}} \end{array}} \right]^{- 1}}\\ \cdot \left[ {\begin{array}{c} {{r_{p,m}}}\\ {{f_{p,m}}}\\ 0 \end{array}} \right], \tilde{t} \in \left[ {{\tilde{t}_{a}},{\tilde{t}_{b}}} \right] \end{array} \end{array} $$
(15)

where \(\left [ {{\tilde {x}_{p}}\left (\tilde {t} \right),{\tilde {y}_{p}}\left (\tilde {t} \right),{\tilde {z}_{p}}\left (\tilde {t} \right)} \right ]\) is the possible position of the 3D structure, (rp, m,fp, m) are the range and Doppler frequency of the pth scattering center in the mth image, and "×" denotes the cross product. The extent of the 3D structure should be limited to a certain coordinate range, so \(\tilde {t}\) ranges from \(\tilde {t}_{a}\) to \(\tilde {t}_{b}\), which are user-defined values. The projection of this 3D structure onto the nth image is the epipolar line segment \(\vec l_{n}^{p,m}\)

$$ \vec l_{n}^{p,m}:\left[ {\begin{array}{c} {{\tilde{r}_{p,n}}\left(\tilde{t} \right)}\\ {{\tilde{f}_{p,n}}\left(\tilde{t} \right)} \end{array}} \right] = {\mathbf{D}_{n}}\left[ {\begin{array}{c} {{\tilde{x}_{p}}\left(\tilde{t} \right)}\\ {{\tilde{y}_{p}}\left(\tilde{t} \right)}\\ {{\tilde{z}_{p}}\left(\tilde{t} \right)} \end{array}} \right] $$
(16)

where \(\left ({{\tilde {r}_{p,n}}\left (\tilde {t} \right),{\tilde {f}_{p,n}}\left (\tilde {t} \right)} \right)\) are the projected range and Doppler frequency of the pth scattering center in the nth image. When the pth scattering center is associated with the qth one, the qth scattering center should lie on the line segment \(\vec l_{n}^{p,m}\). In practice, owing to deviations caused by noise and other errors, the qth scattering center may not lie exactly on \(\vec l_{n}^{p,m}\), but near it. Finally, the geometry coefficient gm, n(p, q) is given by

$$ {g_{m,n}}\left({p,q} \right) = \min {\left\| {\begin{array}{c} {{\tilde{r}_{p,n}}\left(\tilde{t} \right) - {r_{q,n}}}\\ {\frac{{\lambda {T_{p}}}}{2}\left({{\tilde{f}_{p,n}}\left(\tilde{t} \right) - {f_{q,n}}} \right)} \end{array}} \right\|_{2}} $$
(17)

where min(∙) denotes minimization over \(\tilde {t}\) and \(\frac {{\lambda {T_{p}}}}{2}\) converts the Doppler frequency to the same units as the range. In (17), the geometry coefficient gm, n(p, q) measures the difference between the pth and qth scattering centers in the same projection plane: the smaller gm, n(p, q), the more likely the two scattering centers are to correspond. However, for a complicated target there may be more than one scattering center near \(\vec l_{n}^{p,m}\); the error coefficient em, n(p, q) is then used to evaluate the association likelihood of these scattering centers.
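The steps (15)–(17) can be sketched as below. The continuous minimization over \(\tilde t\) is replaced by sampling a grid of t values, which is a simplification of this sketch; function and parameter names are illustrative.

```python
import numpy as np

def geometry_coefficient(D_m, D_n, r_pm, f_pm, rf_qn, lam_Tp_half,
                         t_span=(-10.0, 10.0), n_t=2001):
    """Eqs. (15)-(17): distance from the q-th scatterer in image n to the
    epipolar segment induced by the p-th scatterer of image m."""
    R_m, F_m = D_m[0], D_m[1]
    d = np.cross(R_m, F_m)                 # direction of the 3D line, Eq. (15)
    A = np.vstack([R_m, F_m, d])           # system matrix of Eq. (15)
    p0 = np.linalg.solve(A, np.array([r_pm, f_pm, 0.0]))
    t = np.linspace(*t_span, n_t)
    line3d = p0[:, None] + d[:, None] * t  # candidate 3D positions
    proj = D_n @ line3d                    # Eq. (16): epipolar segment
    diff = proj - np.asarray(rf_qn, float)[:, None]
    diff[1] *= lam_Tp_half                 # unify range/Doppler units, Eq. (17)
    return np.min(np.linalg.norm(diff, axis=0))
```

When the q-th scatterer actually lies on the epipolar segment, the coefficient drops to (numerically) zero, which is the association case the cost function rewards.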

3.2.2 Error coefficient e m, n(p, q)

In (7), the 3D position of a scattering center is reconstructed by the least-squares solution with minimum mean squared error. When the trajectory matrix I contains large errors, for example when the scattering center association is inaccurate, there may be a large reconstruction error between the 3D-reconstructed position and the true 3D position. Hence, the error coefficient em, n(p, q) treats the reconstruction error as an assignment cost. Its mathematical expression is derived as follows.

Firstly, based on the projective transformation, the 3D-reconstructed position obtained by associating the pth scattering center with the qth one is

$$\begin{array}{*{20}l} {{\tilde{\mathbf{p}}}_{p,q}} = & {\left({{\tilde{\mathbf{D}}}^{\mathrm{T}}\tilde{\mathbf{D}}} \right)^{-1}}{\tilde{\mathbf{D}}^{\mathrm{T}}} \tilde{\mathbf{I}} \\ \tilde{\mathbf{D}} = &\left[ {\begin{array}{c} {{\mathbf{D}_{m}}}\\ {{\mathbf{D}_{n}}} \end{array}} \right], \tilde{\mathbf{I}} = \left[ {\begin{array}{c} {{r_{p,m}}}\\ {{f_{p,m}}}\\ {{r_{q,n}}}\\ {{f_{q,n}}} \end{array}} \right] \end{array} $$
(18)

where Dm and Dn denote the projection matrices of the mth and nth sensors, and rp, m and fp, m are the range and Doppler frequency of the pth scattering center in the mth image. Similarly, rq, n and fq, n are the range and Doppler frequency of the qth scattering center in the nth image. The projected ranges and Doppler frequencies of \({\tilde {\mathbf {p}}_{p,q}}\) on the two images are then expressed by

$$\begin{array}{*{20}l} \left[ {\begin{array}{c} {{\tilde{r}_{p,m}}}\\ {{\tilde{f}_{p,m}}} \end{array}} \right] = &{\mathbf{D}_{m}}{{\tilde{\mathbf{p}}}_{p,q}}\\ \left[ {\begin{array}{c} {{\tilde{r}_{q,n}}}\\ {{\tilde{f}_{q,n}}} \end{array}} \right] = &{\mathbf{D}_{n}}{{\tilde{\mathbf{p}}}_{p,q}}. \end{array} $$
(19)

According to (19), the error coefficient em, n(p, q) is given by

$$\begin{array}{@{}rcl@{}} \begin{array}{c} {e_{m,n}}\left({p,q} \right) ={\left\| {\left[ {\begin{array}{*{20}{c}} {{\tilde{r}_{p,m}}}\\ {{\tilde{r}_{q,n}}} \end{array}} \right] - \left[ {\begin{array}{l} {{r_{p,m}}}\\ {{r_{q,n}}} \end{array}} \right]} \right\|_{2}}\\ + \frac{{\lambda {T_{p}}}}{2}{\left\| {\left[ {\begin{array}{l} {{\tilde{f}_{p,m}}}\\ {{\tilde{f}_{q,n}}} \end{array}} \right] - \left[ {\begin{array}{l} {{f_{p,m}}}\\ {{f_{q,n}}} \end{array}} \right]} \right\|_{2}}. \end{array} \end{array} $$
(20)

In (20), based on the projective transformation, the error coefficient em, n(p, q) measures, in the image domain, the difference between the re-projected reconstruction and the measured positions. The smaller em, n(p, q), the higher the association likelihood.
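The steps (18)–(20) can be sketched as follows; the function name is illustrative, and the least-squares solve stands in for the closed-form pseudo-inverse of (18).

```python
import numpy as np

def error_coefficient(D_m, D_n, rf_pm, rf_qn, lam_Tp_half):
    """Eqs. (18)-(20): reconstruct a tentative 3D point from the pair
    (p, q), re-project it onto both images, and measure the
    re-projection error in range and (unit-scaled) Doppler."""
    D = np.vstack([D_m, D_n])                             # (4, 3)
    I = np.concatenate([np.asarray(rf_pm, float),
                        np.asarray(rf_qn, float)])        # (r_pm, f_pm, r_qn, f_qn)
    p_tilde, *_ = np.linalg.lstsq(D, I, rcond=None)       # Eq. (18)
    rp, fp = D_m @ p_tilde                                # Eq. (19), image m
    rq, fq = D_n @ p_tilde                                # Eq. (19), image n
    # Eq. (20): range residual plus unit-scaled Doppler residual
    e_r = np.hypot(rp - rf_pm[0], rq - rf_qn[0])
    e_f = lam_Tp_half * np.hypot(fp - rf_pm[1], fq - rf_qn[1])
    return e_r + e_f
```

For a correctly associated pair the two measurements are consistent with a single 3D point, so the coefficient vanishes; a wrong pairing yields a nonzero re-projection residual.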

The next step is to optimize the cost function, an assignment problem with 0–1 linear programming that is expensive to optimize directly. Enumeration methods can solve this assignment problem, but they require too much computation. In this paper, the linear assignment problem is solved efficiently by the Jonker-Volgenant algorithm [19], a joint optimization process that reduces the computational cost.
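To make the structure of (14) concrete, the sketch below solves a tiny instance by brute-force enumeration over permutations. This is for illustration only: its cost grows factorially with P, which is exactly why a polynomial-time solver such as the Jonker-Volgenant algorithm is preferred in practice. The cost matrix values here are made up.

```python
from itertools import permutations
import numpy as np

def assign_min_cost(C):
    """Solve the 0-1 assignment of Eq. (14) by enumeration: find the
    permutation pi minimising sum_p C[p, pi(p)], where
    C[p, q] = g_{m,n}(p, q) + e_{m,n}(p, q).  O(P!) -- illustration only."""
    P = C.shape[0]
    best_cost, best_pi = np.inf, None
    for pi in permutations(range(P)):
        cost = sum(C[p, pi[p]] for p in range(P))
        if cost < best_cost:
            best_cost, best_pi = cost, pi
    return best_pi, best_cost
```

A production implementation would call a Jonker-Volgenant-style solver (e.g. SciPy's `linear_sum_assignment`) on the same cost matrix instead.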

As analyzed above, the proposed association method simplifies the association problem between two images to one between a line and an image. However, this method needs images with high range and Doppler frequency resolution. To improve performance, an interpolation method [20] can be utilized to enhance the image resolution.

4 Results and discussion

In this section, numerical simulation examples are presented to evaluate the performance of the proposed algorithm. Firstly, the performance of the proposed association method is analyzed in comparison with other association methods. Then, the proposed algorithm is tested with simulated data. The simulation conditions are shown in Table 1.

Table 1 Simulation conditions

Under the conditions shown in Table 1, the range resolution of the image is Δr = c/2B = 0.075 m, and the Doppler frequency resolution of the image is Δfd = fp/H = 0.39 Hz.

4.1 Experiment 1

Here, an experiment compares the performance of the proposed association method, termed the epipolar geometry projective (EGP) method, with the nearest neighbor (NN) method, the robust point matching (RPM) method [21], and the coherent point drift (CPD) method [22]. The NN method is the most common association method, and its main advantages are simplicity and speed. The RPM and CPD methods are association methods for optical images that recover the transformation and assign correspondences between two images. There are 100 scattering centers in every image, each placed at a different separation from the others.

We evaluate the robustness of the proposed association method with respect to the observation angle (θ,ϕ). To eliminate the interaction between the elevation angle θ and the azimuth angle ϕ, the experiment is split into two parts: the sensor locations vary only in azimuth, and then only in elevation. We use the correct association rate to evaluate the performance of these association methods. The correct association rate ωc is calculated by

$$\begin{array}{*{20}l} {\omega_{c}} = \frac{{{N_{c}}}}{{{N_{\text{total}}}}} \end{array} $$
(21)

where Ntotal is the total number of scattering centers in the two images and Nc is the number of correctly associated scattering centers. The correct association rates of the proposed, NN, RPM, and CPD methods are shown in Fig. 4. The NN method performs worst because the positions of the scattering centers may vary widely between images. The proposed method is also more robust to the rotation angle than the RPM and CPD methods. The reason is that the proposed method uses epipolar geometry to build the projection relationship between a scattering center and an epipolar line, simplifying the association problem between two images to one between a line and an image.

Fig. 4

Correct associated rate with rotation angles changing. a Keeping the same elevation angle and changing azimuth angle. b Keeping the same azimuth angle and changing elevation angle

4.2 Experiment 2

We first consider a point-scatterer target consisting of a platform and a solar panel, shown in Fig. 5. Eighty-five scattering centers (red dots) form the target, each with unit reflectivity amplitude. Table 2 shows the positions of the sensors in the ECEF reference system. The reconstruction result, together with the true positions of the scattering centers, is shown in Fig. 5. All the strong scattering centers are reconstructed correctly, agreeing with the true positions.

Fig. 5

Simulated Aura satellite configuration and 3D reconstruction result. a The 3D model of the Aura satellite. b Estimated positions of scattering centers along with the true positions. c Top view. d Side view

Table 2 The positions of radar sensors

The reconstruction performance is measured by the coordinate errors Δxk, Δyk, and Δzk which are given by

$$\begin{array}{*{20}l} \begin{array}{l} \Delta {x_{k}} = \left| {{x_{k}} - {\hat{x}_{k}}} \right|\\ \Delta {y_{k}} = \left| {{y_{k}} - {\hat{y}_{k}}} \right|\\ \Delta {z_{k}} = \left| {{z_{k}} - {\hat{z}_{k}}} \right| \end{array} \end{array} $$
(22)

where (xk,yk,zk) is the true position of the kth scattering center in the body reference system and \(\left ({{\hat {x}_{k}},{\hat {y}_{k}},{\hat {z}_{k}}} \right)\) is the estimated position of the kth scattering center in the body reference system. As shown in Fig. 6, the absolute position errors are smaller than the range resolution, which confirms the efficiency of the proposed algorithm.

Fig. 6

The reconstruction accuracy for the algorithm. a The absolute errors in x coordinates. b The absolute errors in y coordinates. c The absolute errors in z coordinates

To evaluate the performance of the proposed algorithm in the presence of noise, different levels of complex Gaussian noise have been added to the ISAR data to produce different signal-to-noise ratios (SNRs) ranging from 0 to 20 dB. The average errors ex, ey, and ez are given by

$$\begin{array}{*{20}l} \begin{array}{l} {e_{x}} = \frac{1}{{{N_{m}}}}\frac{1}{K}\sum\limits_{i = 1}^{{N_{m}}} {\sum\limits_{k = 1}^{K} {\Delta {x_{k,i}}}} \\ {e_{y}} = \frac{1}{{{N_{m}}}}\frac{1}{K}\sum\limits_{i = 1}^{{N_{m}}} {\sum\limits_{k = 1}^{K} {\Delta {y_{k,i}}}} \\ {e_{z}} = \frac{1}{{{N_{m}}}}\frac{1}{K}\sum\limits_{i = 1}^{{N_{m}}} {\sum\limits_{k = 1}^{K} {\Delta {z_{k,i}}} } \end{array} \end{array} $$
(23)

where Nm is the number of Monte Carlo trials; 500 trials are used in this experiment. Figure 7 depicts the relationship between the SNR and the average errors for various data sizes. The reconstruction accuracy improves as the SNR increases. When the SNR exceeds 10 dB, the average errors remain stable and below the range resolution, because the association accuracy is mainly limited by the range and Doppler frequency resolutions.
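The averaging in Eq. (23) is a plain mean over trials and scattering centers; a sketch with synthetic per-trial errors (the noise model below is illustrative only, not the simulated ISAR data):

```python
import numpy as np

rng = np.random.default_rng(0)
N_m, K = 500, 85  # Monte Carlo trials and scattering centers, as in the text

# Hypothetical per-trial absolute errors dx_{k,i}, dy_{k,i}, dz_{k,i},
# shape (N_m, K), drawn here from an illustrative half-normal model
dx = np.abs(rng.normal(0.0, 0.1, size=(N_m, K)))
dy = np.abs(rng.normal(0.0, 0.1, size=(N_m, K)))
dz = np.abs(rng.normal(0.0, 0.1, size=(N_m, K)))

# Eq. (23): average over both the trials i and the scattering centers k
e_x = dx.mean()
e_y = dy.mean()
e_z = dz.mean()
```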

Fig. 7

Relationship between the SNRs and the average errors

Next, the estimation accuracy of the 3D-reconstructed scattering center position (xk,yk,zk) is assessed against the Cramer-Rao lower bound (CRLB). First, the CRLBs for the target range rk and Doppler frequency fk are given as follows [9]

$$\begin{array}{*{20}l} \text{CRLB}\left\{ {{r_{k}}} \right\} = &{\left({\frac{c}{2}} \right)^{2}}\frac{6}{{4{\pi^{2}}\eta {B^{2}}}}\\ \text{CRLB}\left\{ {{f_{k}}} \right\} = &\frac{6}{{4{\pi^{2}}t_{{\text{pri}}}^{2}\eta M\left({{M^{2}} - 1} \right)}} \end{array} $$
(24)

where η is the signal-to-noise power ratio, tpri is the pulse repetition interval, and M is the total number of pulses used to form the ISAR images. In the following discussion, the vector-form representations of the parameters \({\bar {g}_{n}}\) and \({\bar {p}_{k}}\) are introduced as follows

$$ {\bar{g}_{n}} = \left[ {\begin{array}{c} {{r_{k,n}}}\\ {{f_{k,n}}} \end{array}} \right],{\bar{p}_{k}} = \left[ {\begin{array}{c} {{x_{k}}}\\ {{y_{k}}}\\ {{z_{k}}} \end{array}} \right], k = 1,2, \cdots,K. $$
(25)
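As a numerical check, the closed-form CRLBs in (24) can be evaluated directly; the bandwidth, PRI, and pulse count below are illustrative values, not taken from the simulation setup:

```python
import numpy as np

C_LIGHT = 2.998e8  # propagation speed, m/s

def crlb_range(eta, bandwidth):
    """Eq. (24): CRLB of the range r_k for SNR eta and signal bandwidth B (Hz)."""
    return (C_LIGHT / 2.0) ** 2 * 6.0 / (4.0 * np.pi ** 2 * eta * bandwidth ** 2)

def crlb_doppler(eta, t_pri, num_pulses):
    """Eq. (24): CRLB of the Doppler frequency f_k over M pulses at interval t_pri (s)."""
    M = num_pulses
    return 6.0 / (4.0 * np.pi ** 2 * t_pri ** 2 * eta * M * (M ** 2 - 1))

# Example: 10-dB SNR, 1-GHz bandwidth, 256 pulses at a 1-ms PRI (illustrative)
eta = 10.0 ** (10.0 / 10.0)
range_bound = crlb_range(eta, 1e9)
doppler_bound = crlb_doppler(eta, 1e-3, 256)
```

As expected from (24), both bounds shrink as the SNR grows, and the Doppler bound falls rapidly with the number of pulses M.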

Then the error sensitivity matrix of (xk,yk,zk) with respect to the range and Doppler frequency observed directly from the ISAR image is calculated from (5) as follows

$$\begin{array}{*{20}l} \frac{{d{{\bar{p}}_{k}}}}{{d{{\bar{g}}_{n}}}} = &\left[ {\begin{array}{ll} {\frac{{\partial {x_{k}}}}{{\partial {r_{k,n}}}}}&{\frac{{\partial {x_{k}}}}{{\partial {f_{k,n}}}}}\\ {\frac{{\partial {y_{k}}}}{{\partial {r_{k,n}}}}}&{\frac{{\partial {y_{k}}}}{{\partial {f_{k,n}}}}}\\ {\frac{{\partial {z_{k}}}}{{\partial {r_{k,n}}}}}&{\frac{{\partial {z_{k}}}}{{\partial {f_{k,n}}}}} \end{array}} \right]\\ = &{\left({\mathbf{D}_{n}^{\mathrm{T}}{\mathbf{D}_{n}}} \right)^{- 1}}\mathbf{D}_{n}^{\mathrm{T}}. \end{array} $$
(26)

The CRLB for (xk,yk,zk) can be expressed using the CRLB for rk, n and fk, n as follows:

$$ \text{CRLB}\left\{ {{{\bar{p}}_{k}}} \right\} = \frac{{d{{\bar{p}}_{k}}}}{{d{{\bar{g}}_{n}}}}\text{CRLB}\left\{ {{{\bar{g}}_{n}}} \right\}{\left({\frac{{d{{\bar{p}}_{k}}}}{{d{{\bar{g}}_{n}}}}} \right)^{\mathrm{T}}} $$
(27)

where \(\text {CRLB}\left \{ {{{\bar {g}}_{n}}} \right \}\) is a diagonal matrix whose diagonal elements are the CRLBs given by (24).
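A minimal sketch of the propagation in (26)-(27), assuming a stacked projection matrix D_n whose rows alternate range and Doppler observations (the 4×3 matrix below and both CRLB values are illustrative):

```python
import numpy as np

def crlb_position(D, crlb_r, crlb_f):
    """Propagate the range/Doppler CRLBs to the 3D position via Eqs. (26)-(27).

    D: stacked projection matrix; rows alternate range and Doppler observations.
    """
    # Eq. (26): error sensitivity matrix (D^T D)^{-1} D^T (the pseudoinverse of D)
    S = np.linalg.inv(D.T @ D) @ D.T
    # CRLB{g} is diagonal, holding the range/Doppler CRLBs of (24)
    crlb_g = np.diag([crlb_r, crlb_f] * (D.shape[0] // 2))
    # Eq. (27): 3x3 CRLB matrix for (x_k, y_k, z_k)
    return S @ crlb_g @ S.T

# Illustrative stacked projection matrix for two sensors (rank 3)
D = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, 0.1],
              [0.5, 0.5, 1.0],
              [0.1, 0.9, 0.3]])
C = crlb_position(D, crlb_r=1e-2, crlb_f=1e-3)
```

The result is a symmetric positive semidefinite 3×3 matrix; its diagonal entries bound the variances of the three coordinate estimates.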

An experiment on the estimation accuracy of the 3D-reconstructed scattering center position is provided, assuming the system parameters in Table 1. The CRLBs given by (27) are shown in Fig. 8, where the horizontal axis represents the SNR and the vertical axis represents the CRLB. When the SNR exceeds 20 dB, the estimation accuracy is close to the CRLBs. At low SNRs, the estimation performance is also influenced by the over-determined projection equations.

Fig. 8

CRLBs and estimated variance for the parameters (x, y, z)

4.3 Experiment 3

In this experiment, we use geometrical theory of diffraction (GTD) data to evaluate the effectiveness of the proposed algorithm. Figure 9 shows the 3D Hubble Space Telescope (HST) model. Generally, the strong scattering centers of a target arise from edges, corners, and tips; they therefore represent the physical structure and the components of interest. As shown in Fig. 9, the strong scattering centers are mainly distributed along the edges of the solar panels. Since the solar panels are the components of interest on a satellite, this experiment estimates their physical structure.

Fig. 9

Simulated HST configuration and the simulated ISAR images observed by the sensor. a The 3D model of HST. b The simulated ISAR image by the sensor

Figure 10 shows the 3D reconstruction results along with the true positions of the scattering centers. The reconstructed scattering centers are grouped into four straight lines along the edges of the solar panels. The estimated width of the solar panels, calculated as the average distance between the four straight lines, is 3.65 m (actual width 3.8 m). The estimated length, calculated as the average length of the four straight lines, is 16.2 m (actual length 16.6 m). In addition, several reconstructed scattering centers lie away from the solar panels because the association of these scattering centers is inaccurate.
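The panel dimensions quoted above can be obtained from the reconstructed edge segments; a minimal sketch, assuming each of the four edges is summarized by its two endpoints (all coordinate values below are illustrative, not the actual reconstruction output):

```python
import numpy as np

# Hypothetical endpoints of the four reconstructed edge segments (meters);
# segments 0/1 bound one panel, segments 2/3 the other
segs = np.array([
    [[0.0, -1.9, 0.0], [16.2, -1.9, 0.0]],
    [[0.0,  1.8, 0.0], [16.2,  1.8, 0.0]],
    [[0.0, -1.8, 0.1], [16.1, -1.8, 0.1]],
    [[0.0,  1.8, 0.1], [16.3,  1.8, 0.1]],
])

# Estimated length: average length of the four segments
lengths = np.linalg.norm(segs[:, 1] - segs[:, 0], axis=1)
est_length = lengths.mean()

# Estimated width: average distance between the paired edges of each panel,
# measured here between segment midpoints
width_1 = np.linalg.norm(segs[0].mean(axis=0) - segs[1].mean(axis=0))
width_2 = np.linalg.norm(segs[2].mean(axis=0) - segs[3].mean(axis=0))
est_width = 0.5 * (width_1 + width_2)
```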

Fig. 10

3D reconstruction result using the HST model. a Estimated positions of scattering centers along with the true positions. b Top view. c Front view. d Side view

5 Conclusions

In this paper, a 3D reconstruction algorithm is proposed for space targets with multistatic ISAR systems. The algorithm builds projection equations between the target 3D geometry and the ISAR images without a cross-range scaling constraint. The projection equations contain the projection matrix and the trajectory matrix of the scattering centers: the former is obtained by kinematic formulas and coordinate-system transformations, and the latter consists of the ranges and Doppler frequencies extracted from the ISAR images. Because the positions of scattering centers in different images may vary widely, we also propose a new method based on projective transformation and epipolar geometry to associate scattering centers. Numerical results with simulated data validate the performance of the algorithm in scattering center association and 3D reconstruction.

Future work consists of two parts. First, when a space target operates under other control modes, its intrinsic orbital characteristics may lead to an unsteady trajectory relative to the radar; we will therefore study 3D reconstruction of space targets with such unsteady relative trajectories. Second, when the radar illuminates the target, some of its structures are occluded, which adds to the difficulty of scattering center association; we will therefore improve the association method to handle occlusion.