Transactions of Nanjing University of Aeronautics and Astronautics 2018, Vol. 35 Issue (6): 913-923
Evaluation of Trajectory Error Effects in BP Based Space Target ISAR Imaging
Wang Ling, Wang Jie, Sun Lingling     
Key Laboratory of Radar Imaging and Microwave Photonics, Ministry of Education, College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, P. R. China
Abstract: Space target imaging is important in the development of space technology. Owing to the availability of trajectory information for space targets and the advent of fast parallel processing hardware, the back projection (BP) method has been applied to synthetic aperture radar (SAR) imaging and shows a number of advantages compared with conventional Fourier-domain imaging algorithms. However, practical processing shows that insufficient accuracy of the trajectory information degrades the imaging results. On the other hand, autofocusing algorithms for BP imaging are not well developed, which is a bottleneck for the application of BP imaging. Here, an analysis of the effect of trajectory errors on space target imaging using microlocal analysis is presented. Our analysis provides an explicit quantitative relationship between the trajectory errors of the space target and the positioning errors in the reconstructed images. The explicit form of the positioning errors for some typical trajectory errors is also presented. Numerical simulations demonstrate our theoretical findings. The measured positioning errors obtained from the reconstructed images are consistent with the analytic errors calculated using the derived formulas. Our results will be useful in the development of effective autofocusing methods for BP imaging.
Key words: inverse synthetic aperture radar (ISAR)    imaging    space targets    trajectory error    back projection (BP)    
0 Introduction

Conventional inverse synthetic aperture radar (ISAR) imaging uses the range-Doppler (RD) method, in which a simple fast Fourier transform (FFT) is applied in the cross-range dimension to achieve cross-range resolution. The RD method works under the assumptions of a plane wave-front and a small rotation angle over the coherent processing interval (CPI)[1]. Motion compensation, including range alignment and phase compensation, is necessary before the cross-range FFT to remove the translational motion between the target and the radar. This FFT-based imaging method becomes invalid if the target undergoes a large rotation relative to the radar, accompanied by migration through resolution cells (MTRC)[1-3].

Many studies on ISAR imaging of space targets have been published[4-9]. In space target imaging, the trajectory information of space targets is always available, and the targets mostly undergo steady motion. This enables the polar format algorithm (PFA)[10, 11] and the back-projection (BP)[12-14] imaging method to integrate data over a longer interval than the conventional RD method, and hence obtain higher image resolution. Compared with RD and PFA, the BP imaging method has several advantages: (1) It does not need the motion compensation required by the RD algorithm; (2) it does not rely on the plane wave-front assumption; (3) it is applicable to arbitrary trajectories and imaging geometries; (4) it can be implemented efficiently by exploiting parallel processing and high-speed hardware, e.g., graphic processing units (GPUs). In this paper, we are interested in the image formation of space targets using the BP method with known trajectory information.

Due to the limited accuracy of the trajectory data, errors in the trajectory may lead to mispositioning and smearing of the space target in the reconstructed image. We analyze the effect of trajectory errors on the image reconstruction and use microlocal analysis to obtain explicit expressions for the positioning errors. The results show the relationship between the positioning errors and the trajectory errors, as well as the dependence of the positioning errors on the space target trajectory, the flying velocity of the space target, etc. We specify the positioning errors for some typical types of trajectory errors. Numerical simulations are performed to demonstrate our analysis. Our results can be easily extended to the bistatic ISAR imaging case. The approach for analyzing the positioning errors due to space target trajectory errors was initially presented in our previous conference paper[15].

1 ISAR Imaging Principles

The ISAR imaging from the perspective of BP theory is briefly described in this section.

The received signal of ISAR is given as

$ d\left( {s,t} \right) = \int {{{\rm{e}}^{ - {\rm{j}}\omega \left( {t - \frac{{R\left( {\mathit{\boldsymbol{X}}\left( s \right)} \right)}}{{{c_0}}}} \right)}}A\left( {\mathit{\boldsymbol{X}}\left( s \right),\omega } \right)T\left( {\mathit{\boldsymbol{X}}\left( s \right)} \right){\rm{d}}\omega {\rm{d}}\mathit{\boldsymbol{X}}\left( s \right)} $ (1)

where t denotes the fast time, s the slow time, c0 the electromagnetic wave speed, ω the angular frequency of the transmitted waveform, X(s) the coordinates of the scatterers on the target, and T(X(s)) the target reflectivity. The s dependency of the scatterer position accounts for the position change as the space target flies along its trajectory. Note that the stop-go assumption is used in Eq. (1), which is valid for pulsed transmitted waveforms and small radial motion between the space target and the radar. A(X(s), ω) is a complex amplitude function including the transmitted waveform and the antenna beam patterns. R(X(s)) denotes the total range of the two-way propagation of the electromagnetic wave. For monostatic ISAR, R(X(s)) is given by

$ R\left( {\mathit{\boldsymbol{X}}\left( s \right)} \right) = 2\left| {\mathit{\boldsymbol{X}}\left( s \right) - \mathit{\boldsymbol{p}}} \right| $ (2)

where p denotes the position of radar. Without the loss of generality, we focus on monostatic ISAR imaging in the rest of the discussion.
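As a minimal numerical sketch of the monostatic range model in Eqs. (1), (2) (in Python; the radar position, scatterer position and carrier frequency below are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

# Illustrative monostatic geometry; all values are assumptions for this sketch.
c0 = 3e8                                  # electromagnetic wave speed, m/s
p = np.array([0.0, 0.0, 0.0])             # radar position p
X = np.array([680e3, 10e3, 5e3])          # scatterer position X(s) at one slow time

def two_way_range(X, p):
    """Eq. (2): R(X(s)) = 2|X(s) - p| for monostatic ISAR."""
    return 2.0 * np.linalg.norm(X - p)

R = two_way_range(X, p)
tau = R / c0                              # two-way delay in the phase of Eq. (1)
omega = 2 * np.pi * 10e9                  # assumed X-band angular frequency
phase = -omega * (0.0 - tau)              # phase of exp(-j*omega*(t - R/c0)) at t = 0
```

Under the stop-go assumption, this delay is treated as fixed within each pulse and updated from pulse to pulse.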

Let

$ \mathit{\boldsymbol{X}}\left( s \right) = \mathit{\boldsymbol{\gamma }}\left( s \right) + \mathit{\boldsymbol{X'}} $ (3)

where γ(s) denotes the trajectory of space target and X′ the coordinates of the target scatterers in the local coordinate system embedded on the target, as shown in Fig. 1.

Fig. 1 Illustration of imaging geometry for space target imaging where the radar is stationary on the ground

Fig. 1 presents the imaging geometry for space targets. Ox1x2x3 is the absolute or fixed reference coordinate system. O′x1′x2′x3′ is the local coordinate system embedded on the space target, where the origin O′ is usually chosen to be at the mass center of the target. The coordinate system O′x1′x2′x3′ is translated from the origin of Ox1x2x3 by the space target trajectory γ(s). Note that the rotation of the space target is not considered during the synthetic aperture time.

Substituting Eqs. (2), (3) into Eq. (1), and making the change of variables XX′, we have

$ d\left( {s,t} \right) = \int {{{\rm{e}}^{ - {\rm{j}}\omega \left( {t - \frac{{2\left| {\mathit{\boldsymbol{\gamma }}\left( s \right) + \mathit{\boldsymbol{X'}} - \mathit{\boldsymbol{p}}} \right|}}{{{c_0}}}} \right)}}A\left( {s,\mathit{\boldsymbol{X'}},\omega } \right)T\left( {\mathit{\boldsymbol{X'}}} \right){\rm{d}}\omega {\rm{d}}\mathit{\boldsymbol{X'}}} $ (4)

In the rest of our discussion, X′ will be replaced with X for notational simplicity.

A three-dimensional (3D) image can be reconstructed as long as the data is collected in 3D space, i.e., γ(s) undergoes sufficient variation in R3. We focus on the two-dimensional (2D) image formation. Let zR2 denote the position in the reconstructed 2D image. By using the filtered BP (FBP) method[12], the image can be formed as

$ \hat T\left( \mathit{\boldsymbol{z}} \right) = \int {{{\rm{e}}^{{\rm{j}}\omega \left( {t - \frac{{R\left( {s,\mathit{\boldsymbol{Z}}} \right)}}{{{c_0}}}} \right)}}Q\left( {\mathit{\boldsymbol{z}},s,\omega } \right)d\left( {s,t} \right){\rm{d}}\omega {\rm{d}}s{\rm{d}}t} $ (5)

Here

$ R\left( {s,{\bf{Z}}} \right) = 2\left| {\mathit{\boldsymbol{\gamma }}\left( s \right) + \mathit{\boldsymbol{Z}} - \mathit{\boldsymbol{p}}} \right| $ (6)

where ZR3, Z=(z, 0). The image is assumed to be reconstructed on a zero-height plane, i.e., z3=0.

In Eq. (5), the phase term accounts for the matched filtering. The filter Q can be chosen according to a variety of criteria[16, 17] to compensate the amplitude term in Eq. (5) or to enhance the edges in the images, etc.
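To make Eq. (5) concrete, the following toy Python sketch (not the paper's processor) performs delay-and-sum backprojection with Q = 1, i.e., pure matched filtering, for frequency-domain data of a single point scatterer, on a 1D cut of the image for brevity; the trajectory, frequency band and scatterer position are assumptions for illustration:

```python
import numpy as np

# Toy delay-and-sum version of the FBP formula in Eq. (5), with Q = 1.
# Geometry, frequency band and scatterer location are illustrative assumptions.
c0 = 3e8
p = np.array([0.0, 0.0, 0.0])                         # radar position
omegas = 2 * np.pi * np.linspace(9.8e9, 10.2e9, 64)   # 400 MHz of bandwidth
slow_times = np.linspace(0.0, 1.0, 32)                # slow-time samples s

def gamma(s):
    """Assumed space-target trajectory gamma(s)."""
    return np.array([600e3, 7.0e3 * s, 0.0])

def R_sz(s, z1):
    """Eq. (6): R(s, Z) with Z = (z1, 0, 0)."""
    Z = np.array([z1, 0.0, 0.0])
    return 2.0 * np.linalg.norm(gamma(s) + Z - p)

z1_true = 3.0                                    # scatterer at z = (3 m, 0)
# Frequency-domain data of one unit scatterer: d(s, w) = exp(+j w R(s, X)/c0)
data = np.array([[np.exp(1j * w * R_sz(s, z1_true) / c0) for w in omegas]
                 for s in slow_times])

# Backproject onto a line of candidate pixels z = (z1, 0)
z1_grid = np.linspace(-5.0, 5.0, 41)
image = np.zeros(z1_grid.size, dtype=complex)
for i, z1 in enumerate(z1_grid):
    for k, s in enumerate(slow_times):
        image[i] += np.sum(data[k] * np.exp(-1j * omegas * R_sz(s, z1) / c0))

z1_peak = z1_grid[np.argmax(np.abs(image))]      # lands at/near the true 3 m
```

The coherent sum is maximal at the pixel whose range history matches that of the true scatterer, which is exactly the matched-filtering role of the phase term in Eq. (5).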

Substituting Eq. (4) into Eq. (5), we have

$ \hat T\left( \mathit{\boldsymbol{z}} \right) = \int {{{\rm{e}}^{{\rm{j}}\mathit{\Phi }\left( {\mathit{\boldsymbol{\omega }},\mathit{\boldsymbol{Z}},\mathit{\boldsymbol{X}},s} \right)}}Q\left( {\mathit{\boldsymbol{z}},s,\mathit{\boldsymbol{\omega }}} \right)A\left( {\mathit{\boldsymbol{X}},s,\mathit{\boldsymbol{\omega }}} \right){\rm{d}}\mathit{\boldsymbol{\omega }}{\rm{d}}s} $ (7)

where Φ(ω, Z, X, s)=$\frac{\mathit{\boldsymbol{\omega }}}{{{c_0}}}\left[{R\left( {s, \mathit{\boldsymbol{X}}} \right)-R\left( {s, \mathit{\boldsymbol{Z}}} \right)} \right]$. Note that X=(x, x3) and Z=(z, 0).

The Hörmander-Sato lemma[18-22] tells us that the imaging operator in fact reconstructs a scatterer at location x on the target at position z in the image satisfying the following conditions[12, 19, 20].

$ {\partial _\omega }\mathit{\Phi }\left( {\mathit{\boldsymbol{Z}},\mathit{\boldsymbol{X}},s,\mathit{\boldsymbol{\omega }}} \right) = 0 \Rightarrow R\left( {s,\mathit{\boldsymbol{Z}}} \right) = R\left( {s,\mathit{\boldsymbol{X}}} \right) $ (8)
$ {\partial _s}\mathit{\Phi }\left( {\mathit{\boldsymbol{Z}},\mathit{\boldsymbol{X}},s,\mathit{\boldsymbol{\omega }}} \right) = 0 \Rightarrow {f_{\rm{d}}}\left( {s,\mathit{\boldsymbol{Z}}} \right) = {f_{\rm{d}}}\left( {s,\mathit{\boldsymbol{X}}} \right) $ (9)

Here fd denotes the Doppler frequency which is given by

$ {{f}_{\text{d}}}\left( s,\mathit{\boldsymbol{X}} \right)=\frac{\mathit{\boldsymbol{\omega }}}{{{c}_{0}}}\mathit{\boldsymbol{\dot \gamma }}\left( s \right)\cdot \overset\frown{\mathit{\boldsymbol{\gamma }}\left( s \right)+\mathit{\boldsymbol{X}}-\mathit{\boldsymbol{p}}} $ (10)

where ${\mathit{\boldsymbol{\hat u}}}$ denotes the unit vector of the vector u, and $\overset\frown{\mathit{\boldsymbol{\gamma }}\left( s \right) + \mathit{\boldsymbol{X}} - \mathit{\boldsymbol{p}}}$ the radar look-direction[12, 23]. $\mathit{\boldsymbol{\dot \gamma }}\left( s \right)$ is the first derivative of γ(s) with respect to s, denoting the flying velocity of the space target. Eqs. (8) and (9) show that the reconstructed scatterer at z has the same range and Doppler frequency as the scatterer located at X[12, 19, 20].
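The Doppler quantity of Eq. (10) is straightforward to evaluate numerically. The sketch below (illustrative Python, with an assumed LEO-like geometry and velocity, not the paper's parameters) computes it for two scatterers and shows that an along-track offset changes the Doppler value, which is what separates scatterers in cross-range:

```python
import numpy as np

# Doppler quantity of Eq. (10): f_d = (omega/c0) * gamma_dot . unit(gamma + X - p).
# Geometry and velocity are illustrative assumptions, not the paper's parameters.
c0 = 3e8
omega = 2 * np.pi * 10e9                     # assumed carrier angular frequency
p = np.array([0.0, 0.0, 0.0])                # radar position
gamma_s = np.array([600e3, 100e3, 300e3])    # gamma(s) at one slow time
gamma_dot = np.array([0.0, 7.5e3, 0.0])      # flying velocity, ~7.5 km/s along x2

def doppler(X):
    los = gamma_s + X - p                    # radar-to-scatterer vector
    return (omega / c0) * np.dot(gamma_dot, los / np.linalg.norm(los))

f_center = doppler(np.array([0.0, 0.0, 0.0]))
f_offset = doppler(np.array([0.0, 10.0, 0.0]))   # scatterer 10 m along-track
# f_offset differs from f_center, so the two scatterers satisfy Eq. (9) at
# different image positions even though their ranges are nearly equal
```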

2 Analysis of Positioning Errors Due to Trajectory Errors

In this section, we analyze reconstruction errors in ISAR images due to errors in the trajectory of the space target using microlocal analysis[18, 24, 25].

Let γ(s) denote the ideal or assumed trajectory and γε(s)=γ(s)+εΔγ(s) the true trajectory of the space target, where ε is a small constant. In the ideal case where the target is tracked with sufficient accuracy, i.e., Δγ(s)≈0, the scatterer X on the target is mapped to z0 in the image. We refer to z0 as the correct position of the scatterer in the reconstructed image. Note that the mapping from X to z0 is associated with the projection of 3D coordinates to 2D coordinates.

Let z=z0z be the erroneous reconstructed position due to trajectory error Δγ(s). Using Eqs. (8) and (9), we have[23]

$ \begin{array}{l} R\left( {{\mathit{\boldsymbol{\gamma }}^\varepsilon }\left( s \right),\mathit{\boldsymbol{z}}} \right) = R\left( {\mathit{\boldsymbol{\gamma }}\left( s \right),{\mathit{\boldsymbol{z}}_0}} \right) \Rightarrow \\ R\left( {\mathit{\boldsymbol{\gamma }}\left( s \right) + \varepsilon \Delta \mathit{\boldsymbol{\gamma }}\left( s \right),\mathit{\boldsymbol{z}}} \right) = R\left( {\mathit{\boldsymbol{\gamma }}\left( s \right),\mathit{\boldsymbol{z}} - \Delta \mathit{\boldsymbol{z}}} \right) \end{array} $ (11)
$ \begin{array}{l} {f_{\rm{d}}}\left( {{\mathit{\boldsymbol{\gamma }}^\varepsilon }\left( s \right),\mathit{\boldsymbol{z}}} \right) = {f_{\rm{d}}}\left( {\mathit{\boldsymbol{\gamma }}\left( s \right),{\mathit{\boldsymbol{z}}_0}} \right) \Rightarrow \\ {f_{\rm{d}}}\left( {\mathit{\boldsymbol{\gamma }}\left( s \right) + \varepsilon \Delta \mathit{\boldsymbol{\gamma }}\left( s \right),\mathit{\boldsymbol{z}}} \right) = {f_{\rm{d}}}\left( {\mathit{\boldsymbol{\gamma }}\left( s \right),\mathit{\boldsymbol{z}} - \Delta \mathit{\boldsymbol{z}}} \right) \end{array} $ (12)

By expanding the terms on the left side of Eqs. (11) and (12) in Taylor series around ε=0 and keeping the first-order terms, Eqs. (11) and (12) will respectively become

$ \begin{array}{l} {\nabla _z}R\left( {\mathit{\boldsymbol{\gamma }}\left( s \right),\mathit{\boldsymbol{z}}} \right) \cdot \Delta \mathit{\boldsymbol{z}} = \\ - \varepsilon {\partial _\varepsilon }R\left( {\mathit{\boldsymbol{\gamma }}\left( s \right) + \varepsilon \Delta \mathit{\boldsymbol{\gamma }}\left( s \right),\mathit{\boldsymbol{z}}} \right)\left| {_{\varepsilon = 0}} \right. \end{array} $ (13)
$ \begin{array}{l} {\nabla _z}{f_{\rm{d}}}\left( {\mathit{\boldsymbol{\gamma }}\left( s \right),\mathit{\boldsymbol{z}}} \right) \cdot \Delta \mathit{\boldsymbol{z}} = \\ - \varepsilon {\partial _\varepsilon }{f_{\rm{d}}}\left( {\mathit{\boldsymbol{\gamma }}\left( s \right) + \varepsilon \Delta \mathit{\boldsymbol{\gamma }}\left( s \right),\mathit{\boldsymbol{z}}} \right)\left| {_{\varepsilon = 0}} \right. \end{array} $ (14)

By simplifying Eqs. (13) and (14), we will obtain

$ \Delta \mathit{\boldsymbol{z}}\cdot \overset\frown{\mathit{\Xi }\left( s,\mathit{\boldsymbol{z}} \right)}=\frac{-\varepsilon \Delta \mathit{\boldsymbol{\gamma }}\left( s \right)\cdot \overset\frown{\left( \mathit{\boldsymbol{\gamma }}\left( s \right)+\mathit{\boldsymbol{Z}}-\mathit{\boldsymbol{p}} \right)}}{\left| \mathit{\Xi }\left( s,\mathit{\boldsymbol{z}} \right) \right|} $ (15)
$ \begin{align} &\Delta \mathit{\boldsymbol{z}}\cdot \overset\frown{\mathit{\dot{\Xi }}\left( s,\mathit{\boldsymbol{z}} \right)}=\frac{1}{\left| \mathit{\dot{\Xi }}\left( s,\mathit{\boldsymbol{z}} \right) \right|}\left[ -\varepsilon \Delta \dot{\gamma }\left( s \right)\cdot \right. \\ &\left. \overset\frown{\left( \mathit{\boldsymbol{\gamma }}\left( s \right)+\mathit{\boldsymbol{Z}}-\mathit{\boldsymbol{p}} \right)}-\varepsilon {\mathit{\boldsymbol{\dot \gamma }}}\left( s \right)\cdot \frac{\Delta {{\mathit{\boldsymbol{\gamma }}}^{\bot }}\left( \mathit{\boldsymbol{Z}},s \right)}{\left| \mathit{\boldsymbol{\gamma }}\left( s \right)+\mathit{\boldsymbol{Z}}-\mathit{\boldsymbol{p}} \right|} \right] \\ \end{align} $ (16)

where

$ \mathit{\Xi }\left( s,\mathit{\boldsymbol{z}} \right)={{\mathit{\boldsymbol{D}}}_{z}}\overset\frown{\mathit{\boldsymbol{\gamma }}\left( s \right)+\mathit{\boldsymbol{Z}}-\mathit{\boldsymbol{p}}} $ (17)
$ \mathit{\dot \Xi }\left( {s,\mathit{\boldsymbol{z}}} \right) = {\mathit{\boldsymbol{D}}_z}\frac{{{{\mathit{\boldsymbol{\dot \gamma }}}^ \bot }\left( {\mathit{\boldsymbol{Z}},s} \right)}}{{\left| {\mathit{\boldsymbol{\gamma }}\left( s \right) + \mathit{\boldsymbol{Z}} - \mathit{\boldsymbol{p}}} \right|}} $ (18)

and

$ {\mathit{\boldsymbol{D}}_z} = \left( {\begin{array}{*{20}{c}} 1&0&0\\ 0&1&0 \end{array}} \right) $ (19)

where Dz denotes the projection matrix that projects a 3D vector onto the image plane. Thus, Ξ denotes the projection of the radar look-direction onto the image plane z1Oz2. We refer to $\Delta \mathit{\boldsymbol{z}} \cdot \overset\frown{\mathit{\Xi }\left( {s, \mathit{\boldsymbol{z}}} \right)}$ as the radial positioning error and denote it by Δzr.

In Eq.(18), ${{\mathit{\boldsymbol{\dot \gamma }}}^ \bot }\left( {\mathit{\boldsymbol{Z}}, s} \right)$ denotes the component of the velocity perpendicular to the radar look-direction, which is given by

$ \begin{align} &{{{{\mathit{\boldsymbol{\dot \gamma }}}}}^{\bot }}\left( \mathit{\boldsymbol{Z}},s \right)={\mathit{\boldsymbol{\dot \gamma }}}\left( s \right)-\left[ {\mathit{\boldsymbol{\dot \gamma }}}\left( s \right)\cdot \overset\frown{\left( \mathit{\boldsymbol{\gamma }}\left( s \right)+\mathit{\boldsymbol{Z}}-\mathit{\boldsymbol{p}} \right)} \right]\cdot \\ &\ \ \ \ \ \ \overset\frown{\left( \mathit{\boldsymbol{\gamma }}\left( s \right)+\mathit{\boldsymbol{Z}}-\mathit{\boldsymbol{p}} \right)} \\ \end{align} $ (20)

We refer to the direction of ${{\mathit{\boldsymbol{\dot \gamma }}}^ \bot }\left( {\mathit{\boldsymbol{Z}}, s} \right)$ as the transverse radar look-direction[23]. ${\mathit{\dot \Xi }}$ defined by Eq. (18) denotes the projection of the transverse radar look-direction onto the image plane. Thus, $\Delta \mathit{\boldsymbol{z}} \cdot \overset\frown{\mathit{\dot \Xi }\left( {s, \mathit{\boldsymbol{z}}} \right)}$ is referred to as the transverse positioning error and is denoted by Δzt. Accordingly, Δγ⊥(Z, s) in Eq. (16) denotes the component of the trajectory error perpendicular to the radar look-direction $\overset\frown{\mathit{\boldsymbol{\gamma }}\left( s \right) + \mathit{\boldsymbol{Z}} - \mathit{\boldsymbol{p}}}$. $\Delta \mathit{\boldsymbol{\dot \gamma }}\left( s \right)$ is the derivative of Δγ(s) with respect to s, denoting the velocity error of the space target. The details of the derivations of Eqs. (15) and (16) are presented in Ref.[26].
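As a numeric illustration of Eq. (15) (a sketch only; the geometry and the constant error εΔγ below are assumptions), the radial positioning error induced by a constant trajectory error can be evaluated directly:

```python
import numpy as np

# Evaluate Eq. (15) for a constant trajectory error at one slow time.
# The geometry and the error value are illustrative assumptions.
Dz = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])             # projection matrix of Eq. (19)
p = np.zeros(3)                              # radar position
gamma_s = np.array([600e3, 100e3, 300e3])    # assumed gamma(s)
Z = np.zeros(3)                              # pixel Z = (z, 0) at the origin
eps_dgamma = np.array([2.0, 2.0, 0.0])       # eps * delta-gamma, a 2 m error

los = gamma_s + Z - p
los_hat = los / np.linalg.norm(los)          # unit radar look-direction

Xi = Dz @ los_hat                            # Eq. (17): look-direction projected
                                             # onto the image plane; |Xi| = cos(psi)
radial_traj_err = np.dot(eps_dgamma, los_hat)        # radial error component
dz_radial = -radial_traj_err / np.linalg.norm(Xi)    # Delta-z . Xi-hat, Eq. (15)
```

For this geometry the 2 m error projects almost entirely onto the look-direction, so the radial positioning error is on the order of the trajectory error itself, consistent with finding (1) above.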

From Eqs. (15) and (16), we have the following findings:

(1) The radial positioning error Δzr is determined by the radial component of the trajectory errors. The transverse positioning error Δzt depends on the radial component of the velocity error, the transverse component of the trajectory error, the velocity of the space target and the range between the radar and target. Note that we refer to the radial component as the projection of a vector onto the radar look-direction and the transverse component as the projection of a vector onto the plane perpendicular to the radar look-direction.

(2) The first term in the square bracket of Eq. (16) contributes more to the transverse positioning error than the second term, owing to the large range term in the denominator of the second term. This implies that Δzt is mainly determined by the radial velocity error of the space target.

(3) The dependency of Δzr and Δzt on s explains the smearing in the reconstructed image. In the case of wide-aperture imaging, where the change of the radar look-direction becomes significant during the CPI, and in the case of time-varying trajectory errors, the positioning errors change with time, leading to the defocusing of the targets.

The positioning errors induced by typical trajectory errors including constant, linear, quadratic and sinusoidal trajectory errors are analyzed. The results are listed in Table 1. The angles are defined as follows.

Table 1 Positioning errors due to different types of trajectory errors

ψ:Grazing angle of radar look direction (the angle between $\overset\frown{ {\mathit{\boldsymbol{\gamma }}\left( s \right) + \mathit{\boldsymbol{Z}} - \mathit{\boldsymbol{p}}} }$ and its projection on the image plane).

ψ′: Grazing angle of radar transverse look direction (the angle between ${{\mathit{\boldsymbol{\dot \gamma }}}^ \bot }\left( {\mathit{\boldsymbol{Z}}, s} \right)$ and its projection on the image plane).

θ: The angle between $\mathit{\boldsymbol{\dot \gamma }}\left( s \right)$ and Δγ(s).

φ:Viewing angle (the angle between $\mathit{\boldsymbol{\dot \gamma }}\left( s \right)$ and $\overset\frown{\mathit{\boldsymbol{\gamma }}\left( s \right) + \mathit{\boldsymbol{Z}} - \mathit{\boldsymbol{p}}}$).

Fig. 2 illustrates the related vectors and angles. The gray plane denotes the image plane z1Oz2. The coordinate system Oz1z2z3 coincides with the local coordinate system O′x1′x2′x3′ at the starting time of the synthetic aperture. The dark red and green arrows denote the directions of Ξ and ${\mathit{\dot \Xi }}$, respectively.

Fig. 2 Illustration of angles and vectors used in trajectory error analysis

Note that in Table 1, the superscripts r and t denote the radial and transverse components of a vector, respectively. For example, Δγ0r=Δγ0·$\overset\frown{\left( {\mathit{\boldsymbol{\gamma }}\left( s \right) + \mathit{\boldsymbol{Z}} - \mathit{\boldsymbol{p}}} \right)}$, and Δγ0t is the magnitude of the transverse component of Δγ0. Aesinωs in the last row denotes a 3D sinusoidal function, i.e., Aesinωs=[Ae1sinω1s  Ae2sinω2s  Ae3sinω3s].

Next, the positioning errors for each case in Table 1 are analyzed in detail.

(1) Constant trajectory error Δγ0. The second row of Table 1 gives the positioning errors induced by a constant trajectory error. The radial error Δzr depends on the radial component of Δγ0, and the transverse error Δzt on the transverse component of Δγ0. For a relatively short synthetic aperture, the radar look-direction does not change much; Δzr and Δzt are approximately constant, resulting in only a position shift in the reconstructed image rather than defocusing. However, for a large synthetic aperture, the radar look-direction undergoes a relatively large change, leading to large changes of the angles ψ, ψ′ and θ. Thus, Δzr and Δzt may vary significantly with slow time, which leads to smearing of the reconstructed target even if the trajectory error is constant.

(2) Linear trajectory error ves. The third row of Table 1 shows the positioning errors due to a linear trajectory error ves, where ve denotes the velocity error. For a short synthetic aperture where the radar look-direction and related angles are almost constant, Δzr and Δzt vary linearly with slow time, with the radial velocity error ver and transverse velocity error vet as the respective slopes. As shown by the first term in the third row and third column, Δzt also depends on ver. This ver-dependent term is much larger than the second linear term in Δzt, and also larger than the radial positioning error Δzr, owing to multiplication by the range term |γ(s)+Z-p|.

(3) Quadratic trajectory error aes2/2. Ignoring the changes of the radar look-direction and the related angles over a relatively short imaging time, Δzr and Δzt depend quadratically on the radial and transverse acceleration errors, aer and aet, respectively. The transverse positioning error Δzt is much larger than the radial component Δzr due to the range-dependent term, as listed in the fourth row and third column.

(4) Sinusoidal trajectory error Aesinωs. As shown in the fifth row of Table 1, Δzr and Δzt oscillate with time in the case of sinusoidal trajectory errors. Similar to cases (2) and (3), Δzt is much larger than the radial positioning error and mainly depends on the radial component of the sinusoidal trajectory error. Thus, the target gets smeared in the transverse radar look-direction if the velocity error has a radial component (Aeωcosωs)r.

3 Demonstrations

3.1 Configuration and parameters

In this section, simulations are conducted to demonstrate our analysis. The Satellite Tool Kit (STK) is used to obtain real trajectory data of in-orbit space targets, which is required for raw data generation. The transmitted waveform is a linear frequency modulated signal. The parameters of the orbit, waveform and system configuration are listed in Table 2.

Table 2 System parameters used in simulations

The total passing time of the orbit used in the simulations is 860.43 s. We select a part of the trajectory, corresponding to a CPI of 3.75 s containing 1 500 pulses. The range between the target and the radar at the starting time is 680.85 km.

In the current configuration, the synthetic aperture length is much smaller than the radar-to-target range. Thus, the radar look-direction and transverse look-direction do not change significantly during the coherent integration time.

The image plane is selected as the x1′O′x2′ plane at the starting time of the synthetic aperture. The scene considered in the image reconstruction is 20 m×20 m and is discretized into 300 pixels×300 pixels, with a pixel size of approximately 0.067 m×0.067 m. A point target is assumed in the simulations. The target is located at [0 0 0] in the local coordinate system O′x1′x2′x3′ with unit reflectivity, corresponding to the [150 150]th pixel in the image.

The position errors in the reconstructed images are measured and compared with the analytic ones calculated using Eqs. (15), (16). Sub-aperture processing is used in measuring the position errors. The data is divided into several patches, with or without overlapping. An image is reconstructed from each patch of data; these are referred to as sub-images. The position errors in each sub-image are measured and then interpolated to obtain the positioning errors over the whole aperture.
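The sub-aperture measurement can be sketched as follows (Python; the patch sizes are assumptions consistent with the 1 500-pulse CPI, and the per-patch "measured" peak positions are synthetic stand-ins for real sub-image measurements):

```python
import numpy as np

# Sub-aperture position-error measurement: split the pulses into overlapping
# patches, take one erroneous peak position per sub-image, then interpolate.
# The "measured" values are synthetic stand-ins (a linear drift, as in the
# linear-trajectory-error case); a real run would use sub-image peak locations.
n_pulses, patch, hop = 1500, 300, 150        # 50% overlap (assumed sizes)
starts = np.arange(0, n_pulses - patch + 1, hop)
centers = starts + patch / 2.0               # pulse index at each patch center

prf = 1500 / 3.75                            # 400 Hz, from the CPI parameters
s_centers = centers / prf                    # patch centers in slow time (s)
measured_dz = 0.5 * s_centers                # stand-in: 0.5 m/s linear drift

# Interpolate the per-patch measurements to every pulse of the full aperture
s_all = np.arange(n_pulses) / prf
dz_all = np.interp(s_all, s_centers, measured_dz)
```

Note that np.interp clamps to the endpoint values outside the measured patch centers, which is a simple choice for the aperture edges.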

3.2 Results

Fig. 3 shows the reconstructed image of the point target with correct trajectory information. The target is well-focused at [-0.085 m, -0.085 m], which basically coincides with the true position of the point [0 m, 0 m]. The trails intersecting at the target position indicate the directions of Ξ and ${\mathit{\dot \Xi }}$, i.e. the projections of the radar look-direction and transverse look-direction on the image plane, as indicated by the white arrows in Fig. 3.

Fig. 3 Image of a point target reconstructed using correct trajectory information

3.2.1 Constant trajectory errors

We assumed a constant trajectory error, Δγ0=[2 m, 2 m, 0 m] in the image formation. Fig. 4 shows the reconstructed result using the erroneous trajectory. The yellow circle denotes the true position of the target.

Fig. 4 Reconstructed ISAR image for the case of a constant trajectory error

It can be seen that the target is well reconstructed, albeit at an erroneous position of [-2.01 m, -2.01 m]. The positioning error is a constant position shift, which is consistent with our analysis in Section 2.

Fig. 5 presents the analytic and measured radial and transverse positioning error. It can be seen that the measured and analytic values match very well. Note that the discretization of the image and the imaging resolution determined by the configuration and waveform parameters affect the accuracy of the measured position errors, which leads to a discrepancy between the measured and analytic values.

Fig. 5 The analytic and measured radial and transverse positioning errors in the case of a constant trajectory error

By comparing the radial and transverse components of the positioning errors, we can see that Δzt is smaller than Δzr. This is also indicated in the image shown in Fig. 4, in which the position shift in the radial direction is dominant. The curves in Fig. 5 show that Δzr is nearly constant during the integration time, while Δzt undergoes a larger change with slow time than Δzr. This may be due to the larger change of the angles θ and φ, as shown by the term in the second row and third column. However, since Δzt varies within a small range, there is no visible smearing in the reconstructed image, as shown in Fig. 4.

3.2.2 Linear trajectory error

We considered a linear trajectory error, Δγ(s)=ves, where ve=[0.001 m/s, 0.001 m/s, 0 m/s], in the image formation. Fig. 6 shows the reconstructed image under this linear trajectory error. It can be seen that the target is reconstructed at a position shifted mainly in the transverse look-direction relative to the true target position. This is consistent with our analysis in Section 2: the radial component of the velocity error induces a much larger transverse positioning error than the radial positioning error.

Fig. 6 The reconstructed ISAR image in the case of a linear trajectory error ve= [0.001 m/s, 0.001 m/s, 0 m/s]

Fig. 7 shows the analytic and measured radial and transverse positioning errors. It can be seen that the measured values match the analytic ones well. Both the radial and transverse errors change linearly with time. Δzt is larger than Δzr, as indicated by the red and black lines in Fig. 7. This is consistent with the image shown in Fig. 6 and the theoretical result in Section 2.

Fig. 7 The analytic and measured radial and transverse positioning errors in the case of a linear trajectory error, ve =[0.001 m/s, 0.001 m/s, 0 m/s]

3.2.3 Quadratic trajectory errors

We assume Δγ(s)=$\frac{1}{2}$aes2, where ae=[0.000 2 m/s2, 0.000 2 m/s2, 0 m/s2]. Fig. 8 shows the reconstructed image. Note that in this case the size of the scene is increased to 40 m×40 m to show the complete signature of the target in the reconstructed image.

Fig. 8 The reconstructed image in the case of a quadratic trajectory error, ae =[0.000 2 m/s2, 0.000 2 m/s2, 0 m/s2]

It is seen that the target is defocused with a streaking artifact. As compared to the correct reconstruction in Fig. 3, the smearing occurs mainly in the transverse look-direction.

Fig. 9 presents the analytic and measured radial and transverse positioning errors in the case of a quadratic trajectory error. The measured positioning errors, indicated by the dashed lines, match the analytic ones indicated by the solid lines. The radial positioning error approaches zero, while the transverse positioning error is much larger and undergoes a large change during the integration time. This leads to the smeared target signature shown in Fig. 8. It can also be seen from Fig. 9 that the transverse positioning error changes quadratically with time. This is consistent with our theoretical analysis in Section 2.

Fig. 9 The analytic and measured radial and transverse positioning errors in the case of a quadratic trajectory error, ae=[0.000 2 m/s2, 0.000 2 m/s2, 0 m/s2]

3.2.4 Sinusoidal trajectory errors

We assume a sinusoidal error, Δγ(s)=[0.01sin(πs/16) m, 0.01cos(πs/16) m, 0 m] in the trajectory of the target. Fig. 10 shows the reconstructed image under the assumption of this sinusoidal trajectory error.

Fig. 10 The reconstructed image in the case of a sinusoidal trajectory error Δγ(s)=[0.01sin(πs/16) m, 0.01cos(πs/16) m, 0 m]

From Fig. 10, it is seen that the target is not focused at its true position. The target energy spreads severely in the transverse look-direction. Comparing the target signature with that in Fig. 8, i.e., the image reconstructed in the case of a quadratic trajectory error, we see that the smearing similarly occurs mainly in the transverse look-direction; however, for the sinusoidal trajectory error the smearing is centered around the true position of the target, rather than the one-sided smearing of Fig. 8.

Fig. 11 shows the analytic and measured radial and transverse components of the positioning error. The consistency of the measured and analytic values verifies the correctness of our formulas of the positioning errors.

Fig. 11 The analytic and measured radial and transverse positioning errors in the case of a sinusoidal trajectory error, Δγ(s)=[0.01sin(πs/16) m, 0.01cos(πs/16) m, 0 m]

It can be seen from Fig. 11 that the radial positioning error is very close to zero, while the transverse positioning error is large and varies sinusoidally. This explains the smeared target signature in Fig. 10, which is due to the oscillation of the transverse positioning error around zero.

3.3 Remarks

From the demonstrations above, we see that our analysis provides a qualitative relationship between the smeared target signatures in the reconstructed images and the trajectory errors. The demonstrations mainly consider deterministic trajectory errors, including constant, linear, quadratic and sinusoidal errors. Random trajectory errors are wideband, noise-like errors. For a given data set, the realization of a random trajectory error can be regarded as a combination of a number of sinusoidal components, which leads to a superposition of defocusing effects such as the one shown in Fig. 10.
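The remark on random errors can be illustrated with a short sketch (Python; the amplitudes, frequencies and component count are arbitrary assumptions) that synthesizes one Cartesian component of a band-limited random trajectory error as a superposition of sinusoids:

```python
import numpy as np

# A band-limited, noise-like trajectory error component synthesized as a sum
# of sinusoids; each term contributes a sinusoidal-error defocusing effect.
# Amplitudes, frequencies and phases are arbitrary illustrative assumptions.
rng = np.random.default_rng(0)
s = np.linspace(0.0, 3.75, 1500)             # slow time over the 3.75 s CPI
n_comp = 20
amps = 0.01 * rng.random(n_comp)             # centimetre-level amplitudes (m)
freqs = rng.uniform(0.1, 10.0, n_comp)       # component frequencies (Hz)
phases = rng.uniform(0.0, 2 * np.pi, n_comp)

dgamma_1 = sum(a * np.sin(2 * np.pi * f * s + ph)
               for a, f, ph in zip(amps, freqs, phases))
```

Each sinusoidal component contributes a transverse smearing of the kind analyzed in case (4), so the overall defocusing is a superposition of such effects.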

Our results given by Eqs. (15), (16) indicate that a straightforward inversion from the 2-D positioning errors (assuming common 2-D imaging) to six independent unknowns (three components of the transmitter trajectory error and three of the receiver's) is not solvable. However, in certain scenarios the relationship between the positioning errors and the trajectory errors may simplify; in the monostatic configuration, for example, the trajectory error can be expressed in terms of a radial component and a transverse component, and the inversion may then become solvable. Since the true position of the target is not known a priori, the position errors cannot be determined from measurements of the erroneous positions alone. Instead, we may divide the data into several sub-apertures and measure the change of the erroneous positions across the sub-apertures. The differential position errors in the horizontal and vertical directions can then be used to retrieve the radial and transverse trajectory errors. We are developing autofocusing methods along this line.
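As an illustration of the sub-aperture idea, the differential position measurement could be sketched as below. The function name and the toy linear drift are our own illustrative constructions, not the paper's method:

```python
import numpy as np

def subaperture_position_drift(positions, n_sub):
    """Split per-slow-time measured (erroneous) target positions into
    n_sub sub-apertures and return the differential position between
    consecutive sub-aperture centroids."""
    positions = np.asarray(positions, dtype=float)
    subs = np.array_split(positions, n_sub)
    centroids = np.array([sub.mean(axis=0) for sub in subs])
    return np.diff(centroids, axis=0)   # drift between adjacent sub-apertures

# A linear drift of the measured 2-D position across the aperture shows
# up as a constant differential between consecutive sub-apertures.
pos = np.stack([np.linspace(0.0, 7.0, 8), np.zeros(8)], axis=-1)
drift = subaperture_position_drift(pos, 4)
```

The true position cancels in the differencing, which is what makes the differential measurement usable even though the absolute position error is unobservable.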

4 Conclusions

We presented an approach to analyzing the reconstruction errors in ISAR imaging of space targets caused by errors in the trajectory information. Using microlocal analysis, we derived explicit formulas for the positioning errors in the radar look-direction and the transverse look-direction, and obtained a quantitative relationship between the trajectory errors of the space targets and the positioning errors in the reconstructed images. A constant trajectory error leads to constant position shifts in the reconstructed images for a relatively short imaging time. A linear trajectory error leads to a much larger position shift in the transverse look-direction for a relatively short imaging time. For wide-aperture imaging or a long imaging time, the dependency of the positioning errors on slow time cannot be ignored and may lead to defocusing of the target rather than only a constant position shift. A quadratic trajectory error leads to obvious smearing in the transverse look-direction and a small radial positioning error. A sinusoidal trajectory error leads to smearing of the target in the transverse look-direction centered at the target's true position. Numerical simulation results demonstrated our theoretical analysis. Autofocusing research may benefit from these results; developing an autofocusing method that exploits the obtained quantitative relationship between the trajectory errors and the reconstructed position errors is part of our ongoing work.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61871217) and the Foundation of Graduate Innovation Center in Nanjing University of Aeronautics and Astronautics (No. kfjj20170404), China.

References
[1]
WEHNER D R. High resolution radar[M]. Norwood, MA: Artech House Inc, 1987: 484.
[2]
WALKER J L. Range-Doppler imaging of rotating objects[J]. IEEE Transactions on Aerospace and Electronic Systems, 1980, AES-16(1): 23-52.
[3]
DOERRY A W. Synthetic aperture radar processing with polar formatted subapertures[C]//1994 IEEE Conference on Signals, Systems and Computers. USA: IEEE, 1994: 1210-1215. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=471651
[4]
WANG F, EIBERT T F, JIN Y Q. Simulation of ISAR imaging for a space target and reconstruction under sparse sampling via compressed sensing[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(6): 3432-3441. DOI:10.1109/TGRS.2014.2376940
[5]
BAI X R, ZHOU F, XING M D, et al. Scaling the 3-D image of spinning space debris via bistatic inverse synthetic aperture radar[J]. IEEE Geoscience & Remote Sensing Letters, 2010, 7(3): 430-434.
[6]
DU Y, JIANG Y, ZHOU W. An accurate two-step ISAR cross-range scaling method for earth-orbit target[J]. IEEE Geoscience & Remote Sensing Letters, 2017, 14(11): 1893-1897.
[7]
XU Z, ZHANG L, XING M, et al. Interesting components detection for space satellites from inverse synthetic aperture radar image via feature probabilistic estimation[J]. Image Processing IET, 2014, 9(6): 506-515.
[8]
ZHU J, ZHU S, LIAO G. High-resolution radar imaging of space debris based on sparse representation[J]. IEEE Geoscience & Remote Sensing Letters, 2015, 12(10): 2090-2094.
[9]
NING Y, BAI X R, ZHOU F, et al. Method for inverse synthetic aperture radar imaging of space debris using improved genetic algorithm[J]. IET Radar Sonar & Navigation, 2017, 11(5): 812-821.
[10]
THOMPSON P, WAHL D E, EICHEL P H, et al. Spotlight-mode synthetic aperture radar: A signal processing approach[M]. Boston, MA: Kluwer Academic Publishers, 1996: 330-332.
[11]
JAKOWATZ C V. Spotlight-mode synthetic aperture radar: A signal processing approach[M]. Boston, MA: Kluwer Academic Publishers, 1996: 330-332.
[12]
YARMAN C E, YAZICI B, CHENEY M. Bistatic synthetic aperture radar imaging for arbitrary flight trajectories[J]. IEEE Transactions on Image Processing, 2008, 17(1): 84-93.
[13]
WANG S, FAN C, HUANG X, et al. Bistatic ISAR imaging based on BP algorithm[C]//Progress in Electromagnetic Research Symposium. Shanghai, China: IEEE, 2016: 2858-2862. http://ieeexplore.ieee.org/abstract/document/7735141/
[14]
ZHAO H, WANG J, XIONG D, et al. The modified back projection algorithm for bistatic ISAR imaging of space objects[C]//IEEE International Conference on Signal Processing, Communications and Computing. Chengdu, China: IEEE, 2016: 1-5. http://ieeexplore.ieee.org/document/7753650/
[15]
XIAO D, WANG L, WANG X, et al. Motion error analysis for ISAR imaging of space targets[C]//CIE International Conference on Radar. Guangzhou, China: IEEE, 2017: 1-4. http://ieeexplore.ieee.org/document/8059501/
[16]
YAZICI B, CHENEY M, YARMAN C E. Synthetic-aperture inversion in the presence of noise and clutter[J]. Inverse Problems, 2006, 22(5): 1705-1729. DOI:10.1088/0266-5611/22/5/011
[17]
YANIC H C, LI Z M, YAZICI B. Computationally efficient FBP-type direct segmentation of synthetic aperture radar images[J]. Proceedings of SPIE -The International Society for Optical Engineering, 2011, 8051(5): 361-372.
[18]
YAZICI B, KRISHNAN V. Microlocal analysis in imaging: A tutorial[EB/OL]. https://www.ecse.rpi.edu/~yazici/ICASSPTutorial/microlocal_ICASSP10.html, 2010.
[19]
TRÈVES F. Introduction to pseudodifferential and Fourier integral operators[M]. Berlin: Springer, 1980.
[20]
HÖRMANDER L. Fourier integral operators. Ⅰ[J]. Acta Mathematica, 1971, 127(1): 79.
[21]
HÖRMANDER L. The analysis of linear partial differential operators Ⅰ[J]. American Mathematical Monthly, 2003, 92(10): 745.
[22]
KRISHNAN V P, QUINTO E T. Microlocal aspects of common offset synthetic aperture radar imaging[J]. Inverse Problems & Imaging, 2013, 5(3): 659-674.
[23]
WANG L, YAZICI B, CAGRI YANIK H. Antenna motion errors in bistatic SAR imagery[J]. Inverse Problems, 2015, 31(6): 1-32.
[24]
NOLAN C J, CHENEY M. Microlocal analysis of synthetic aperture radar imaging[J]. Journal of Fourier Analysis & Applications, 2004, 10(2): 133-148.
[25]
GRIGIS A, SJÖSTRAND J. Microlocal analysis for differential operators[J]. London Mathematical Society Lecture Note Series, 1994, 18(4): 431-432.
[26]
XIAO D L. Study on ISAR imaging of space targets using BP technology[D].Nanjing: Nanjing University of Aeronautics and Astronautics, 2017.