1 Introduction

Polarization is an essential property of light. Measuring the polarization state is important in many applications, such as remote sensing [1,2,3], biomedicine [4, 5], polarized skylight navigation [6], fluorescence polarization immunoassay [7], ellipsometry [8,9,10], seismic acquisition [11], and so on.

The state-of-the-art methods for polarization measurement fall into four typical classes: interferometric polarimeters [12, 13], temporally modulated polarimeters [14, 15], division-of-amplitude polarimeters [16, 17], and spatially modulated polarimeters [6, 10, 18,19,20,21,22,23]. The first three families of methods need to carry out a series of measurements with different orthogonal states of polarization. They usually suffer from heavy computation, poor stability, and complex system structure [9]. In contrast, spatially modulated polarimeters are simpler and more efficient: no optical component needs to be rotated, and the polarization direction can be obtained from a single measurement [24]. However, the classical spatially modulated polarimetry methods are highly dependent on spatial modulation devices, which makes them difficult to deploy in realistic scenarios.

To address this issue of the classical spatially modulated polarimetry methods, vectorial-optical-field-based spatially polarization modulated polarimetry methods have been proposed recently [6, 18, 21,22,23]. In this strategy, the polarization information is encoded in the modulated intensity pattern of the input light. Thus, once the pattern is captured by a camera, polarization measurement is transformed into the problem of analyzing the irradiance image. In [22, 23], where a zero-order vortex quarter-wave retarder was used as a space-variant birefringence device to spatially modulate all polarization components, a normalized least-squares method and a hybrid gradient descent algorithm were proposed, respectively, to calculate the polarization state from the irradiance images. Researchers have found that, when the input light is analyzed by an azimuthally (or a radially) spatial modulator, the irradiance image has an hourglass-shaped gray distribution; in other words, the darkest line of the irradiance image is parallel (or perpendicular) to the polarization direction [21]. Consequently, the polarization direction of the input light can be obtained by extracting the darkest line from the image. In [6, 10], the global Radon transform (GRT) was adopted. Gao and Lei [18] also chose GRT to obtain the intensity modulation curve from which the four Stokes parameters of the input light can be measured. Lei and Liu [21] compared the accuracy and runtime of different image processing algorithms, such as interesting area detection (IAD), local correlation (LC), and GRT. They found that the precision of IAD was low and that the Radon transform was quite sensitive to image noise. LC was more stable and more accurate, but, like IAD and the Radon transform, it was time-consuming.

To measure the polarization direction more robustly and quickly, a novel method is presented in this paper. Motivated by GRT and LC, the method contains three stages: coarse estimation, local Radon transform (LRT), and error compensation (EC). First, the coarse direction is estimated based on threshold segmentation. Then, LRT is performed over a local angle range centered on the coarsely estimated direction. Finally, the accurate darkest direction (parallel or perpendicular to the polarization direction) is obtained by EC, which establishes a quantitative link between the error of the coarse estimation and the correlation between LRTs. The advantages of our algorithm are fourfold.

Firstly, the proposed method is robust to noise owing to the gray integral operation in LRT.

Secondly, the utilization of EC makes the proposed method highly precise.

Thirdly, unlike GRT with its small angle interval [6, 10, 18], LRT only needs to be computed over a local angle range with a large angle interval. It is therefore more computationally efficient.

Finally, since most of the processing can be done by looking up tables generated offline, our algorithm is fast enough for real-time tasks.

The outline of this article is as follows. In Sect. 2, the coarse estimation, LRT, and EC are introduced, followed by a flowchart to summarize our method. Section 3 shows several experimental results. Finally, some conclusions are drawn in Sect. 4.

2 Method

It has been verified that, when the input light is analyzed by an azimuthally (or a radially) spatial modulator, the hourglass-shaped intensity pattern of the modulated light satisfies Malus’s law [21]. In other words, the gray distribution of the irradiance image, as shown in Fig. 1, is directly proportional to the square of the sine of the angle between the azimuthal direction and the darkest direction. The darkest direction, which is parallel (or perpendicular) to the polarization direction, has the minimum radial integral value of the image. To determine the darkest direction accurately and quickly, our method includes three stages: coarse estimation, LRT, and EC. They are introduced as follows.

Fig. 1

The illustration of LRT

2.1 Coarse estimation

In our algorithm, the darkest direction is first coarsely estimated based on threshold segmentation. To reduce the computational complexity, threshold segmentation is applied to the pixels on circles with certain radii rather than to all pixels in the image. Given a set of radii (e.g., \(r_{1} ,r_{2} ,r_{3} , \ldots ,r_{N}\)), the pixels on the circles with these radii are collected. Then, the pixels are divided into two parts (i.e., bright area and dark area) according to a predefined threshold \(T\). The average azimuthal angle of the pixels in the dark area, denoted by \(\theta_{{\text{c}}}\), is taken as the coarse darkest direction, i.e.,

$$\theta_{{\text{c}}} = {\text{mean}}\{ \theta :I(r,\theta ) < T\} ,\quad r = r_{1} ,r_{2} , \ldots ,r_{N} ,\;\theta \in \left[ {0^{ \circ } ,180^{ \circ } } \right),$$
(1)

where \(I(r,\theta )\) is the gray value of the pixel with the polar coordinate \((r,\theta )\).
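The coarse estimation of Eq. (1) can be sketched as below. The function and parameter names are illustrative, and a doubled-angle circular mean is used (an implementation choice not stated above) so that the average azimuth is well defined across the \(0^{ \circ }/180^{ \circ }\) wrap-around of the hourglass pattern.

```python
import numpy as np

def coarse_estimate(img, center, radii, T):
    """Coarse darkest-direction estimate via threshold segmentation (Eq. (1)).

    Pixels are sampled on circles of the given radii; those darker than T form
    the dark area, and their mean azimuth (in degrees) is returned. A doubled-
    angle circular mean handles the 0/180-degree wrap of the hourglass pattern.
    """
    cy, cx = center
    thetas = np.deg2rad(np.arange(0.0, 180.0, 0.5))   # azimuth sampling step (free choice)
    dark = []
    for r in radii:
        xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(cy - r * np.sin(thetas)).astype(int), 0, img.shape[0] - 1)
        dark.append(thetas[img[ys, xs] < T])          # dark-area pixels on this circle
    dark = np.concatenate(dark)
    # circular mean with period 180 deg: average the unit vectors of 2*theta
    mean2 = np.arctan2(np.sin(2 * dark).mean(), np.cos(2 * dark).mean())
    return np.rad2deg(mean2 / 2) % 180.0
```

On a noise-free hourglass image generated by Malus's law, the estimate lands close to the true darkest direction because the thresholded dark lobes are symmetric about it.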

2.2 Local Radon transform

In this stage, the Radon transform [25] is adopted to compute the integral of an image along specified directions. Suppose that \(f\) is a 2-D function; the integral of \(f\) along the radial line \(l(\theta_{i} ) = \left\{ {(x,y):x\sin \theta_{i} - y\cos \theta_{i} = 0} \right\}\) is given by

$$g(\theta_{i} ) = \int\limits_{ - \infty }^{\infty } {\int\limits_{ - \infty }^{\infty } {f(x,y)\delta (x\sin \theta_{i} - y\cos \theta_{i} ){\text{d}}x{\text{d}}y} } .$$
(2)

For digital images, Eq. (2) can be discretized as

$$g(\theta_{i} ) = \sum\limits_{x} {\sum\limits_{y} {I(x,y)W(x\sin \theta_{i} - y\cos \theta_{i} )} } .$$
(3)

In Eq. (3), \(I(x,y)\) is the gray value of the pixel with the rectangular coordinate \((x,y)\), and \(W(\xi )\), the weight of the pixel \((x,y)\) for integration along \(l(\theta_{i} )\), is given by

$$W(\xi ) = \frac{d - \left| \xi \right|}{d}.$$
(4)

\(d\) is the distance threshold that determines whether the pixel \((x,y)\) lies on the line \(l(\theta_{i} )\); pixels with \(\left| \xi \right| \ge d\) do not contribute to the integral.

Obviously, GRT needs to compute the integral of the image along radial lines oriented from \(0^{ \circ }\) to \(180^{ \circ }\). Moreover, to obtain an accurate result, the angle interval adopted by GRT should be as small as possible. Different from GRT, LRT only needs to compute the integral of the image over a local angle range centered on the coarse darkest direction (i.e., \(\theta_{{\text{c}}}\)). For example, if the angle range and angle interval for LRT are \(\pm \theta_{T}\) and \(\theta_{s}\), the LRT is obtained by arranging the radial integral values in azimuth order, i.e., \(G(\theta_{{\text{c}}} ) = \left\{ {g(\theta_{i} )} \right\}\;\left( {\theta_{i} = \theta_{{\text{c}}} - \theta_{T} + (i - 1)\theta_{s} ,i = 1,2, \ldots ,2\theta_{T} /\theta_{s} + 1} \right)\).
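Equations (3) and (4) over the local angle range can be sketched as follows. This is a brute-force sketch that evaluates the weight for every pixel (the offline lookup tables of Sect. 2.4 avoid this cost); the zero-clipping of the weight for \(\left| \xi \right| \ge d\) is an assumed convention, and all names are illustrative.

```python
import numpy as np

def local_radon(img, center, theta_c, theta_T=60.0, theta_s=0.6, d=1.0):
    """Local Radon transform: radial-line integrals of Eqs. (3)-(4) over the
    range [theta_c - theta_T, theta_c + theta_T] in steps of theta_s (degrees).
    Pixels farther than d from a line get zero weight."""
    cy, cx = center
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    x = (xx - cx).astype(float)
    y = (cy - yy).astype(float)                       # image rows increase downward
    n = int(round(2 * theta_T / theta_s)) + 1
    angles = theta_c - theta_T + theta_s * np.arange(n)
    g = np.empty(n)
    for i, th in enumerate(np.deg2rad(angles)):
        xi = x * np.sin(th) - y * np.cos(th)          # signed distance to l(theta_i)
        w = np.clip((d - np.abs(xi)) / d, 0.0, None)  # Eq. (4) weight, clipped at 0
        g[i] = (img * w).sum()                        # Eq. (3)
    return angles, g
```

On a noise-free hourglass image, the minimum of the returned integrals falls at the darkest direction.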

As illustrated in Fig. 1, the actual darkest direction of the irradiance image is \(25^{ \circ }\). Since the image is disturbed by Gaussian white noise (\(\mu = \sigma^{2} = 0.01\)), the darkest direction calculated by coarse estimation is \(25.6^{ \circ }\) (the white solid line in Fig. 1). The LRT is composed of the normalized integral values of the image along the radial lines (the white dotted lines in Fig. 1) oriented counterclockwise from \(145.6^{ \circ }\) to \(85.6^{ \circ }\). Here, \(\theta_{T}\) is set to \(60^{ \circ }\).

2.3 Error compensation

Theoretically, the darkest direction has the minimum value in the LRT. Unfortunately, the radial integral values of the image are always disturbed by noise. For instance, the LRT of the image in Fig. 1 is displayed in Fig. 2. The actual darkest direction of the image is \(25^{ \circ }\), yet the direction with the minimum value in the LRT is \(25.6^{ \circ }\). Apparently, under noise, the direction with the minimum value is not the actual darkest direction. To address this issue, EC is developed to estimate the error of the coarse estimation.

Fig. 2

The normalized LRT of the image in Fig. 1

Assume we have two modulated irradiance images (\({\text{Im}}_{1}\) and \({\text{Im}}_{2}\)) with hourglass-shaped gray distributions, whose darkest directions are \(\theta_{d1}\) and \(\theta_{d2}\), respectively. Then \(G_{1} (\theta_{d1} - \theta_{a} )\) and \(G_{2} (\theta_{d2} - \theta_{a} )\) have the best correlation. That is,

$${\text{corr}}(G_{1} (\theta_{d1} - \theta_{a} ),G_{2} (\theta_{d2} - \theta_{a} )) = \mathop {\max }\limits_{\theta \in [0,\pi )} [{\text{corr}}(G_{1} (\theta ),G_{2} (\theta_{d2} - \theta_{a} ))].$$
(5)

\(G_{1} (\theta_{d1} - \theta_{a} )\) and \(G_{2} (\theta_{d2} - \theta_{a} )\) denote the LRTs of \({\text{Im}}_{1}\) and \({\text{Im}}_{2}\) whose local angle ranges for integration are centered at \(\theta_{d1} - \theta_{a}\) and \(\theta_{d2} - \theta_{a}\), respectively, i.e., \(G_{1} (\theta_{d1} - \theta_{a} ) = \{ g_{1} (\theta_{i} )\} \;(\theta_{i} = \theta_{d1} - \theta_{a} - \theta_{T} + (i - 1)\theta_{s} )\) and \(G_{2} (\theta ) = \{ g_{2} (\theta_{i} )\} \;(\theta_{i} = \theta - \theta_{T} + (i - 1)\theta_{s} )\). Similarly, \(G_{1} (\theta )\) is the LRT of \({\text{Im}}_{1}\) whose local integral angle range is centered at \(\theta\). \(\theta_{a}\) is an arbitrary angle.

Let the coarsely estimated darkest direction of \({\text{Im}}_{2}\) be \(\theta_{c}\), and let the error of the coarse estimation be \(\theta_{e}\). From Eq. (5), we can infer that the LRT of \({\text{Im}}_{1}\) that has the best correlation with \(G_{2} (\theta_{c} )\) is \(G_{1} (\theta_{d1} - \theta_{e} )\). This inference can be represented as

$${\text{corr}}(G_{1} (\theta_{d1} - \theta_{e} ),G_{2} (\theta_{c} )) = \mathop {\max }\limits_{\theta \in [0,\pi )} [{\text{corr}}(G_{1} (\theta ),G_{2} (\theta_{c} ))].$$
(6)

In Eq. (6), the range for \(\theta\) is \([0,\pi )\). In fact, the optimal \(\theta\) fluctuates around \(\theta_{d1}\) because the error of the coarse estimation is small. To reduce computation, the range for \(\theta\) can be narrowed to \(\left[ {\theta_{d1} - \theta_{M} ,\theta_{d1} + \theta_{M} } \right]\). Substituting \(\theta_{e} = \theta_{d2} - \theta_{c}\) into Eq. (6), we have

$$\theta_{e} = \theta_{d1} - \mathop {\arg {\kern 1pt} \max }\limits_{{\theta \in [\theta_{d1} - \theta_{M} ,\theta_{d1} + \theta_{M} ]}} [{\text{cor}}r(G_{1} (\theta ),G_{2} (\theta_{c} ))].$$
(7)

Equation (7) establishes the link between the error of the coarse estimation and the correlation between LRTs. Based on Eq. (7), the actual darkest direction of \({\text{Im}}_{2}\) can be captured by

$$\theta_{d2} = \theta_{c} + \theta_{d1} - \mathop {\arg {\kern 1pt} \max }\limits_{{\theta \in [\theta_{d1} - \theta_{M} ,\theta_{d1} + \theta_{M} ]}} [{\text{corr}}(G_{1} (\theta ),G_{2} (\theta_{c} ))].$$
(8)


In practice, according to Malus’s law, \({\text{Im}}_{1}\) can be generated and treated as the model image. Apparently, the LRTs of \({\text{Im}}_{1}\) also satisfy Malus’s law. That is, the integral of the image along the direction \(\theta_{i}\) is

$$g(\theta_{i} ) = A\cos^{2} \left( {\theta_{i} - \theta_{d1} + \frac{\pi }{2}} \right).$$
(9)

\(A\) is a coefficient determined by the image brightness, and \(\theta_{d1}\) is the darkest direction of the model image. Based on Eq. (9), a set of LRTs of \({\text{Im}}_{1}\) (i.e., \(G_{1} (\theta_{i} )\), \(\theta_{i} = \theta_{d1} - \theta_{M} + (i - 1)\theta_{r}\), where \(\theta_{r}\) is the step of the center angle) can be computed offline. For an input image \({\text{Im}}_{2}\), substituting \(G_{2} (\theta_{c} )\) into Eq. (8), the corrected darkest direction can be captured.
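The EC stage described above can be sketched as follows, assuming the model image's darkest direction is \(\theta_{d1} = 0^{ \circ }\) and \(A = 1\) (so Eq. (9) reduces to \(\sin^{2}\)); the function names and the use of Pearson correlation via `np.corrcoef` are illustrative choices.

```python
import numpy as np

def model_lrt(center_deg, theta_T=60.0, theta_s=0.6):
    """LRT of the model image under Malus's law (Eq. (9)) with darkest
    direction 0 deg and A = 1; note cos^2(x + 90 deg) = sin^2(x)."""
    n = int(round(2 * theta_T / theta_s)) + 1
    ang = center_deg - theta_T + theta_s * np.arange(n)
    return np.sin(np.deg2rad(ang)) ** 2

def compensate(theta_c, g_input, theta_M=0.3, theta_r=0.01):
    """EC per Eqs. (7)-(8): match the input LRT (centered at the coarse
    estimate theta_c) against model LRTs whose centers sweep
    [-theta_M, theta_M], and return the corrected darkest direction."""
    centers = np.arange(-theta_M, theta_M + theta_r / 2, theta_r)
    corrs = [np.corrcoef(model_lrt(c), g_input)[0, 1] for c in centers]
    theta_e = 0.0 - centers[int(np.argmax(corrs))]    # Eq. (7) with theta_d1 = 0
    return (theta_c + theta_e) % 180.0                # Eq. (8)
```

For instance, if the true darkest direction is \(25^{ \circ }\) but the coarse estimate is \(25.12^{ \circ }\), the input LRT equals the model LRT centered at \(0.12^{ \circ }\), EC recovers \(\theta_{e} = -0.12^{ \circ }\), and the corrected direction is \(25^{ \circ }\).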

Taking the image in Fig. 1 as an example, the working mechanism is illustrated in Fig. 3. In this experiment, the darkest direction of the model image is \(0^{ \circ }\). According to Eq. (9), a set of LRTs of the model image is generated while the center angle changes from \(- 5^{ \circ } (175^{ \circ } )\) to \(5^{ \circ }\) in \(0.01^{ \circ }\) increments. For the input image shown in Fig. 1, the darkest direction predicted by coarse estimation is \(\theta_{{\text{c}}} = 25.6^{ \circ }\). In the EC stage, we find that \(G_{1} (0.6^{ \circ } )\) has the best correlation with \(G_{2} (\theta_{c} )\), where \(G_{1} ( \cdot )\) and \(G_{2} ( \cdot )\) denote the LRTs of the model image and the input image, respectively. Finally, according to Eq. (8), the estimated darkest direction of the input image is corrected to \(25^{ \circ }\).

Fig. 3

Illustration of EC

2.4 Implementation details

In practice, once the parameters of the algorithm are given, some intermediate data, including the coordinates of the pixels used for coarse estimation and the coordinates and weights of the pixels used for LRT computation, remain unchanged across input images. Hence, these data can be computed ahead of time and saved in tables, named the circle pixel coordinate table (CPCT), the integral pixel coordinate table (IPCT), and the integral pixel weight table (IPWT), respectively. It should be noted that, because the darkest directions of input images differ, the coordinates and weights of the pixels for gray integration must be saved for azimuth angles from \(0^{ \circ }\) to \(180^{ \circ }\). In addition, the LRTs of the model images with different center angles, which are independent of the input image, can also be computed offline using Eq. (9) and saved.
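The offline construction of the IPCT and IPWT can be sketched as below. The ring limits follow the setup in Sect. 3; the table layout (two dictionaries keyed by angle index) and all names are illustrative choices, not the authors' exact data structures.

```python
import numpy as np

def build_tables(shape, center, d=1.0, theta_s=0.6, ring=(300, 500)):
    """Offline IPCT/IPWT sketch: for each azimuth from 0 to 180 deg (step
    theta_s), store the coordinates and Eq. (4) weights of the ring pixels
    lying within distance d of that radial line."""
    cy, cx = center
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    x = (xx - cx).astype(float)
    y = (cy - yy).astype(float)
    r = np.hypot(x, y)
    in_ring = (r >= ring[0]) & (r <= ring[1])         # usable annulus
    ipct, ipwt = {}, {}
    for k in range(int(round(180.0 / theta_s))):
        th = np.deg2rad(k * theta_s)
        xi = x * np.sin(th) - y * np.cos(th)          # distance to the radial line
        sel = in_ring & (np.abs(xi) < d)
        ipct[k] = (yy[sel], xx[sel])                  # pixel coordinates
        ipwt[k] = (d - np.abs(xi[sel])) / d           # Eq. (4) weights
    return ipct, ipwt
```

At run time, the integral along line `k` is simply `(img[ipct[k]] * ipwt[k]).sum()`, so no geometry is recomputed per input image.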

The flow chart and pseudo code of our method are shown in Fig. 4.

Fig. 4

a Flow chart for our proposed method, b Pseudo code for our algorithm

3 Results and discussion

In this section, the performance of the algorithms is tested on synthetic data and real data. We first evaluate the sensitivity of our algorithm (LRT + EC) to the angle range (i.e., \(\theta_{T}\)), the angle interval (i.e., \(\theta_{s}\)), and image noise; then, the performance of LRT + EC is compared with two state-of-the-art methods, GRT [18] and IAD + LC [21]. The CPU of our computer is an Intel (R) Core (TM) i7-10710U @ 1.10 GHz, and the RAM is 1.61 GHz.

In practice, the marginal and central parts of the image are not well modulated owing to the vignetting effect of the optical system and the imperfection of the optical components [18], so the algorithms are performed on the ring area whose inner and outer radii are 300 and 500 pixels, respectively. In the following experiments, the radii of the circles used for coarse estimation (denoted by \(r\) in Eq. (1)) change from 300 to 450 pixels in steps of 5 pixels, and the gray threshold for segmentation is 0.4. The image in which the darkest direction of the hourglass-shaped gray distribution pattern is \(0^{ \circ }\) is taken as the model image. A set of LRTs of the model image is generated while the center angle changes from \(- 0.3^{ \circ } (179.7^{ \circ } )\) to \(0.3^{ \circ }\) in steps of \(0.01^{ \circ }\).

3.1 Experiments on synthetic data

In this section, experiments are performed on synthetic data. The images are all generated according to Malus’s law [26]. We first evaluate the sensitivity of our algorithm (LRT + EC) to the key parameters and to noise; then, the computational complexity is tested. All experimental results in this section are averaged over 500 Monte Carlo simulations.
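A synthetic image of the kind used in these experiments can be generated as follows; the gray level follows Malus's law (cf. Eq. (9)), and Gaussian white noise with the stated mean and variance is added. The function and parameter names are illustrative.

```python
import numpy as np

def synth_image(size=1024, theta_d=25.0, mu=0.01, var=0.005, seed=0):
    """Synthetic hourglass image: gray = sin^2(azimuth - theta_d), so theta_d
    is the darkest direction; Gaussian white noise (mean mu, variance var)
    is then added and the result is clipped to [0, 1]."""
    c = (size - 1) / 2.0
    yy, xx = np.mgrid[0:size, 0:size]
    az = np.arctan2(c - yy, xx - c)                   # azimuth of each pixel
    img = np.sin(az - np.deg2rad(theta_d)) ** 2
    img += np.random.default_rng(seed).normal(mu, np.sqrt(var), img.shape)
    return np.clip(img, 0.0, 1.0)
```

A Monte Carlo run then simply generates many such images with fresh seeds and measures the absolute error of each algorithm's estimate against `theta_d`.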

3.1.1 Performance with respect to the angle range and angle interval of LRT

Generally, the larger the angle range (\(\theta_{T}\)) and the smaller the angle interval (\(\theta_{s}\)), the higher the accuracy of the algorithm; however, its complexity will also increase significantly. To find parameters that let the algorithm act well in both accuracy and runtime, the sensitivity of the algorithm to \(\theta_{T}\) and \(\theta_{s}\) is tested. In this set of experiments, Gaussian white noise (\(\mu = 0.01\) and \(\sigma^{2} = 0.005\)) is added to the synthetic images. Figure 5 compares the mean absolute error (MAE) of the results when \(\theta_{s}\) is \(0.6^{ \circ }\) and \(\theta_{T}\) changes from \(30^{ \circ }\) to \(85^{ \circ }\). It reveals that the MAE of the algorithm decreases rapidly with the increase of \(\theta_{T}\) and then flattens when \(\theta_{T}\) is larger than \(60^{ \circ }\). Hence, to keep the computational complexity low, \(\theta_{T}\) is suggested to be \(60^{ \circ }\). Moreover, the MAE of our algorithm as \(\theta_{s}\) changes from \(0.1^{ \circ }\) to \(2^{ \circ }\) is summarized in Fig. 6, and the results imply that the smaller \(\theta_{s}\) is, the better our algorithm performs.

Fig. 5

Performance of LRT + EC with respect to \(\theta_{T}\)

Fig. 6

Performance of LRT + EC with respect to \(\theta_{s}\)

As described above, in the following experiments, \(\theta_{T}\) and \(\theta_{s}\) are set to \(60^{ \circ }\) and \(0.6^{ \circ }\), respectively.

3.1.2 Performance with respect to image noise

Apparently, it is easy to extract the darkest direction precisely from noise-free images, but it is more difficult to achieve the same precision on real images, which are usually disturbed by noise. As the noise, including shot noise, thermal noise, and dark current noise, generally obeys a Gaussian distribution [10], to evaluate the effect of noise on our algorithm, Gaussian white noise is added with \(\mu = 0.01\) and \(\sigma^{2}\) changing from 0 to 0.01. The MAEs of the different algorithms are compared in Fig. 7. It can be observed that the performance of all algorithms decreases as \(\sigma^{2}\) increases, and that LRT + EC and IAD + LC outperform GRT. For a fair comparison, the gray threshold for ROI extraction in IAD + LC is also set to 0.4 in this set of experiments.

Fig. 7

Performance of algorithms with respect to noise

Furthermore, comparing the performance of coarse estimation (the green line in Fig. 7) with that of LRT + EC, we note that the MAE of LRT + EC is much smaller. To test the effect of integration on the performance of our algorithm, in EC, we replace the LRT with the gray values of the pixels on a single circle (\(r = 400\)). The result of this variant (the rose line in Fig. 7) shows that the accuracy is greatly improved by gray integration.

3.1.3 Processing time

To make an overall comparison, it is necessary to analyze the processing time of each algorithm. Table 1 indicates that LRT + EC has the shortest running time.

Table 1 The running time of algorithms

3.2 Experiments on real data

To verify the accuracy of the algorithms on real data, 20 modulated intensity images are captured continuously in the same state. One of these images is shown in Fig. 8. The 20 images are analyzed by the three algorithms, and the calculated results are illustrated in Fig. 9. Apparently, LRT + EC is the most robust. This conclusion is also supported by the root mean square errors (RMSE) of the results (given in Table 2).

Fig. 8

One of real images used for experiments

Fig. 9

Measured results of 20 images captured in the same state by the three algorithms

Table 2 RMSE of algorithms

4 Conclusions

In some spatially polarization modulated polarimetry schemes, the polarization direction of the input light can be obtained by image processing algorithms. In this paper, an efficient image processing algorithm, named LRT + EC, is proposed to extract the polarization direction from the irradiance image of the modulated input light. Unlike GRT, which performs the Radon transform over the global angle range, LRT is obtained by integrating the image along radial lines oriented within a local angle range. In addition, LRT and EC can be completed by looking up tables generated offline. Therefore, the time consumption of our algorithm is reduced to less than 0.01 s, which meets real-time requirements well. Moreover, owing to EC, which establishes the link between the error of the coarse estimation and the correlation between LRTs, the accuracy and robustness of the algorithm are greatly improved. The experimental results on synthetic and real data verify that our proposed algorithm outperforms the state-of-the-art methods, including GRT and IAD + LC.