1 Introduction

Recently, demand for a digital video format beyond full high definition (HD) has led to the development of ultra HD, with resolutions of 4K and above, a wide color gamut (WCG), and high dynamic range (HDR). In particular, HDR processing is attracting interest in industries associated with image displays, and compression standards for HDR images, such as JPEG XT [1], have been defined. However, using more data bits alone is not enough to represent the real scene. Although ultra HD supports a higher peak luminance, an absolute luminance difference remains between the actual and display environments, which produces a disparity in the adaptation state of the human visual system (HVS). Because of this luminance difference, a viewer can perceive differences between the displayed and real scenes.

HDR image reproduction consists of HDR image (or radiance map) construction and tone mapping. First, an HDR image is constructed from multi-exposed low-dynamic-range (LDR) images using a camera response function [2,3,4]. Then, a single LDR image retaining as much of the luminance information as possible, an HDR-like LDR image, is produced from the HDR image using tone-mapping operators (TMOs). Because an HDR image cannot be displayed on general display devices that do not support an HDR format, it must be converted to an LDR image. TMOs are categorized as global or local operators. Global operators use a spatially invariant function, so their computation is fast [5,6,7]; local operators employ a spatially variant function that depends on adjacent pixel levels [8,9,10,11,12,13]. HDR imaging produces a resulting image that captures objects in both bright and dark regions; therefore, to improve object identification in a variety of scenes, recent HDR imaging applications adopt local tone-mapping operators. These tone-mapping methods for HDR images address the luminance difference problem. In general, they increase the intensity of dark areas whose luminance is too low to be perceived and reduce the intensity of bright areas to avoid saturation. Reinhard et al., who adapted photographic techniques to develop their tone-mapping algorithm, compressed the scaled luminance with approximated key values using simple sigmoidal transfer curves [6]. Sigmoidal transfer curves are the representative operators in tone mapping because the electrophysiological model for the response of the rods and cones in the HVS is described by such curves [14,15,16]. However, many sigmoidal transfer curves are defined by empirical parameters, which makes it difficult to determine the parameter values that reproduce the resulting images at optimal levels. In addition, for the brightness of the displayed image to be perceived as identical or similar to the brightness of the real scene, psychophysical manipulation of the parameters is needed.

Compression of the dynamic range gives rise to artifacts and leads to image degradation. As noted above, tone-mapping methods are either global or local; in local tone mapping, which compresses the dynamic range strongly, degradation such as the loss or exaggeration of detail is the most common problem. To compensate for the loss of detail, Durand and Dorsey used an accelerated bilateral filter for the display of HDR images [7]. The HDR image is decomposed into base and detail layers: the range of the base layer is compressed by a scale factor in the log domain, while the magnitude of the detail layer is left unchanged, so the detail is not lost. This decomposition approach preserves the details [17, 18]. However, the level gap between the compressed base layer and the preserved detail layer results in the halo artifact, which appears as exaggerated detail near edges.
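As an illustration, the base–detail decomposition described above can be sketched in a few lines. This is only a minimal sketch, assuming an OpenCV bilateral filter as the edge-preserving smoother and illustrative filter and compression settings, not the exact pipeline of [7].

```python
import numpy as np
import cv2  # OpenCV; any edge-preserving smoother could be substituted


def compress_with_decomposition(luminance, compression=0.4):
    """Durand-Dorsey-style sketch: compress the base layer in the log
    domain while leaving the detail layer untouched."""
    log_lum = np.log10(np.maximum(luminance, 1e-6)).astype(np.float32)
    # Edge-preserving smoothing approximates the base layer.
    base = cv2.bilateralFilter(log_lum, 9, 0.4, 16)
    detail = log_lum - base                    # detail layer is preserved as-is
    base_compressed = compression * base       # scale factor applied in the log domain
    return 10.0 ** (base_compressed + detail)  # recombine and leave the log domain
```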

In this paper, we propose lightness-preserved tone mapping and halo-artifact reduction using anti-halo image composition based on the visual achromatic response. A “lightness-preserved” mapping reproduces, in the displayed image, the lightness of the corresponding real scene. In this approach, we first clarify the difference between the color appearance attributes of a displayed image and a real scene using CIECAM02 [19]; based on the concept of lightness preservation, a parameter that was previously set empirically is refined. Second, we show that the level difference between tone-compressed images produced with and without detail-preserving decomposition can serve as a criterion for detecting the halo artifact, and the two images are fused according to this level difference in order to reduce the artifact. The proposed method thus provides a tone-mapping function that reflects indoor viewing adaptation, that is, the degree of visual adaptation determined by the surround or display brightness. In addition, the proposed halo compensation is applied on top of the detail-decomposition technique to compensate for the relatively large halo phenomenon while preserving the existing details. In our experimental study, the proposed method is applied to iCAM06, which is based on an image color appearance model. Consequently, our method improves the resulting image in terms of contrast enhancement and halo-artifact reduction.

2 Background

2.1 Intensity transfer curves for tone mapping

A sigmoidal function can describe, with minor differences, the relationship between input and output levels in many tone-mapping methods. Histogram-based tone mappings, which seem to have no connection with the sigmoidal function, also yield functions similar to sigmoidal curves [20, 21]. Furthermore, the sigmoidal function is closely related to a physiological model of the photoreceptor. Valeton and van Norren showed that the response of the photoreceptor to the light intensity, \(I\), can be described using the Michaelis–Menten equation as follows [22]:

$$r\left(I\right)=\frac{{I}^{n}}{{I}^{n}+{\sigma }^{n}},$$
(1)

where \(\sigma \) is associated with the light adaptation, and \(n\) = 0.74 for the rhesus monkey. Because Eq. (1) not only has a sigmoidal shape but also accounts for light adaptation, many tone mappings have adopted it to reproduce images whose sensation is as close as possible to the perception of real-world scenes [23, 24]. Figure 1 shows how the two parameters affect the responses in Eq. (1). Changing the \(\sigma \) value shifts the response along the x-axis; this translation decreases the bright regions in an image and increases the dark regions. The \(n\) value, on the other hand, correlates with the slope of the response: higher \(n\) values give steeper slopes.

Fig. 1
figure 1

Responses determined by Eq. (1) for different parameters. (Left) values of \(\sigma \) equal to 0.1, 1, and 10, with fixed \(n\) = 0.6, and (right) values of \(n\) equal to 0.6, 0.8, and 1.0, for \(\sigma \) = 1
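The trends in Fig. 1 can be reproduced numerically by evaluating Eq. (1) on a log-spaced intensity axis for the parameter values listed in the caption; the sketch below is only a quick check.

```python
import numpy as np


def response(I, sigma, n):
    """Michaelis-Menten photoreceptor response, Eq. (1)."""
    return I**n / (I**n + sigma**n)


I = np.logspace(-3, 3, 200)                                  # light intensity axis
shifted = [response(I, s, 0.6) for s in (0.1, 1.0, 10.0)]    # sigma shifts the curve
steeper = [response(I, 1.0, n) for n in (0.6, 0.8, 1.0)]     # n controls the slope
```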

Based on the concept of light adaptation in an image, \(\sigma \) is determined by the local luminance; in contrast, the sensitivity parameter, \(n\), is a user-controllable free parameter within the range of 0.6–2.0 whose value differs between studies and is simply fixed in practice [8, 9, 25]. It is therefore difficult for users to determine the value of \(n\) for each image because, as shown in Fig. 2, the \(n\) value has a significant effect on the resulting image: at high \(n\) values the resulting image has higher contrast but fewer details.

Fig. 2
figure 2

Resulting images at different n values for an HDR image “Memorial Church”

2.2 Halo artifact

The halo artifact, which comprises overshoot and undershoot signals near edges with large gradients, is generally induced by excessive detail enhancement. In a tone-mapping method, decomposing the image into detail and base layers aims to preserve the image detail while a local tone mapping strongly compresses the dynamic range. However, because the relative detail level is increased, the decomposition may induce the halo artifact. In addition, as shown in Fig. 3a, an insufficient edge-preserving ability of the filter used for the decomposition produces blurred (less sharp) edges in the base layer. The detail layer, which is the difference between the original image and the base layer, therefore contains a halo component, and this component is transferred entirely to the resulting image. We call this the incomplete-decomposition-induced halo factor.

Fig. 3
figure 3

Diagrams illustrating two possible scenarios for generation of halo artifacts. a Halo component due to the incomplete decomposition by the insufficient edge-preserving ability is preserved in the detail layer and b a local-tone mapping operation affected by the surroundings (local intensity level) generates the halo artifact

On the other hand, a local tone mapping itself gives rise to another halo factor. For spatially variant dynamic range compression, a local tone mapping generally needs a surround image, such as a Gaussian-blurred image, that approximates the local level information. In the case of the bright region in Fig. 3b, the intensity level of the surround image decreases near the edge, and the lowered level shifts the sigmoidal transfer curve to the left, as shown in Fig. 1 (left); note that in Eq. (1), \(\sigma \) assumes the role of the surround image. High-level pixels with high surround levels are strongly compressed, whereas those with lowered surround levels near the edge are compressed less because they lie in the saturation region of the shifted sigmoidal transfer curve. Consequently, the compressed base layer contains the halo artifact. We call this the surround-induced halo factor.
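The surround-induced factor can be illustrated on a synthetic step edge: if a Gaussian-blurred copy of the signal plays the role of \(\sigma\) in Eq. (1), the curve shifts near the edge and the bright pixels there are compressed less. The signal, blur width, and exponent below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

signal = np.where(np.arange(512) < 256, 100.0, 1.0)  # bright/dark step edge
surround = gaussian_filter1d(signal, sigma=20)       # local level (surround image)

n = 0.74
compressed = signal**n / (signal**n + surround**n)   # Eq. (1) with sigma = surround
# Near the edge the lowered surround shifts the curve left, so bright pixels
# just inside the edge are compressed less than those far from it: a halo.
```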

2.3 Lightness estimation

Lightness is defined as “the brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting” [26]. This means that although the physical luminance of an object differs according to where it is measured, for example in the office or outdoors, its lightness is invariant. CIECAM02, the color appearance model for predicting color appearance attributes under a variety of viewing conditions, defines the lightness, \(J\), using the achromatic responses for the stimulus, \(A\), and for white, \({A}_{\mathrm{W}}\), as follows:

$$J=100(A/{A}_{\mathrm{W}}{)}^{cz},$$
(2)

where the product of the two exponents, \(c\) and \(z\), is approximately 1.33 in the viewing condition, known as average surround [26].

Additionally, the achromatic responses are defined as follows:

$$A=2{{R}^{^{\prime}}}_{\mathrm{a}}+{{G}^{^{\prime}}}_{\mathrm{a}}+0.05{{B}^{^{\prime}}}_{\mathrm{a}}-0.305,$$
(3)
$${{S}^{^{\prime}}}_{\mathrm{a}}=\frac{400({F}_{\mathrm{L}}{S}^{^{\prime}}/100{)}^{0.42}}{27.13+({F}_{\mathrm{L}}{S}^{^{\prime}}/100{)}^{0.42}}+0.1 \, \left(S=R,G,B\right),$$
(4)
$${F}_{\mathrm{L}}=0.2{k}^{4}(5{L}_{\mathrm{A}})+0.1(1-{k}^{4}{)}^{2}(5{L}_{\mathrm{A}}{)}^\frac{1}{3},$$
(5)
$$k=\frac{1}{5{L}_{\mathrm{A}}+1},$$
(6)

where \({S}_{\mathrm{a}}^{^{\prime}}\) is the compressed adapted cone response of the stimulus, \({S}^{^{\prime}}\), and \({L}_{\mathrm{A}}\) is the adapting luminance in \(\mathrm{c}\mathrm{d}/{\mathrm{m}}^{2}\). Note that for calculating \({A}_{\mathrm{W}}\) based on Eq. (4), \({R}^{^{\prime}}={G}^{^{\prime}}={B}^{^{\prime}}=100\).
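For reference, Eqs. (2)–(6) for an achromatic stimulus (\(R^{\prime}=G^{\prime}=B^{\prime}=i\)) can be written compactly as follows. This is a minimal sketch using the value \(cz\approx 1.33\) quoted above; the function names are illustrative.

```python
import numpy as np


def F_L(L_A):
    """Luminance-level adaptation factor, Eqs. (5)-(6)."""
    k = 1.0 / (5.0 * L_A + 1.0)
    return 0.2 * k**4 * (5 * L_A) + 0.1 * (1 - k**4)**2 * (5 * L_A)**(1 / 3)


def cone_compression(S, L_A):
    """Post-adaptation compression, Eq. (4)."""
    x = (F_L(L_A) * S / 100.0) ** 0.42
    return 400.0 * x / (27.13 + x) + 0.1


def achromatic_response(i, L_A):
    """Eq. (3) for an achromatic stimulus R' = G' = B' = i."""
    Sa = cone_compression(i, L_A)
    return 2 * Sa + Sa + 0.05 * Sa - 0.305


def lightness(i, L_A, cz=1.33):
    """Eq. (2): lightness relative to the white (R' = G' = B' = 100)."""
    return 100.0 * (achromatic_response(i, L_A) / achromatic_response(100.0, L_A)) ** cz
```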

3 Lightness-preserved tone mapping

The adaptation states of the HVS in the real scene and in the office are different. However, a tone mapping compresses the dynamic range of HDR images without considering this adaptation, so it cannot transfer the visual perception of the real scene to an office illuminated with fluorescent lights.

Our idea for reproducing the visual perception of the real scene is to approximate the real lightness of the scene, \({J}_{\mathrm{R}}\), and then find a tone mapping whose indoor lightness for the tone-mapped image, \({J}_{\mathrm{I}}\), is as similar as possible to \({J}_{\mathrm{R}}\). In particular, focusing on the change of viewing condition, we control the sensitivity parameter, \(n\), to optimize the tone mapping, thereby resolving the difficulty of setting this free parameter. The outline of the lightness-preserved tone mapping is shown in Fig. 4.

Fig. 4
figure 4

The flow outline of the process for implementation of lightness-preserved tone mapping

Specifically, we consider achromatic stimuli \(i\) (\({R}^{^{\prime}}={G}^{^{\prime}}={B}^{^{\prime}}=i\)). Thus, \({J}_{\mathrm{R}}\) is calculated using Eqs. (2)–(4) as follows:

$${J}_{\mathrm{R}}=100{\left[{A}_{\mathrm{R}}/{A}_{\mathrm{R},\mathrm{W}}\right]}^{cz}=100{\left[\frac{27.13(i/100{)}^{0.42}+({\left.{F}_{\mathrm{L}}\right|}_{\mathrm{l}\mathrm{o}\mathrm{c}\mathrm{a}\mathrm{l} {L}_{\mathrm{A}}}i/100{)}^{0.42}}{27.13+({\left.{F}_{\mathrm{L}}\right|}_{\mathrm{l}\mathrm{o}\mathrm{c}\mathrm{a}\mathrm{l} {L}_{\mathrm{A}}}i/100{)}^{0.42}}\right]}^{cz},$$
(7)

where the local adapting luminance, \({L}_{\mathrm{A}}\), which is associated with the viewing condition, is approximated by the local level of a low-pass filtered HDR image and therefore varies spatially. This approximation, adopted from iCAM06, implies that the HVS locally adapts to the varying ambient luminance in the real scene [9].
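A hedged sketch of Eq. (7): the local adapting luminance is approximated here by a Gaussian-blurred luminance image, standing in for the low-pass filtered HDR image mentioned above; the blur width is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def F_L(L_A):
    """Luminance-level adaptation factor, Eqs. (5)-(6)."""
    k = 1.0 / (5.0 * L_A + 1.0)
    return 0.2 * k**4 * (5 * L_A) + 0.1 * (1 - k**4)**2 * (5 * L_A)**(1 / 3)


def real_scene_lightness(hdr_luminance, i=20.0, cz=1.33):
    """Eq. (7): J_R with a spatially varying adapting luminance."""
    L_A = gaussian_filter(hdr_luminance, sigma=25)   # assumed low-pass approximation (cd/m^2)
    x = (F_L(L_A) * i / 100.0) ** 0.42
    return 100.0 * ((27.13 * (i / 100.0) ** 0.42 + x) / (27.13 + x)) ** cz
```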

For estimating \({J}_{\mathrm{I}}\), we assume that \({L}_{\mathrm{A}}\) in the monitor viewing condition is about 20% of the maximum display luminance. This assumption is based on the fact that the adapting luminance does not change significantly in the office, so the HVS adapts to a single adapting luminance there, in contrast to the locally varying adapting luminance in the real scene. Accordingly, \({J}_{\mathrm{I}}\) and the achromatic response for white in the office, \({A}_{\mathrm{I},\mathrm{W}}\), are defined as follows:

$${J}_{\mathrm{I}}=100{\left[\frac{{A}_{\mathrm{I}}}{{A}_{\mathrm{I},\mathrm{W}}}\right]}^{cz},$$
(8)
$${A}_{\mathrm{I},\mathrm{W}}=3.05\cdot \frac{400{\left(\frac{{\left.{F}_{\mathrm{L}}\right|}_{{L}_{\mathrm{A}}=60}\cdot 100}{100}\right)}^{0.42}}{27.13+{\left(\frac{{\left.{F}_{\mathrm{L}}\right|}_{{L}_{\mathrm{A}}=60}\cdot 100}{100}\right)}^{0.42}},$$
(9)

where \({A}_{\mathrm{I},\mathrm{W}}\) is derived from Eqs. (3) and (4) for the white stimulus (\({R}^{^{\prime}}={G}^{^{\prime}}={B}^{^{\prime}}=100\)), as noted in Sect. 2.3. For lightness preservation, the desired achromatic response of a tone-mapped image, \({A}_{\mathrm{I}}\), is then derived with an achromatic stimulus, \(i\) (\({R}^{^{\prime}}={G}^{^{\prime}}={B}^{^{\prime}}=i\)), as follows:

$${A}_{\mathrm{I}}=\frac{{A}_{\mathrm{I},\mathrm{W}}{A}_{\mathrm{R}}}{{A}_{\mathrm{R},\mathrm{W}}}\approx 36.85\cdot \frac{27.13(i/100{)}^{0.42}+({\left.{F}_{\mathrm{L}}\right|}_{\mathrm{l}\mathrm{o}\mathrm{c}\mathrm{a}\mathrm{l} {L}_{\mathrm{A}}}i/100{)}^{0.42}}{27.13+({\left.{F}_{\mathrm{L}}\right|}_{\mathrm{l}\mathrm{o}\mathrm{c}\mathrm{a}\mathrm{l} {L}_{\mathrm{A}}}i/100{)}^{0.42}},$$
(10)

where \(i\) is set to a fixed value of 20, and 36.85 is the approximate value of \({A}_{\mathrm{I},\mathrm{W}}\) at \({L}_{\mathrm{A}}=60\). The value of \(i\) implies that the stimuli are at 20% of the white level, and \({A}_{\mathrm{I}}\) is a function of the adapting luminance level. Finally, the desired sensitivity parameter, \({n}_{\mathrm{d}}\), which is a function of \({L}_{\mathrm{A}}\), is obtained as follows:

$${n}_{\mathrm{d}}\left({L}_{\mathrm{A}}\right)=\mathrm{arg}\underset{n}{\mathrm{m}\mathrm{i}\mathrm{n}}\left|{A}_{\mathrm{T}}\left(n,{L}_{\mathrm{A}}\right)-{A}_{\mathrm{I}}\left({L}_{\mathrm{A}}\right)\right|,$$
(11)

where \({A}_{\mathrm{T}}\) indicates the achromatic responses of a tone-mapped image. The block diagram for the lightness preservation is shown in Fig. 5a.
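Equations (10) and (11) can be sketched as a target response plus a simple grid search over \(n\); the candidate range 0.6–2.0 follows Sect. 2.1, and \(A_{\mathrm{T}}\) is assumed to be supplied by whichever tone-mapping operator is under test.

```python
import numpy as np


def F_L(L_A):
    """Luminance-level adaptation factor, Eqs. (5)-(6)."""
    k = 1.0 / (5.0 * L_A + 1.0)
    return 0.2 * k**4 * (5 * L_A) + 0.1 * (1 - k**4)**2 * (5 * L_A)**(1 / 3)


def target_response(L_A, i=20.0):
    """Eq. (10): desired achromatic response A_I for lightness preservation."""
    x = (F_L(L_A) * i / 100.0) ** 0.42
    return 36.85 * (27.13 * (i / 100.0) ** 0.42 + x) / (27.13 + x)


def desired_sensitivity(A_T, L_A, n_grid=np.linspace(0.6, 2.0, 141)):
    """Eq. (11): choose the n whose tone-mapped achromatic response A_T(n, L_A)
    is closest to the lightness-preserving target A_I(L_A)."""
    A_I = target_response(L_A)
    errors = [abs(A_T(n, L_A) - A_I) for n in n_grid]
    return n_grid[int(np.argmin(errors))]
```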

Fig. 5
figure 5

Block diagram depicting the implementation of the proposed methods

To demonstrate how to find the desired sensitivity parameter using a tone-mapped image, we choose the tone-mapping method iCAM06. It adopts a sigmoidal transfer function similar to Eq. (1) and has two advantages: (1) \({A}_{\mathrm{T}}\) is easily calculated because iCAM06 uses the same color space as CIECAM02, and (2) \({L}_{\mathrm{A}}\) can be approximated because the absolute luminance level in \(\mathrm{c}\mathrm{d}/{\mathrm{m}}^{2}\) is one of the input parameters. The sigmoidal transfer function in iCAM06 is expressed as follows:

$${{S}^{^{\prime}}}_{\mathrm{a}}=\frac{400({F}_{\mathrm{L}}{S}^{^{\prime}}/{Y}_{\mathrm{W}}{)}^{{n}_{i}}}{27.13+({F}_{\mathrm{L}}{S}^{^{\prime}}/{Y}_{\mathrm{W}}{)}^{{n}_{i}}}+0.1 \, \left(S=R,G,B\right),$$
(12)

where \({Y}_{\mathrm{W}}\) is the level of a Gaussian-blurred \(Y\) image in the XYZ color space, which represents the white [3]; the other parameters are the same as in Eq. (4). To estimate the sensitivity parameter for iCAM06, \({n}_{i}\), the achromatic response of a tone-mapped image in iCAM06, \({A}_{\mathrm{T},\mathrm{i}\mathrm{C}\mathrm{A}\mathrm{M}06}\), is first derived for an achromatic stimulus, \(i\), from Eqs. (3) and (12):

$${A}_{\mathrm{T},\mathrm{i}\mathrm{C}\mathrm{A}\mathrm{M}06}=3.05\cdot \frac{400({F}_{\mathrm{L}}i/{Y}_{\mathrm{W}}{)}^{{n}_{i}}}{27.13+({F}_{\mathrm{L}}i/{Y}_{\mathrm{W}}{)}^{{n}_{i}}},$$
(13)

where the ratio of \(i\) to \({Y}_{\mathrm{W}}\) is 0.2, as mentioned for Eq. (10). Subsequently, from Eqs. (10), (11), and (13), \({n}_{i}\) is derived as

$${n}_{i}=\mathrm{log}\left(\frac{27.13{A}_{I}}{1220-{A}_{I}}\right)/\mathrm{log}\left(0.2{F}_{\mathrm{L}}\right).$$
(14)

However, this expression is too complicated to be applied directly in the sigmoidal transfer function. We therefore sample \({n}_{i}\) over \({L}_{\mathrm{A}}\) and fit the samples with the simpler form

$${n}_{i}\approx 0.9644({L}_{\mathrm{A}}/4000{)}^{0.6515}+0.3845.$$
(15)

The sampled data and fitted curve of Eq. (15) are shown in Fig. 6.

Fig. 6
figure 6

Sampled \({n}_{i}\) data (black solid circles) and fitted curve using the data (red line) (color figure online)
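For a quick consistency check, Eq. (14) and the fit in Eq. (15) can be evaluated over a range of adapting luminances; the sketch below is illustrative and the sampling range is an assumption.

```python
import numpy as np


def n_i_exact(L_A, i=20.0):
    """Eq. (14): closed-form sensitivity parameter for the iCAM06 sigmoid."""
    k = 1.0 / (5.0 * L_A + 1.0)
    F_L = 0.2 * k**4 * (5 * L_A) + 0.1 * (1 - k**4)**2 * (5 * L_A)**(1 / 3)  # Eqs. (5)-(6)
    x = (F_L * i / 100.0) ** 0.42
    A_I = 36.85 * (27.13 * (i / 100.0) ** 0.42 + x) / (27.13 + x)            # Eq. (10)
    return np.log(27.13 * A_I / (1220.0 - A_I)) / np.log(0.2 * F_L)


def n_i_fitted(L_A):
    """Eq. (15): simplified power-law fit to the sampled n_i values."""
    return 0.9644 * (L_A / 4000.0) ** 0.6515 + 0.3845


L_A = np.linspace(10.0, 4000.0, 50)
max_gap = float(np.max(np.abs(n_i_exact(L_A) - n_i_fitted(L_A))))  # fit-quality check
```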

4 Anti-halo image composition

The incomplete-decomposition-induced halo factor is common in tone-mapping methods that use image decomposition. To reduce the halo artifact, we propose an anti-halo image composition that uses the tone-compressed images obtained with and without the decomposition. Because the image with the decomposition preserves the details but may contain the halo artifact, whereas the image without the decomposition contains fewer details and no halo artifact, a complementary composition of the two images reduces the halo artifact in the resulting image.

The basic form of our composition using the tone-compressed images with the decomposition, \({I}_{\mathrm{d}}\), and without the decomposition, \({I}_{\mathrm{d}\mathrm{n}}\), is defined as

$$O=f{I}_{\mathrm{d}}+\left(1-f\right){I}_{\mathrm{d}\mathrm{n}},$$
(16)

where \(f\) is a fusion parameter that controls the degree of detail and of the halo artifact in the resulting image, \(O\). We assume that a large difference between the two images indicates the presence of the halo artifact caused by the incomplete-decomposition-induced halo factor, as shown in Fig. 3a. In the halo region, the gradients of the result must therefore be close to those of the image without the decomposition, so \(f\) should be a function of the difference, \(D\), between the two images. To determine \(f\), we take the derivative of Eq. (16) with respect to \(D\):

$$\frac{\mathrm{d}O}{\mathrm{d}D}=\frac{\mathrm{d}f}{\mathrm{d}D}{I}_{\mathrm{d}}+f\frac{\mathrm{d}{I}_{\mathrm{d}}}{\mathrm{d}D}-\frac{\mathrm{d}f}{\mathrm{d}D}{I}_{\mathrm{d}\mathrm{n}}+\left(1-f\right)\frac{\mathrm{d}{I}_{\mathrm{d}\mathrm{n}}}{\mathrm{d}D},$$
(17)

where \(D\) is equal to \({I}_{\mathrm{d}}-{I}_{\mathrm{d}\mathrm{n}}\). For large \(D\) values, the halo phenomenon decreases as \(\mathrm{d}O/\mathrm{d}D\) approaches \(\mathrm{d}{I}_{\mathrm{d}\mathrm{n}}/\mathrm{d}D\), in accordance with our assumption that the gradient of the compensated \(O\) should be close to that of \({I}_{\mathrm{d}\mathrm{n}}\) without the decomposition. Therefore,

$$\frac{\mathrm{d}{I}_{\mathrm{d}\mathrm{n}}}{\mathrm{d}D}=\frac{\mathrm{d}f}{\mathrm{d}D}{I}_{\mathrm{d}}+f\frac{\mathrm{d}{I}_{\mathrm{d}}}{\mathrm{d}D}-\frac{\mathrm{d}f}{\mathrm{d}D}{I}_{\mathrm{d}\mathrm{n}}+\left(1-f\right)\frac{\mathrm{d}{I}_{\mathrm{d}n}}{\mathrm{d}D},$$
(18)
$$\frac{\mathrm{d}f}{\mathrm{d}D}({I}_{\mathrm{d}}-{I}_{\mathrm{d}\mathrm{n}})+f\left(\frac{\mathrm{d}{I}_{\mathrm{d}}}{\mathrm{d}D}-\frac{\mathrm{d}{I}_{\mathrm{d}\mathrm{n}}}{\mathrm{d}D}\right)=0,$$
(19)
$$\frac{\mathrm{d}f}{\mathrm{d}D}D+f=0.$$
(20)

Finally, the solution of this differential equation is

$$f=\frac{c}{D},$$
(21)

where \(c\) is a constant detail margin that represents the criterion for detail preservation. In other words, the region where \(D\) is higher than \(c\) is regarded as the region where the halo artifact exists, while the region where \(D\) is lower than \(c\) is a component preserved for detail in the resulting image. The value of \(c\) is fixed to 1.5. This value is related to the visual masking effect, whereby the visibility threshold of a test stimulus changes when it is viewed in the vicinity of large visible changes in luminance [27]. On an edge with a height of 13 levels within [0, 50], which is the dynamic range of the tone-compressed image in Fig. 8, a luminance difference below the margin, \(c=1.5\), is not perceived because of the visual masking effect.

However, \(f\) becomes infinite when \(D\) equals 0. Therefore, we approximate the solution using a rational function under the constraint that \(f\) equals 1 in the region where \(D\le c\). This constraint means that when \(f=1\), the resulting image is analogous to \({I}_{\mathrm{d}}\) for detail preservation, while in the halo region, where \(f=c/D\) for \(D>c\), the image is analogous to \({I}_{\mathrm{d}\mathrm{n}}\) so that the halo artifact is reduced. The final parameter \({f}_{\mathrm{p}}\) is defined as follows:

$${f}_{\mathrm{p}}\left(D\right)=\frac{2.58+{D}^{0.74}}{2.58+{D}^{1.74}}.$$
(22)

As shown in Fig. 7, the value of \({f}_{\mathrm{p}}\) within the range \(D=[0, 1.5]\) is approximately equal to 1, and it is close to \(f\) in the region where \(D>1.5\). Consequently, regions with small \(D\) values are dominated by \({I}_{\mathrm{d}}\), while regions with larger \(D\) values are dominated by \({I}_{\mathrm{d}\mathrm{n}}\). The block diagram for the halo-artifact reduction is shown in Fig. 5b.

Fig. 7
figure 7

The \(f\) curve depicted in black follows Eq. (21) for \(c=1.5\), and the curve in red follows Eq. (22) for \({f}_{\mathrm{p}}\) (color figure online)

To demonstrate the halo-artifact reduction achieved by our method, four images are shown in Fig. 8: the tone-compressed images with and without the decomposition, \({I}_{\mathrm{d}}\) and \({I}_{\mathrm{d}\mathrm{n}}\), respectively, the fused image, \(O\), and the fusion parameter, \({f}_{\mathrm{p}}\). As shown in Fig. 8, the halo region in \({I}_{\mathrm{d}}\) is clearly captured in \({f}_{\mathrm{p}}\) as the black region, and the fused image exhibits neither the halo artifact nor a loss of detail. In addition, to confirm that our method performs well, enlarged images and plots of the \(Y\) levels along the white scan lines are shown in Fig. 9. With a detail margin of \(c=1.5\), the halo artifact is reduced in the fused image while the loss of detail is simultaneously minimized. Note that, to preserve highlight spots, such as glare spots with a large difference, \({f}_{\mathrm{p}}\) is refined by applying a median filter with a 3 × 3 mask.
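A minimal sketch of the anti-halo composition, combining Eqs. (16) and (22) with the 3 × 3 median refinement mentioned above; taking the absolute value of \(D\) (so that the fractional powers stay real) and the scipy median filter are implementation assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter


def anti_halo_fusion(I_d, I_dn):
    """Fuse the tone-compressed images with (I_d) and without (I_dn) decomposition."""
    D = np.abs(I_d - I_dn)                       # level difference as halo indicator (abs assumed)
    f_p = (2.58 + D**0.74) / (2.58 + D**1.74)    # Eq. (22): ~1 for D <= 1.5, ~c/D beyond
    f_p = median_filter(f_p, size=3)             # 3 x 3 refinement to preserve small glare spots
    return f_p * I_d + (1.0 - f_p) * I_dn        # Eq. (16)
```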

Fig. 8
figure 8

Image examples indicating the halo-artifact reduction (\({I}_{\mathrm{d}}\): iCAM06: proposed)

Fig. 9
figure 9

Subimages and level curves for the image examples in Fig. 8

5 Simulation

In this section, we present evidence of the performance of the proposed methods. We evaluated the overall image quality using the block diagram depicted in Fig. 5. The tone-processing base of iCAM06 is adopted as a test bed because, in addition to the aforementioned advantages for the lightness-preserved tone mapping, it contains an image decomposition process using a fast bilateral filter. First, to compare the result images in terms of the halo artifact, the results of iCAM06 and of the proposed method are shown in Fig. 10. The halo artifact is easily seen around the edges with the iCAM06 method, whereas with the proposed method it is invisible owing to the anti-halo image composition. For a clearer comparison, gradient-map images are used. The proposed method is effective for halo removal in the border region between the building and the background, and in the sky region where the luminance difference around the tree is large.
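The gradient maps used for this comparison can be produced with a simple finite-difference operator, as sketched below; this is a generic sketch rather than the exact visualization used for Fig. 10.

```python
import numpy as np


def gradient_magnitude(luminance):
    """Per-pixel gradient magnitude; halo artifacts show up as bright ridges
    alongside the true edges in this map."""
    gy, gx = np.gradient(luminance.astype(np.float64))
    return np.hypot(gx, gy)
```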

Fig. 10
figure 10

Result images for the comparison of halo artifact using a gradient map

Second, in terms of overall image quality, the proposed method is compared with other tone mappings: photographic tone reproduction (PTR) [2], retinex-based tone mapping (RTM) [28], calibrated image appearance reproduction (CIAR) [29], the hybrid L1–L0 layer decomposition model (L1–L0) [30], the linear scale model (LS) [31], and iCAM06. Several resulting images are shown in Figs. 11, 12 and 13. As seen in the test images, the images generated using the proposed method have a better color appearance in the bright areas of the results, such as the lighting areas in Fig. 11, the outdoor scene in Fig. 12, and the color checker in Fig. 13. Moreover, the overall contrast and detail rendering are enhanced where relatively large tone changes occur in the dim areas of the input image. To compare the color reproduction performance of the bright regions, the results of each method are compared with the original HDR image without tone processing. The proposed method shows fairly good performance in terms of color reproduction and detail representation, and the result images are more natural because of the preservation of lightness.

Fig. 11
figure 11

Resulting images for Las Vegas Store image

Fig. 12
figure 12

Resulting images for 507 Motor Show image

Fig. 13
figure 13

Resulting images for Lab Booth image

For an objective assessment of image quality, the resulting images are measured using three indices, namely NR-CDIQA, TMQI_S, and chroma difference, for ten test images; the source HDR files are courtesy of “Mark Fairchild’s HDR Photographic Survey” [32]. First, NR-CDIQA is a no-reference contrast distortion metric [33]. This metric is based on natural scene statistical features of the image intensity, such as the mean, standard deviation, skewness, and kurtosis, so the NR-CDIQA score represents the contrast distortion in terms of how natural the image is. Note that the CID2013 database is used for training [34]. Second, in contrast to the contrast metric, TMQI_S is a reference structural fidelity metric [35]; its score measures the structural distortion between the tone-mapped image and the reference HDR image. In addition, the chroma difference in the CIELAB color space is used for a numerical comparison of the color appearance among the result images.

The resulting scores of the three indices are shown in Fig. 14. Although some of the scores of the proposed method are the lowest, on average the performance of the proposed method is better than that of the others. The fidelity measurement in TMQI_S uses local standard deviations as the structural information: when the structural information of the tone-mapped image is close to that of the input HDR image, the fidelity score is higher. In particular, the PTR method did not reproduce the low-tone regions of the 507 Motor Show (Fig. 12) and Lab Booth (Fig. 13) images well; that is, the worse the global tone compression, the better the score. Therefore, an image-quality comparison using a reference fidelity measurement should be used to compare the detail-rendering performance of results that show similar global tone compression. For the 24 color patches of the color checker in the Lab Booth image (Fig. 13), the relative chroma difference scores are calculated as the Euclidean distance on the a–b plane between the result image and the untoned reference image. The score shows how well each method preserves chroma information after tone processing. On average, the accuracy of color reproduction relative to the reference image was best for the proposed method.
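The chroma-difference score reduces to a Euclidean distance on the a*–b* plane; a minimal sketch for a set of sampled color-checker patches, assuming both images have already been converted to CIELAB, is given below.

```python
import numpy as np


def chroma_difference(lab_result, lab_reference):
    """Mean Euclidean distance on the a*-b* plane over color-checker patches.
    Both inputs are (N, 3) arrays of CIELAB values for the N sampled patches."""
    diff_ab = lab_result[:, 1:3] - lab_reference[:, 1:3]   # ignore L*, compare a* and b*
    return float(np.mean(np.linalg.norm(diff_ab, axis=1)))
```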

Fig. 14
figure 14

Metric scores, including NR-CDIQA, TMQI_S, and Chroma difference

The key observation from the resulting scores is that the performance of the proposed method is significantly better than that of iCAM06, even though it is built on iCAM06. This indicates that the proposed methods, lightness-preserved tone mapping and halo-artifact reduction, can improve the performance of any local tone mapping that uses image decomposition.

6 Conclusions

Although a large number of studies on HDR imaging exist, local tone-mapping methods have suffered from the free-parameter and halo-artifact problems. In addition, the significant change in viewing conditions when one watches the image is usually not considered, so a tone-mapped image fails to transfer the sensation of the real scene. In this paper, we proposed a tone reproduction method comprising lightness-preserved tone mapping and halo-artifact reduction. First, the lightness-preserved tone mapping reproduces the visual perception of the real scene in the tone-mapped image on an indoor display; in particular, the matching is implemented by an adaptive sensitivity parameter that decreases the number of free parameters in a local tone mapping. Second, the halo-artifact reduction using anti-halo image composition fuses the two images obtained with and without image decomposition according to their difference, which indicates the halo region. The simulation with the proposed method embedded in iCAM06, compared with the other tone mappings, shows that the proposed methods improve the performance of a local tone mapping in terms of image quality.