Abstract
Tone-mapping algorithms are used in high-dynamic-range (HDR) image reproduction to reduce the dynamic range of a real scene so that it can be displayed on low-dynamic-range devices. General tone-mapping methods suffer from two problems: the determination of local parameters for tone compression, and the halo artifact associated with detail enhancement. This paper proposes a lightness-preserved tone-mapping method and a halo-artifact reduction method based on the property of visual achromatic responses. To reproduce the visual perception of the real scene on displays, the sensitivity parameter of the sigmoidal transfer curves is optimized for lightness-preserved tone mapping, and an anti-halo image decomposition method is applied for halo-artifact reduction. The results confirm that the proposed method effectively improves tone-mapping performance.
1 Introduction
Recently, the demand for a new digital video format beyond full high definition (HD) has led to the development of ultra HD with ≥ 4K resolution, wide color gamut (WCG), and high dynamic range (HDR). In particular, HDR processing is attracting interest in industries associated with image displays, and compression standards for HDR images, such as the JPEG XT standard, have been defined [1]. However, using more data bits alone is not enough to represent the real scene. Although ultra HD supports a higher peak luminance, there remains an absolute luminance difference between the actual and display environments, which produces a disparity in the human visual system (HVS). Because of this luminance difference, one can perceive differences between the displayed and real scenes.
HDR image reproduction consists of HDR image (or radiance map) construction and tone mapping. First, an HDR image is constructed from multi-exposed low-dynamic-range (LDR) images using a camera response function [2,3,4]. Then, a single LDR image including as much of the luminance information as possible, an HDR-like LDR image, is produced from the HDR image using tone-mapping operators (TMOs). Because an HDR image cannot be displayed on general display devices that do not support an HDR format, it is necessary to convert the HDR image to an LDR image. TMOs are categorized as global and local operators. Global operators use a spatially invariant function, so computation is fast [5,6,7]; local operators employ a spatially variant function that depends on adjacent pixel levels [8,9,10,11,12,13]. HDR imaging produces a resulting image that captures objects in both bright and dark regions. Therefore, to improve object identification in a variety of scenes, recent HDR imaging applications adopt local tone-mapping operators. These tone-mapping methods for HDR images address the luminance difference problem. In general, tone-mapping methods increase the intensity of dark areas in the image whose luminance is too low to be perceived, and reduce the intensity of bright areas to avoid saturation. Reinhard et al., who adopted photographic techniques to develop their tone-mapping algorithm, compressed the scaled luminance with approximated key values using simple sigmoidal transfer curves [6]. Sigmoidal transfer curves are the representative operators in tone mapping because the electrophysiological model for the response of rods and cones in the HVS is described by these curves [14,15,16]. However, many sigmoidal transfer curves have been defined by empirical parameters. Therefore, it is difficult to determine the parameters that reproduce the resulting images at optimal levels.
In addition, for the brightness of displayed images to be perceived as identical or similar to the brightness of the real scene, psychophysical manipulation of the parameters is needed.
Compression of the dynamic range gives rise to artifacts and leads to image degradation. In the case of local tone mapping, which highly compresses the dynamic range, degradation such as the loss or exaggeration of detail is the most common problem. To compensate for the loss of detail, Durand and Dorsey used an accelerated bilateral filter for the display of HDR images [7]. The HDR image is decomposed into base and detail layers; the range of the base layer is compressed by a scale factor in the log domain, while the magnitude of the detail layer is left unchanged, so detail is not lost. This decomposition method preserves the details [17, 18]. However, the level gap between the compressed base layer and the preserved detail layer results in the halo artifact, which appears as exaggerated detail near edges.
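The base/detail decomposition described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: a separable Gaussian blur stands in for the accelerated bilateral filter, and the scale factor of 0.5 is a hypothetical value.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur (a stand-in for the fast bilateral filter)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode="edge")
    # Convolve rows, then columns.
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def decompose_and_compress(luminance, sigma=2.0, scale=0.5):
    """Split log luminance into base + detail; compress only the base layer."""
    log_lum = np.log10(np.maximum(luminance, 1e-6))
    base = gaussian_blur(log_lum, sigma)    # low-frequency (base) layer
    detail = log_lum - base                 # high-frequency (detail) layer, kept intact
    compressed = scale * base + detail      # compress base, preserve detail magnitude
    return 10.0 ** compressed
```

Because only the base layer is scaled, the detail layer survives the compression unchanged; the halo artifact discussed above arises from the gap this creates near strong edges.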
In this paper, we propose lightness-preserved tone mapping and halo-artifact reduction using anti-halo image composition based on the visual achromatic response. A "lightness-preserved mapping" reproduces the lightness of a displayed image corresponding to that of a real scene. In this approach, we first clarify the difference between the color appearance attributes of a displayed image and a real scene using CIECAM02 [19]. Based on the concept of lightness preservation, a previously empirically set parameter is refined. Second, we show that the level difference between tone-compressed images with and without detail-preserving decomposition can be used as the criterion for detecting the halo artifact, and the two images are fused according to this level difference in order to reduce the artifact. The proposed method presents a tone-mapping function that reflects indoor viewing adaptation, that is, the degree of visual adaptation according to the surround or display brightness. In addition, the proposed halo compensation is applied to the detail-decomposition step to suppress the relatively large halo while preserving the existing details. In our experimental study, the proposed method is applied to iCAM06, which is based on an image color appearance model. Consequently, our method improves the resulting image in terms of contrast enhancement and halo-artifact reduction.
2 Background
2.1 Intensity transfer curves for tone mapping
A sigmoidal function can describe the relationship between input and output levels in many tone-mapping methods, with minor differences. Histogram-based tone mappings, which seem to have no connection with the sigmoidal function, also yield functions similar to sigmoidal curves [20, 21]. Furthermore, the sigmoidal function exhibits a close relation to a physiological model of the photoreceptor. Valeton and van Norren showed that the response of the photoreceptor to the light intensity, \(I\), can be described using the Michaelis–Menten equation as follows [22]:

$$\frac{V}{{V}_{\mathrm{max}}}=\frac{{I}^{n}}{{I}^{n}+{\sigma }^{n}},$$
(1)

where \(V/{V}_{\mathrm{max}}\) is the normalized photoreceptor response, \(\sigma \) is associated with the light adaptation, and \(n\) = 0.74 for the rhesus monkey. Because Eq. (1) not only has a sigmoidal shape but also takes into account the light adaptation, many tone mappings have adopted the equation to reproduce images with a sensation as close as possible to the perception of real-world scenes [23, 24]. Figure 1 shows how the two parameters affect the response in Eq. (1). Changing the \(\sigma \) value shifts the response along the x-axis; this translation lowers the response in the bright region of an image and raises it in the dark region. The \(n\) value, on the other hand, correlates with the slope of the response: higher \(n\) values yield steeper slopes.
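The behavior of Eq. (1) can be sketched in a few lines; the function below is a direct transcription of the Michaelis–Menten form, with \(\sigma\) and \(n\) playing the roles described above.

```python
import numpy as np

def photoreceptor_response(intensity, sigma, n=0.74):
    """Michaelis–Menten-type response of Eq. (1): V/V_max = I^n / (I^n + sigma^n).
    sigma sets the adaptation level (horizontal shift); n sets the slope."""
    i_n = np.power(intensity, n)
    return i_n / (i_n + np.power(sigma, n))
```

Note that the half-response point always falls at \(I=\sigma\) regardless of \(n\), which is why \(\sigma\) acts as a pure horizontal shift while \(n\) only steepens the curve.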
Based on the concept of light adaptation in an image, \(\sigma \) is determined by the local luminance; by contrast, the sensitivity parameter, \(n\), a user-controllable free parameter within the range 0.6–2.0, is simply fixed to a different value in each study [8, 9, 25]. It is therefore difficult for users to determine the value of \(n\) for each image because, as shown in Fig. 2, a different \(n\) value has a significant effect on the resulting image: the image has higher contrast but fewer details at high \(n\) values.
2.2 Halo artifact
The halo artifact, which comprises overshoot and undershoot signals near edges with large gradients, is generally induced by immoderate detail enhancement. In a tone-mapping method, decomposing the image into detail and base layers aims to preserve image detail while a local tone mapping highly compresses the dynamic range. However, because the relative detail level is increased, the decomposition may induce the halo artifact. In addition, as shown in Fig. 3a, insufficient edge-preserving ability of the filter used for image decomposition produces blurred edges (less sharp intensities) in the base layer. The detail layer, which is the difference between the original image and the base layer, then contains the halo component, and this component is transferred entirely to the resulting image. We call this the incomplete-decomposition-induced halo factor.
On the other hand, local tone mapping itself gives rise to another halo factor. Generally, for spatially variant dynamic range compression, a local tone mapping needs a surround image, such as a Gaussian-blurred image, which approximates the local level information. In the case of the bright region in Fig. 3b, the intensity level of the surround image decreases near the edges, and the lowered intensity level shifts the sigmoidal transfer curve to the left, as shown in Fig. 1 (left). Note that, in Eq. (1), \(\sigma \) assumes the role of the surround image. High-level pixels with high surround levels are highly compressed, whereas those with lowered surround levels near the edge are less compressed because they fall in the saturation region of the shifted sigmoidal transfer curve. Consequently, the compressed base layer contains the halo artifact. We call this the surround-induced halo factor.
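The surround-induced halo factor can be reproduced on a one-dimensional step edge. This is an illustrative sketch: the choice of \(n=1\) and a Gaussian surround are assumptions, not the paper's exact operator.

```python
import numpy as np

def gaussian_blur_1d(signal, sigma=3.0):
    """1-D Gaussian blur with edge padding, used as the surround estimate."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(signal, radius, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

# Step edge: dark region (1 cd/m^2) next to a bright region (100 cd/m^2).
scene = np.concatenate([np.full(20, 1.0), np.full(20, 100.0)])
surround = gaussian_blur_1d(scene)        # local adaptation level (sigma in Eq. 1)
compressed = scene / (scene + surround)   # sigmoidal compression with n = 1

# Near the edge on the bright side, the lowered surround shifts the transfer
# curve left, so those pixels are compressed less than pixels deep inside the
# bright region -- an overshoot, i.e. the surround-induced halo.
```

Inspecting `compressed` shows the response just inside the bright region exceeding the 0.5 level reached deep in its interior, which is exactly the overshoot described above.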
2.3 Lightness estimation
Lightness is defined as "the brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting" [26]. This means that although the physical luminance of an object differs according to where the object is measured, for example in the office or outdoors, lightness is invariant. CIECAM02, the color appearance model for predicting color appearance attributes under a variety of viewing conditions, defines the lightness, \(J\), using the achromatic responses for the stimuli, \(A\), and for white, \({A}_{\mathrm{W}}\), respectively, as follows:

$$J=100{\left(\frac{A}{{A}_{\mathrm{W}}}\right)}^{cz},$$
(2)

where the product of the two exponents, \(c\) and \(z\), is approximately 1.33 in the viewing condition known as average surround [26].
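A minimal sketch of the lightness formula, assuming the CIECAM02 average-surround values \(c=0.69\) and \(z\approx 1.93\) (so that \(cz\approx 1.33\)); these constants are the standard average-surround parameters, not values stated in this paper:

```python
def ciecam02_lightness(A, A_w, c=0.69, z=1.9272):
    """CIECAM02 lightness J = 100 * (A / A_w)^(c*z), Eq. (2).
    c = 0.69 (average surround) and z = 1.9272 are assumed; c*z ~= 1.33."""
    return 100.0 * (A / A_w) ** (c * z)
```

By construction, a stimulus whose achromatic response equals that of white has \(J=100\), and \(J\) falls monotonically as \(A\) decreases.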
Additionally, the achromatic responses are defined as follows:
where \({S}_{\mathrm{a}}^{^{\prime}}\) is the compression of the adapted cone response of the stimuli, \({S}^{^{\prime}}\), and \({L}_{\mathrm{A}}\) is the adapting luminance in \(\mathrm{c}\mathrm{d}/{\mathrm{m}}^{2}\). Note that for calculating \({A}_{\mathrm{W}}\) based on Eq. (4), \({R}^{^{\prime}}={G}^{^{\prime}}={B}^{^{\prime}}=100\).
3 Lightness-preserved tone mapping
The adaptation states of the HVS in the real scene and in the office are different. However, a tone mapping compresses the dynamic range of HDR images without considering this adaptation. Such careless compression cannot transfer the visual perception of the real scene to that of an office illuminated with fluorescent lights.
Our idea for the reproduction of the visual perception in the real scene is to approximate the real lightness of a scene, \({J}_{\mathrm{R}}\), and then find an adequate tone mapping producing the indoor lightness by the tone-mapped image, \({J}_{\mathrm{I}}\), as similar as possible to \({J}_{\mathrm{R}}\). In particular, with a focus on the change of the viewing condition, we control a sensitivity parameter, \(n\), for optimizing a tone mapping, and try to solve the difficulty in setting the free parameter. The outline of the lightness-preserved tone mapping is shown in Fig. 4.
Specifically, the achromatic stimuli satisfy \({R}^{^{\prime}}={G}^{^{\prime}}={B}^{^{\prime}}=i\). Thus, \({J}_{\mathrm{R}}\) is calculated using Eqs. (2)–(4) as follows:
where the local adapting luminance, \({L}_{\mathrm{A}}\), which is associated with the viewing condition, is approximated by a local level of a low-pass filtered HDR image and therefore contains spatially variant levels. This approximation, adopted from iCAM06, implies that HVS locally adapts the varying ambient luminance in the real scene [9].
For estimating \({J}_{\mathrm{I}}\), we assume that \({L}_{\mathrm{A}}\) in the monitor viewing condition is about 20% of the maximum display luminance. This assumption is based on the fact that the adapting luminance does not change significantly in the office; the HVS therefore adapts to a single adapting luminance there, in contrast to the locally varying adapting luminance in the real scene. Accordingly, \({J}_{\mathrm{I}}\) and the achromatic response for white in the office, \({A}_{\mathrm{I},\mathrm{W}}\), are defined as follows:
where \({A}_{\mathrm{I},\mathrm{W}}\) is derived from Eqs. (3) and (4) with achromatic stimuli, \(i ({R}^{^{\prime}}={G}^{^{\prime}}={B}^{^{\prime}}=i)\). For lightness preservation, the desired achromatic response of a tone-mapped image, \({A}_{\mathrm{I}}\), is then derived as follows:
where \(i\) is set to a fixed value of 20, and 36.85 is an approximate value of \({A}_{\mathrm{I},\mathrm{W}}\) with \(i=20\). The value of \(i\) implies that the stimuli have a level equal to 20% of white, and \({A}_{\mathrm{I}}\) is a function of the adapting luminance level. Finally, the desired sensitivity parameter, \({n}_{\mathrm{d}}\), which is a function of \({L}_{\mathrm{A}}\), is obtained as follows:
where \({A}_{\mathrm{T}}\) indicates the achromatic responses of a tone-mapped image. The block diagram for the lightness preservation is shown in Fig. 5a.
To demonstrate how to find the desired sensitivity parameter using a tone-mapped image, we choose the tone-mapping method iCAM06. It adopts a sigmoidal transfer function similar to Eq. (1) and has two advantages: (1) \({A}_{\mathrm{T}}\) is easily calculated because it uses the same color space as CIECAM02, and (2) the approximation of \({L}_{\mathrm{A}}\) is available because the real luminance level in \(\mathrm{c}\mathrm{d}/{\mathrm{m}}^{2}\) is one of the input parameters. The sigmoidal transfer function in iCAM06 is expressed as follows:
where \({Y}_{\mathrm{W}}\) is the level of a Gaussian-blurred \(Y\) image in the XYZ color space, which represents the white color [3]. The other parameters are equal to those of Eq. (4). To estimate the sensitivity parameter for iCAM06, \({n}_{i}\), the achromatic response is first derived using a tone-mapped image in iCAM06, \({A}_{\mathrm{T},\mathrm{iCAM06}}\), with an achromatic stimulus, \(i\), based on Eqs. (3) and (12), as follows:
where the ratio of \(i\) to \({Y}_{\mathrm{W}}\) is 0.2, as mentioned for Eq. (10). Subsequently, from Eqs. (10), (11), and (13), \({n}_{i}\) is derived according to
However, this equation is too complicated to be applied directly to the sigmoidal transfer function. Instead, we sample \({n}_{i}\) and approximate the samples with the following simplified expression:
The sampled data and fitted curve of Eq. (15) are shown in Fig. 6.
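The sampling-and-fitting step can be sketched as follows. The sample values and the choice of a quadratic in \(\log_{10} L_{\mathrm{A}}\) are illustrative assumptions; the paper's actual samples and the fitted form of Eq. (15) are not reproduced here.

```python
import numpy as np

# Hypothetical sampled pairs (adapting luminance L_A in cd/m^2, sampled n_i);
# the paper's actual sample values are not shown in this sketch.
L_A_samples = np.array([1.0, 10.0, 100.0, 1000.0, 10000.0])
n_i_samples = np.array([0.65, 0.72, 0.80, 0.90, 1.02])

# Fit n_i as a low-order polynomial in log10(L_A) by least squares --
# a common way to replace a complicated closed form with a cheap approximation.
coeffs = np.polyfit(np.log10(L_A_samples), n_i_samples, deg=2)

def n_fitted(adapting_luminance):
    """Approximate sensitivity parameter as a smooth function of L_A."""
    return np.polyval(coeffs, np.log10(adapting_luminance))
```

Once fitted, the polynomial can be evaluated per pixel against the local adapting luminance map at negligible cost.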
4 Anti-halo image composition
The incomplete-decomposition-induced halo factor is common in tone-mapping methods that use image decomposition. To reduce the halo artifact, we generate an anti-halo image composition using the tone-compressed images with and without the decomposition. The image with decomposition contains more detail but also the halo artifact, whereas the image without decomposition contains less detail and no halo artifact; therefore, a complementary composition of the two images helps reduce the halo artifact in the resulting image.
The basic form of our composition using the tone-compressed images with the decomposition, \({I}_{\mathrm{d}}\), and without the decomposition, \({I}_{\mathrm{d}\mathrm{n}}\), is defined as
where \(f\) is a fusion parameter that controls the degree of detail and halo artifact in the resulting image, \(O\). We assume that a large difference between the two images may indicate the presence of the halo artifact, as shown in Fig. 3a, due to the incomplete-decomposition-induced halo factor. Additionally, in the halo region, the gradients must be close to those of the image without the decomposition. Therefore, \(f\) should be a function of the difference, \(D\), between the two images. To determine the parameter \(f\), the derivative of Eq. (16) with respect to \(D\) is taken:
where \(D\) is equal to \({I}_{\mathrm{d}}-{I}_{\mathrm{d}\mathrm{n}}\). For large \(D\) values, the halo phenomenon decreases as \(\mathrm{d}O/\mathrm{d}D\) approaches \(\mathrm{d}{I}_{\mathrm{d}\mathrm{n}}/\mathrm{d}D\), in accordance with our assumption that the gradient of the compensated \(O\) should be close to that of \({I}_{\mathrm{d}\mathrm{n}}\) without the decomposition. Therefore,
Finally, the solution of this differential equation is
where \(c\) is a constant detail margin that represents the criterion for detail preservation. In other words, the region where \(D\) is higher than \(c\) is regarded as the region where the halo artifact exists, and the region where \(D\) is lower than \(c\) is preserved as detail in the resulting image. The value of \(c\) is fixed to 1.5. This value correlates with the visual masking effect, in which the visibility threshold of a test stimulus changes when it is viewed in the vicinity of large visible changes in luminance [27]. For an edge of height 13 levels within [0, 50], the dynamic range of the tone-compressed image in Fig. 8, a luminance difference below the margin \(c=1.5\) is not perceived because of the visual masking effect.
However, \(f\) becomes infinite when \(D\) equals 0. Therefore, we approximate the solution using a rational function under the constraint that \(f=1\) within the region where \(D\le c\). With this constraint, when \(f=1\), the resulting image is analogous to \({I}_{\mathrm{d}}\) for detail preservation, and in the halo region, where \(f=c/D\) for \(D>c\), the image is analogous to \({I}_{\mathrm{d}\mathrm{n}}\) for reduction of the halo artifact. The final parameter \({f}_{\mathrm{p}}\) is defined as follows:
As shown in Fig. 7, the value of \({f}_{\mathrm{p}}\) within the range \(D=[0, 1.5]\) is approximately equal to 1, and is similar to the value of \(f\) in the region where \(D>1.5\). Consequently, a region with a small \(D\) value is influenced by \({I}_{\mathrm{d}}\), while regions with larger \(D\) values are influenced by \({I}_{\mathrm{d}\mathrm{n}}\). The block diagram for the halo-artifact reduction is shown in Fig. 5b.
To demonstrate the successful reduction of the halo artifact by our method, four images are shown in Fig. 8: the tone-compressed images with and without the decomposition, \({I}_{\mathrm{d}}\) and \({I}_{\mathrm{d}\mathrm{n}}\), respectively; the fused image, \(O\); and the fusion parameter, \({f}_{\mathrm{p}}\). As shown in Fig. 8, the halo region in \({I}_{\mathrm{d}}\) is well depicted in \({f}_{\mathrm{p}}\) as the black region, and the fused image exhibits neither the halo artifact nor loss of detail. To further verify the performance, enlarged images and plots of the \(Y\) levels along the white scan lines are shown in Fig. 9. With a detail margin of \(c=1.5\), the halo artifact is reduced in the fused image while the loss of detail is simultaneously minimized. Note that, to preserve highlight spots, such as glare spots with a large difference, \({f}_{\mathrm{p}}\) is refined by applying a median filter with a 3 × 3 mask.
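The composition pipeline above can be sketched as follows. The piecewise fusion parameter (1 for \(|D|\le c\), \(c/|D|\) otherwise) is used in place of the paper's rational approximation, whose exact form is not reproduced here; the 3 × 3 median refinement follows the text.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter with edge padding (keeps isolated glare spots)."""
    pad = np.pad(img, 1, mode="edge")
    stack = [pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0)

def anti_halo_fuse(I_d, I_dn, c=1.5):
    """Fuse tone-compressed images with (I_d) and without (I_dn) decomposition:
    f_p = 1 where |D| <= c (keep detail), f_p = c/|D| where |D| > c (suppress halo)."""
    D = I_d - I_dn
    absD = np.abs(D)
    f_p = np.where(absD <= c, 1.0, c / np.maximum(absD, 1e-9))
    f_p = median3x3(f_p)          # refine f_p to preserve small highlight spots
    return I_dn + f_p * D         # equivalent to O = f_p * I_d + (1 - f_p) * I_dn
```

An isolated one-pixel spike in \(D\) survives the median step (its neighbors all have \(f_{\mathrm{p}}=1\)), which is how glare spots are preserved, while an extended halo region is uniformly suppressed toward \({I}_{\mathrm{d}\mathrm{n}}\).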
5 Simulation
In this section, we present evidence of the performance of the proposed methods. We evaluated the overall image quality using the block diagram depicted in Fig. 5. To apply the proposed methods, the tone-processing base of iCAM06 is adopted as a test bed: in addition to the aforementioned advantages for lightness-preserved tone mapping, it contains an image decomposition process using a fast bilateral filter. First, to compare the resulting images in terms of the halo artifact, the results of iCAM06 and the proposed method are shown in Fig. 10. The halo artifact is easily seen around the edges in the iCAM06 result; in the proposed method, however, the halo artifact is rendered invisible by the anti-halo image composition. For a clearer comparison, gradient map images are used. The proposed method is effective at removing the halo in the border region between the building and the background, and in the sky region where the luminance difference around the tree is large.
Second, in terms of overall image quality, the proposed method is compared with other tone mappings: photographic tone reproduction (PTR) [6], retinex-based tone mapping (RTM) [28], calibrated image appearance reproduction (CIAR) [29], the hybrid L1–L0 layer decomposition model (L1–L0) [30], the linear scale model (LS) [31], and iCAM06. Several resulting images are shown in Figs. 11, 12 and 13. As seen in the test images, the images generated using the proposed method have better color appearance in the bright areas, such as the lighting areas in Fig. 11, the outdoor scene in Fig. 12, and the color checker in Fig. 13. Moreover, the overall contrast and detail rendering are enhanced in regions of the input image with relatively large tone changes in dim areas. To compare the color reproduction performance in the bright region, the results of each method are compared with the original HDR image without tone processing. The proposed method shows fairly good performance in terms of color reproduction and detail representation. Correspondingly, the resulting images are more natural because of the lightness preservation.
To conduct an objective assessment of image quality, the resulting images are measured using three indices, namely NR-CDIQA, TMQI_S, and chroma difference, for ten test images taken from "Mark Fairchild's HDR Photographic Survey" [32]. First, NR-CDIQA is a no-reference contrast distortion metric [33]. This metric is based on natural scene statistical features of the image intensity, such as the mean, standard deviation, skewness, and kurtosis, so the NR-CDIQA score represents contrast distortion in terms of how natural the image is. Note that the CID2013 database is used for training [34]. Second, in contrast to the contrast metric, TMQI_S is a reference structural fidelity metric [35]. The TMQI_S score shows the structural distortion between the tone-mapped image and the reference HDR image. In addition, the chroma difference in the CIELAB color space is used for a numerical comparison of color appearance among the resulting images.
The resulting scores of the three indices are shown in Fig. 14. Although some of the scores of the proposed method are the lowest, on average the performance of the proposed method is better than that of the others. The fidelity measurement in TMQI_S uses local standard deviations for the structural information; when the structural information of the tone-mapped image is close to that of the input HDR image, the fidelity score is higher. In particular, the PTR method did not reproduce the low-tone regions well in Fig. 12 (507 Motor Show) and Fig. 13 (Lab Booth); that is, the worse the global tone compression performance, the better the score. Therefore, an image quality comparison using a reference fidelity measurement should be used to compare the detail-rendering performance only among results with similar global tone compression performance. For the 24 color points of the color checker in the Lab Booth image in Fig. 13, the relative chroma difference scores are calculated from the Euclidean distance between the resulting image and the untoned reference image on the a–b plane. This score shows how well each method preserves chroma information after tone processing. On average, the proposed method shows the most accurate color reproduction with respect to the reference image.
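The chroma-difference score described above can be sketched as a mean Euclidean distance on the a*–b* plane; the array layout (last axis holding L*, a*, b*) is an assumption for illustration.

```python
import numpy as np

def chroma_difference(lab_result, lab_reference):
    """Mean Euclidean distance on the CIELAB a*-b* plane between corresponding
    patches (last axis holds L*, a*, b*), e.g. the 24 color-checker points."""
    diff = lab_result[..., 1:3] - lab_reference[..., 1:3]
    return float(np.mean(np.sqrt(np.sum(diff ** 2, axis=-1))))
```

Because only the a* and b* channels enter the distance, the score isolates chroma preservation from the lightness changes that tone mapping deliberately introduces.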
A notable point in the resulting scores is that the performance of the proposed method is significantly better than that of iCAM06, even though it is based on iCAM06. This indicates that the proposed methods, lightness-preserved tone mapping and halo-artifact reduction, can improve the performance of a local tone mapping with image decomposition.
6 Conclusions
Even though a large number of studies on HDR imaging exist, local tone-mapping methods have suffered from the free-parameter and halo-artifact problems. In addition, the significant change in viewing conditions when one watches the image is usually not considered, so a tone-mapped image fails to transfer the sensation of the real scene. In this paper, we proposed a tone reproduction method for lightness-preserved tone mapping and halo-artifact reduction. First, the lightness-preserved tone mapping reproduces the visual perception of the real scene in the tone-mapped image on an indoor display. In particular, the matching is implemented by an adaptive sensitivity parameter that decreases the number of free parameters in a local tone mapping. Second, the halo-artifact reduction using anti-halo image composition fuses the two images with and without image decomposition according to their difference, which indicates the halo region. The simulation with the proposed method embedded in iCAM06, compared with the other tone mappings, shows that the proposed methods improve the performance of a local tone mapping in terms of image quality.
References
Artusi, A., Mantiuk, R.K., Richter, T., Korshunov, P., Hanhart, P., Ebrahimi, T., Agostinelli, M.: JPEG XT: a compression standard for HDR and WCG images. IEEE Sig. Proc. Mag. 33, 118–124 (2016)
Debevec, P.E., Malik, J.: Recovering high dynamic range radiance maps from photographs. In: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pp. 369–378 (1997)
Robertson, M.A., Borman, S.: Dynamic range improvement through multiple exposures. Proc. IEEE Conf. Image Process. 3, 159–163 (1999)
Kwon, H.J., Lee, S.H., Lee, G.Y., Sohng, K.I.: Radiance map construction based on spatial and intensity correlations between LE and SE images for HDR imaging. J. Vis. Commun. Image Represent. 38, 695–703 (2016)
Meylan, L.: Tone mapping for high dynamic range images. Ph.D. dissertation, Ecole Polytechnique Fédérale de Lausanne (EPFL) (2006)
Reinhard, E., Stark, M., Shirley, P., Ferwerda, J.: Photographic tone reproduction for digital images. ACM Trans. Gr. 21, 267–276 (2002)
Durand, F., Dorsey, J.: Fast bilateral filtering for the display of high-dynamic-range images. ACM Trans. Gr. 21, 257–266 (2002)
Ledda, P., Santos, L.P., Chalmers, A.: A local model of eye adaptation for high dynamic range images. Proc. ACM AFRIGRAPH’ 04, 151–160 (2004)
Kuang, J., Johnson, G.M., Fairchild, M.D.: iCAM06: A refined image appearance model for HDR image rendering. J. Vis. Commun. Image Represent. 18, 406–414 (2007)
Lee, G.Y., Lee, S.H., Kwon, H.J., Sohng, K.I.: Visual sensitivity correlated tone reproduction for low dynamic range images in the compression field. Opt. Eng. 53(11), 113111(1)–113111(13) (2014)
Land, E.H.: The retinex theory of color vision. Sci. Am. 237(6), 108–129 (1977)
Lee, G.Y., Lee, S.H., Kwon, H.J., Sohng, K.I.: Visual acuity-adaptive detail enhancement and shadow noise reduction for iCAM06-based HDR imaging. Opt. Rev. 22(2), 232–245 (2015)
Kwon, H.J., Lee, S.H., Lee, G.Y., Sohng, K.I.: Luminance adaptation transform based on brightness functions for LDR image reproduction. Digit. Signal Process. 30, 74–85 (2014)
Ledda, P., Santos, L.P., Chalmers, A.: A local model of eye adaptation for high dynamic range images. In: Proceedings of the 3rd International Conference on Computer Graphics, Virtual Reality, Visualization and Interaction in Africa, pp. 151–160 (2004)
Reinhard, E., Devlin, K.: Dynamic range reduction inspired by photoreceptor physiology. IEEE Trans. Vis. Comput. Gr. 11, 13–24 (2005)
Ferradans, S., Bertalmio, M., Provenzi, E., Caselles, V.: An analysis of visual adaptation and contrast perception for tone mapping. IEEE Trans. Pat. Anal. Mach. Intell. 33, 2002–2012 (2011)
Tumblin, J., Turk, G.: LCIS: a boundary hierarchy for detail-preserving contrast reduction. In: Proceeding of the 26th Annual Conference on Computer Graphics and Interactive Techniques, pp. 83–90 (1999)
Farbman, Z., Fattal, R., Lischinski, D., Szeliski, R.: Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Trans. Gr. 27, 67 (2008)
Moroney, N., Fairchild, M.D., Hunt, R.W.G., Li, C.J., Luo, M.R., Newman, T.: The CIECAM02 color appearance model. In: IS&T/SID 10th Color Imaging Conference, pp. 23–27 (2002)
Larson, G.W., Rushmeier, H., Piatko, C.: A visibility matching tone reproduction operator for high dynamic range scenes. IEEE Trans. Vis. Comput. Gr. 3, 291–306 (1997)
Duan, J., Bressan, M., Dance, C., Qiu, G.: Tone mapping high dynamic range images by novel histogram adjustment. Pat. Recognit. 43, 1847–1862 (2010)
Valeton, J.M., van Norren, D.: Light adaptation of primate cones: an analysis based on extracellular data. Vis. Res. 23, 1539–1547 (1983)
Pattanaik, S.N., Tumblin, J., Yee, H., Greenberg, D.P.: Time-dependent visual adaptation for fast realistic image display. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pp. 47–54 (2000)
Tamburrino, D., Alleysson, D., Meylan, L., Süsstrunk, S.: Digital camera workflow for high dynamic range images using a model of retinal processing. In: Proceedings of SPIE, p. 6817 (2008)
Xie, Z., Stockham, T.G.: Towards the unification of three visual laws and two visual models in brightness perception. IEEE Trans. Syst. Man Cybern. 19, 379–387 (1989)
Fairchild, M.D.: Color appearance models, 3rd edn. Wiley-IS&T, Chichester (2013)
Netravali, A.N., Haskell, B.G.: Digital Pictures: Representation and Compression. Plenum, New York (1988)
Meylan, L., Süsstrunk, S.: High dynamic range image rendering with a retinex-based adaptive filter. IEEE Trans. Image Process. 15, 2820–2830 (2006)
Reinhard, E., Pouli, T., Kunkel, T., Long, B., Ballestad, A., Damberg, G.: Calibrated image appearance reproduction. ACM Trans. Gr. 31, 201 (2012)
Liang, Z., Xu, J., Zhang, D., Cao, Z., Zhang, L.: A hybrid L1–L0 layer decomposition model for tone mapping. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4758–4766 (2018)
Reinhard, E., Heidrich, W., Pattanaik, S., Debevec, P., Ward, G.: High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann, Burlington (2010)
Fairchild, M.D.: The HDR Photographic Survey. https://rit-mcsl.org/fairchild//HDR.html. Accessed 2018
Fang, Y., Ma, K., Wang, Z., Lin, W., Fang, Z., Zhai, G.: No-reference quality assessment of contrast-distorted images based on natural scene statistics. IEEE Signal Process. Lett. 22, 838–842 (2015)
Gu, K., Zhai, G., Yang, X., Zhang, W., Liu, M.: Subjective and objective quality assessment for images with contrast change. In: IEEE International Conference on Image Processing, pp. 383–387 (2013)
Yeganeh, H., Wang, Z.: Objective quality assessment of tone mapped images. IEEE Trans. Image Process. 22, 657–667 (2013)
Acknowledgments
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) and the BK21 Plus project funded by the Ministry of Education, Korea (NRF-2019R1D1A3A03020225, 21A20131600011).
Ethics declarations
Conflict of interest
The authors declare that there is no conflict of interests regarding the publication of this paper.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Lee, GY., Kwon, HJ. & Lee, SH. HDR image reproduction based on visual achromatic response. Opt Rev 27, 361–374 (2020). https://doi.org/10.1007/s10043-020-00604-w