Color conversion matrices and chromatic adaptation transforms (CATs) are of central importance when converting a scene captured by a digital camera in the camera raw space into a color image suitable for display using an output-referred color space. In this article, the nature of a typical camera raw space is investigated, including its gamut and reference white. Various color conversion strategies that are used in practice are subsequently derived and examined. The strategy used by internal image-processing engines of traditional digital cameras is shown to be based upon color rotation matrices accompanied by raw channel multipliers, in contrast to the approach used by smartphones and commercial raw converters, which is typically based upon characterization matrices accompanied by conventional CATs. Several advantages of the approach used by traditional digital cameras are discussed. The connections with the color conversion methods of the DCRaw open-source raw converter and the Adobe digital negative converter are also examined, along with the nature of the Adobe color and forward matrices.
1. Introduction

Consider converting a scene captured by a digital camera in the camera raw space into a digital image suitable for display using an output-referred color space. At the very least, there are two issues of fundamental importance that must be addressed when attempting to correctly reproduce the appearance of color. The first is that the response functions of digital cameras differ from those of the human visual system (HVS). A widely used approach to this issue is to consider color spaces as vector spaces and to account for the differences in response by introducing a color conversion matrix. A type of color conversion matrix that is commonly encountered is the characterization matrix T that defines the linear relationship between the camera raw space and the CIE XYZ reference color space:

  [X, Y, Z]^T ≈ T [ℛ, 𝒢, ℬ]^T.  (1)

In general, camera raw spaces are not colorimetric, so the above conversion is approximate. The relationship can be optimized for a given illuminant by minimizing the color error. Significantly, this means that the optimum matrix depends upon the nature of the scene illuminant,1,2 including its white point (WP). The characterization methodology for determining the optimum matrix is described in Sec. 2.4, along with an illustration of how the characterization matrix should be normalized in practice.

The second issue that must be addressed is the perception of the scene illumination WP. Although the various adaptation mechanisms employed by the HVS are complex and not fully understood, it is thought that the HVS naturally uses a chromatic adaptation mechanism to adjust its perception of the scene illumination WP to achieve color constancy under varying lighting conditions.3,4 Since digital camera sensors do not naturally adapt in this manner, incorrect white balance (WB) will arise when the WP of the scene illumination differs from the reference white of the output-referred color space used to encode the output image produced by the camera. As demonstrated in Sec.
3, digital cameras must attempt to emulate this chromatic adaptation mechanism by utilizing an appropriate chromatic adaptation transform (CAT). As discussed in Sec. 4, modern smartphones and commercial raw converters typically calculate the optimum characterization matrix by interpolating between two preset characterization matrices according to an estimate of the scene illumination WP, and the CAT is implemented after applying the characterization matrix. In traditional digital cameras, the color conversion is typically reformulated in terms of raw channel multipliers and color rotation matrices. This approach offers several advantages, as discussed in Sec. 5. A similar but computationally simpler approach is used by the DCRaw open-source raw converter, as discussed in Sec. 6. The open-source Adobe® digital negative (DNG) converter offers two color conversion methods, and the nature of the Adobe color matrices and forward matrices is discussed in Sec. 7. Finally, conclusions are drawn in Sec. 8.

2. Camera Raw Space

2.1. Gamut

The camera raw space for a given camera model arises from its set of spectral responsivity functions or camera response functions,

  Rᵢ(λ) = QEᵢ(λ) e λ / (h c),  (2)

where e is the elemental charge, λ is the wavelength, h is Planck's constant, and c is the speed of light. The external quantum efficiency QEᵢ(λ) for mosaic i is defined by

  QEᵢ(λ) = FF · T_CFA,i(λ) η(λ) T_int(λ),

where T_CFA,i(λ) is the color filter array (CFA) transmittance function for mosaic i, η(λ) is the charge collection efficiency or internal quantum efficiency of a photoelement, and T_int(λ) is the interface transmittance function.5 The fill factor FF is defined by FF = A_det / A_p, where A_det is the photosensitive detection area at a photosite and A_p is the photosite area.
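Eq. (2) can be illustrated numerically. The sketch below uses an arbitrary external quantum efficiency value and wavelength, chosen purely for demonstration rather than taken from any measured sensor:

```python
# Sketch of Eq. (2): spectral responsivity R(lambda) = QE(lambda) * e * lambda / (h c).
# The QE value and wavelength below are arbitrary illustrative assumptions.
E_CHARGE = 1.602176634e-19  # elemental charge (C)
H_PLANCK = 6.62607015e-34   # Planck's constant (J s)
C_LIGHT = 2.99792458e8      # speed of light (m/s)

def responsivity(qe, wavelength_m):
    """Spectral responsivity in A/W for a given external quantum efficiency."""
    return qe * E_CHARGE * wavelength_m / (H_PLANCK * C_LIGHT)

# A hypothetical photoelement with 50% external QE at 550 nm:
r = responsivity(0.5, 550e-9)
print(round(r, 4))  # approximately 0.2218 A/W
```

Note that for a fixed QE, responsivity grows linearly with wavelength, since longer-wavelength photons carry less energy per unit optical power.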
The spectral passband of the camera should ideally correspond to the visible spectrum, so an infra-red blocking filter is required. Analogous to the eye-cone response functions of the HVS, which can be interpreted as specifying the amounts of the eye cone primaries that the eye uses to sense color at a given wavelength λ, the camera response functions can be interpreted as specifying amounts of the camera raw space primaries at each λ. For example, the measured Nikon D700 camera response functions are shown in Fig. 1. However, a camera raw space is colorimetric only if the Luther-Ives condition is satisfied,7–9 meaning that the camera response functions must be an exact linear transformation of the eye-cone response functions, which are indirectly represented as a linear transformation from the CIE color-matching functions for the standard observer. Although the eye-cone response functions are suited to capturing detail using the simple lens of the human eye, digital cameras use compound lenses that have been corrected for chromatic aberration. Consequently, camera response functions are designed with other considerations in mind.10,11 For example, better signal-to-noise performance is achieved by reducing the overlap of the response functions, which corresponds to a characterization matrix with smaller off-diagonal elements.10–12 Indeed, minor color errors can be traded for better signal-to-noise performance.10–13 On the other hand, increased correlation in the wavelength dimension can improve the performance of the color demosaicing procedure.14 Due to such trade-offs along with filter manufacturing constraints, camera response functions are not exact linear transformations of the eye-cone response functions in practice. Consequently, camera raw spaces are not colorimetric, so cameras exhibit metameric error. Metamers are different spectral power distributions (SPDs) that are perceived by the HVS to be the same color when viewed under exactly the same conditions.
Cameras that exhibit metameric error produce different color responses to these metamers. Camera metameric error can be determined experimentally and quantified by the digital still camera sensitivity metamerism index (DSC/SMI).8,15 Figure 2 shows the spectral locus of the HVS on the chromaticity diagram, which is a 2D projection of the CIE XYZ color space that describes the relative proportions of the tristimulus values. Note that the spectral locus itself is horseshoe-shaped rather than triangular due to the fact that overlap of the eye-cone response functions prevents the eye cones from being independently stimulated, so the chromaticities corresponding to chromaticity coordinates positioned outside the spectral locus are invisible or imaginary as they are more saturated than pure spectrum colors. The gamut of the camera raw space for the Nikon D700 camera is also plotted on Fig. 2 and compared with several standard output-referred color spaces, namely sRGB,16 Adobe® RGB,17 and ProPhoto RGB.18 Due to the positions of the camera raw space primaries on the chromaticity diagram, certain regions of the camera raw space gamut do not reach the spectral locus of the HVS as these regions lie outside the triangular shape accessible to additive linear combinations of the three primaries. Furthermore, a notable consequence of camera metameric error is that the camera raw space gamut is warped away from the triangular shape accessible to the additive linear combinations of the three primaries. Certain regions are even pushed outside of the triangle accessible to the CIE XYZ color space.19 See Ref. 19 for additional examples. To determine the gamut of a camera raw space, the first step is to measure the camera response functions using a monochromator at a discrete set of wavelengths according to method A of the ISO 17321-1 standard.15 For each wavelength, the camera response functions yield raw relative tristimulus values in the camera raw space. 
The second step is to convert these values into relative CIE XYZ values by applying a characterization matrix that satisfies Eq. (1). Subsequently, the chromaticity coordinates corresponding to the spectral locus of the camera raw space can be calculated using the usual formulas, x = X/(X + Y + Z) and y = Y/(X + Y + Z). Since a given characterization matrix is optimized for use with the characterization illuminant, i.e., the scene illumination used to perform the characterization, another consequence of camera metameric error is that the camera raw space gamut may vary according to the characterization matrix applied. The gamut of the Nikon D700 camera raw space shown in Fig. 2 was obtained using a characterization matrix optimized for CIE illuminant D65. Figure 3 shows how the gamut changes when a characterization matrix optimized for CIE illuminant A is applied instead.

2.2. Raw Values

Color values in a camera raw space are expressed in terms of digital raw values for each raw color channel, which are analogous to tristimulus values in CIE color spaces. For a CFA that uses three types of color filters such as a Bayer CFA,20 the raw values expressed using output-referred units, i.e., data/digital numbers (DN) or analog-to-digital units, belong to the following set of raw channels denoted here using calligraphic symbols:

  [ℛ, 𝒢₁, 𝒢₂, ℬ]^T.

Although vector notation has been used here to represent a Bayer block, a true raw pixel vector is obtained only after the color demosaic has been performed, in which case there will be four raw values associated with each photosite. The Bayer CFA uses twice as many green filters as red and blue, which means that two values 𝒢₁ and 𝒢₂ associated with different positions in each Bayer block will be obtained in general. This is beneficial in terms of overall signal-to-noise ratio since photosites belonging to the green mosaics are more efficient in terms of photoconversion.
Furthermore, the Bayer pattern is optimal in terms of reducing aliasing artifacts when three types of filters are arranged on a square grid.14 Although it is thought that a greater number of green filters provides enhanced resolution for the luminance signal since the standard 1924 CIE luminosity function for photopic vision peaks at 555 nm,20 it has been argued that a Bayer CFA with two times more blue pixels than red and green would in fact be optimal for this purpose.14 When demosaicing raw data corresponding to a standard Bayer CFA, the final output will show false mazes or meshes if the ratio between 𝒢₁ and 𝒢₂ varies over the image.21 Software raw converters may average 𝒢₁ and 𝒢₂ together to eliminate such artifacts.21

Since there are fundamentally only three camera response functions, one each for the red, green, and blue filter types, color characterization for a Bayer CFA regards 𝒢₁ and 𝒢₂ as a single channel, 𝒢. The raw values can be expressed as follows:

  𝒟ᵢ = k ∫ Rᵢ(λ) Ē(λ) dλ,  (5)

where the camera response functions Rᵢ(λ) are defined by Eq. (2), the integration is over the spectral passband of the camera, Ē(λ) is the average spectral irradiance at the photosite, and k is a constant. Expressions for Ē(λ) and k are given in the Appendix.

The actual raw values obtained in practice are quantized values modeled by taking the integer part of Eq. (5). When transforming from the camera raw space, it is useful to normalize the raw values to the range [0,1] by dividing Eq. (5) by the raw clipping point, which is the highest available DN.

2.3. Reference White

Using the above normalization, the reference white of a camera raw space is defined by the unit vector

  [ℛ, 𝒢, ℬ]^T = [1, 1, 1]^T.

Expressed in terms of CIE XYZ tristimulus values or chromaticity coordinates, the reference white of a camera raw space is the WP of the scene illumination that yields maximum equal raw values for a neutral subject.
(The WP of a SPD is defined by the CIE XYZ tristimulus values that correspond to a 100% neutral diffuse reflector illuminated by that SPD.) It follows that the reference white of a camera raw space can in principle be determined experimentally by finding the illuminant that yields equal raw values for a neutral subject. Note that if the DCRaw open-source raw converter is used to decode the raw file, it is essential to disable WB. In terms of CIE colorimetry, the camera raw space reference white is formally defined by

  [X_W, Y_W, Z_W]^T = T [1, 1, 1]^T,  (7)

where the subscript W denotes that the WP is that of the scene illumination. The characterization matrix T converts from the camera raw space to CIE XYZ and should be optimized for the required scene illumination. The optimum matrix is unknown at this stage but can in principle be determined using the optimization procedure to be outlined in Sec. 2.4.

Although CIE color spaces use normalized units such that their reference whites correspond to WPs of CIE standard illuminants, camera raw spaces are not naturally normalized in such a manner. Consequently, the reference white of a camera raw space is not necessarily a neutral color as it is typically located far away from the Planckian locus and so does not necessarily have an associated correlated color temperature (CCT). Note that a WP can be associated with a CCT provided its chromaticity coordinates are sufficiently close to the Planckian locus, but there are many such coordinates that correspond to the same CCT. To distinguish between them, a value, informally referred to as a color tint, can be assigned.22 This is determined by converting the WP into (u, v) chromaticity coordinates on the CIE 1960 UCS chromaticity diagram,23,24 where isotherms are normal to the Planckian locus.
In this representation, CCT is a valid concept only for coordinates positioned within a prescribed distance from the Planckian locus along an isotherm.25 To see that the reference white of a camera raw space is far from the Planckian locus, consider the Nikon D700 raw values obtained for a neutral diffuse reflector illuminated by CIE illuminants A and D65, calculated using example characterization matrices optimized for CIE illuminants A and D65, respectively. As shown in Fig. 4, the WPs of these standard illuminants are very close to the Planckian locus; illuminant A has a CCT of 2856 K, and illuminant D65 has a CCT of 6504 K. Evidently, the above Nikon D700 raw values are very different from the camera raw space unit vector, and it would be necessary to apply large multipliers to the red and blue raw pixel values in both cases. These multipliers are known as raw channel multipliers as they are typically applied to the red and blue raw channels before the color demosaic as part of the color conversion strategy used by the internal image-processing engines of traditional digital cameras.

An estimate of the Nikon D700 reference white can be obtained by approximating Eq. (7) using a readily available characterization matrix in place of the optimum matrix. This yields chromaticity coordinates that have an associated CCT, as the distance from the Planckian locus is just within the allowed limit, but Fig. 4 shows that the color tint is a strong magenta. This is true of typical camera raw spaces in general.21 A similar estimate for the Olympus E-M1 camera yields chromaticity coordinates that do not have an associated CCT, and the color tint is a very strong magenta. Although the fact that camera raw space reference whites are not neutral in terms of CIE colorimetry has no bearing on the final reproduced image, it will be shown in Sec. 5 that the camera raw space reference white is utilized as a useful intermediary step in the color conversion strategy used by traditional digital cameras.
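The estimation procedure above can be reproduced in outline: map the raw-space unit vector through a characterization matrix as in Eq. (7), then project to chromaticity coordinates. The matrix below is an arbitrary placeholder for illustration, not an actual Nikon D700 or Olympus E-M1 matrix:

```python
import numpy as np

# Sketch: estimate the reference white of a camera raw space by mapping the
# raw unit vector (1, 1, 1) into CIE XYZ via a characterization matrix T.
# T below is an arbitrary illustrative matrix, NOT real camera data.
T = np.array([
    [0.90, 0.20, 0.15],
    [0.35, 0.90, 0.10],
    [0.00, 0.10, 1.20],
])

def xy_chromaticity(xyz):
    """Project CIE XYZ tristimulus values onto the xy chromaticity diagram."""
    X, Y, Z = xyz
    s = X + Y + Z
    return X / s, Y / s

ref_white_xyz = T @ np.ones(3)   # raw-space reference white expressed in CIE XYZ
x, y = xy_chromaticity(ref_white_xyz)
print(round(x, 4), round(y, 4))
```

For a real camera matrix, the resulting (x, y) coordinates would typically fall far from the Planckian locus, which is the magenta-tint behavior described in the text.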
2.4. Camera Color Characterization

Recall the linear transformation from the camera raw space to CIE XYZ defined by Eq. (1), where T is a 3 × 3 characterization matrix. The color conversion is approximate since the Luther-Ives condition is not satisfied exactly. As mentioned in the introduction, T can be optimized for the characterization illuminant, i.e., the scene illumination used to perform the characterization.1,2 The optimum matrix is dependent upon the SPD itself, but it largely depends upon the characterization illumination WP provided the illuminant is representative of a real world SPD.

Characterization matrices optimized for known illuminants can be determined by color-error minimization procedures based upon photographs taken of a standard color chart.2 Although various minimization techniques have been developed, including WP-preserving techniques,26 the procedure discussed below is based on the standardized method B of ISO 17321-1.15 Note that ISO 17321-1 uses processed images output by the camera rather than raw data and consequently requires inversion of the camera opto-electronic conversion function (OECF).27 The OECF defines the nonlinear relationship between irradiance at the sensor plane and the digital output levels of a viewable output image such as a JPEG file produced by the camera. To bypass the need to experimentally determine the OECF, a variation of method B from ISO 17321-1 is described below. This method uses the DCRaw open-source raw converter to decode the raw file so that the raw data can be used directly.28,29
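The color-error minimization at the heart of such a procedure can be sketched as a linear least-squares fit between measured raw values and reference XYZ values for a set of chart patches. The patch data below are synthetic (generated from a known ground-truth matrix plus noise), and real characterization methods typically minimize a perceptual error such as CIE LAB ΔE rather than a plain linear residual:

```python
import numpy as np

# Sketch: fit a 3x3 characterization matrix T minimizing ||XYZ - T RAW||^2
# over a set of color-chart patches. The patch data are synthetic: raw values
# are generated from a known ground-truth matrix plus noise, so the fit
# should approximately recover that matrix.
rng = np.random.default_rng(0)
T_true = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.6, 0.1],
    [0.0, 0.1, 0.9],
])
raw_patches = rng.uniform(0.05, 1.0, size=(24, 3))        # 24 patches, raw RGB
xyz_patches = raw_patches @ T_true.T                      # reference XYZ values
xyz_patches += rng.normal(0.0, 1e-3, xyz_patches.shape)   # measurement noise

# Least squares: solve RAW @ T^T ~= XYZ, one column per XYZ component.
T_fit, *_ = np.linalg.lstsq(raw_patches, xyz_patches, rcond=None)
T_fit = T_fit.T
print(np.round(T_fit, 2))
```

With only mild noise, the fitted matrix closely recovers the ground truth, illustrating why a 24-patch chart suffices to constrain the nine matrix elements.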
Provided WB was disabled in step 3, the characterization matrix can be used with arbitrary scene illumination. However, optimum results will be obtained for scene illumination with a WP that closely matches that of the characterization illuminant. Figure 5 shows how the matrix elements of an optimized characterization matrix vary as a function of characterization illuminant CCT for the Olympus E-M1 camera. For the same camera, Fig. 6(a) shows a photo of a color chart in the camera raw space taken under D65 illumination. When the camera raw space values are interpreted as RGB values in the sRGB color space for display purposes without any color characterization matrix applied, a strong green color tint is revealed, which arises from the greater transmission of the green Bayer filter. Figure 6(b) shows the same photo converted into the sRGB color space by applying an optimized characterization matrix, followed by a matrix that converts the colors from the CIE XYZ color space to sRGB. Evidently, the colors are now displayed correctly.

2.5. Characterization Matrix Normalization

Normalization of a characterization matrix refers to scaling of the entire matrix so that all matrix elements are scaled identically. A typical normalization applied in practice is to ensure that the matrix maps between the characterization illuminant WP expressed using the CIE XYZ color space and the camera raw space such that the raw data just saturates when a 100% neutral diffuse reflector is photographed under the characterization illuminant. The green raw channel is typically the first to saturate. For example, if the characterization illuminant is D65, then T can be normalized such that its inverse provides the following mapping:

  T⁻¹ [X_D65, Y_D65, Z_D65]^T = [ℛ_W, 𝒢_W, ℬ_W]^T,  (15)

where [X_D65, Y_D65, Z_D65]^T is the WP of illuminant D65. Since the green raw channel is typically the first to saturate under most types of illumination, it will typically be the case that 𝒢_W = 1, whereas ℛ_W < 1 and ℬ_W < 1. For example, the Olympus E-M1 characterization matrices used by Fig.
5 for the 4200 and 6800 K calibration illuminants are normalized such that the WP of the characterization illuminant maps to raw values at which the green raw channel just reaches saturation.

3. White Balance

A remarkable property of the HVS is its ability to naturally adjust to the ambient lighting conditions. For example, if a 100% neutral diffuse reflector is placed in a photographic scene illuminated by daylight, the reflector appears to be neutral white. Later in the day when there is a change in the chromaticity or CCT of the scene illumination, the color of the reflector would be expected to change accordingly. However, the reflector will continue to appear neutral white. In other words, the perceived color of objects remains relatively constant under varying types of scene illumination, which is known as color constancy.3,4 The chromatic adaptation mechanism by which the HVS achieves color constancy is complex and not fully understood, but a simplified explanation is that the HVS aims to discount the chromaticity of the illuminant.30 Back in 1902, von Kries postulated that this is achieved by an independent scaling of each eye cone response function.3,4 The color stimulus that an observer adapted to the ambient conditions considers to be neutral white (perfectly achromatic with 100% relative luminance) is defined as the adapted white (AW).31 Since camera response functions do not naturally emulate the HVS by discounting the chromaticity of the scene illumination, an output image will appear too warm or too cold if it is displayed using illumination with a WP that does not match the adapted white for the photographic scene at the time the photograph was taken. This is known as incorrect WB. The issue can be solved by implementing the following computational strategy.
The CAT needs to be applied as part of the overall color conversion from the camera raw space to the chosen output-referred color space. Different approaches for combining these components exist. The typical approach used in color science is to convert from the camera raw space to CIE XYZ, apply the CAT, and then convert to the chosen output-referred color space. In the case of sRGB,

  [R_L, G_L, B_L]^T = M_sRGB CAT_{AW→D65} T [ℛ, 𝒢, ℬ]^T,

where T is a characterization matrix that converts from the camera raw space to CIE XYZ and is optimized for the scene AW, the matrix CAT_{AW→D65} applied in the CIE XYZ color space is a CAT that adapts the AW to the D65 reference white of the sRGB color space, and finally M_sRGB is the matrix that converts from CIE XYZ to the linear form of the sRGB color space:

  M_sRGB = [  3.2406  -1.5372  -0.4986
             -0.9689   1.8758   0.0415
              0.0557  -0.2040   1.0570 ].

In particular, the AW in the camera raw space is mapped to the reference white of the output-referred color space defined by the unit vector in the output-referred color space:

  M_sRGB CAT_{AW→D65} T [ℛ_AW, 𝒢_AW, ℬ_AW]^T = [1, 1, 1]^T.  (20)

When the encoded output image is viewed on a calibrated display monitor, a scene object that the HVS regarded as being white at the time the photograph was taken will now be displayed using the D65 reference white. Ideally, the ambient viewing conditions should match those defined as appropriate for viewing the sRGB color space.

If the scene illumination WP estimate is far from the true scene illumination WP, then incorrect WB will be evident to the HVS. If the scene illumination CCT estimate is higher than the true CCT, then the photo will appear too warm. Conversely, if the scene illumination CCT estimate is lower than the true CCT, then the photo will appear too cold. Figure 7(a) shows a photo of a color chart taken under 2700 K CCT tungsten illumination using the Olympus E-M1 camera. A characterization matrix was applied to convert the colors into CIE XYZ, followed by M_sRGB to convert the colors to sRGB. Evidently, the true color of the scene illumination is revealed since no chromatic adaptation has been performed by the camera.
In other words, the photo appears too warm in relation to the D65 reference white of the sRGB color space. Figure 7(b) shows the same photo after white balancing by including a CAT that chromatically adapts the scene illumination WP to the sRGB color space D65 reference white, which has a 6504 K CCT.

3.1. Chromatic Adaptation Transforms

A CAT is a computational technique for adjusting the WP of a given SPD. It achieves this goal by attempting to mimic the chromatic adaptation mechanism of the HVS. In the context of digital cameras, the most important CATs are the Bradford CAT and raw channel scaling. In 1902, von Kries postulated that the chromatic adaptation mechanism be modeled as an independent scaling of each eye cone response function,3,4 which is equivalent to scaling the L, M, and S tristimulus values in the LMS color space. To illustrate the von Kries CAT, consider adapting the scene illumination WP estimate (the AW) to the WP of D65 illumination. In this case, the von Kries CAT that must be applied to all pixel vectors can be written as

  CAT_{AW→D65} = M⁻¹ D M,

where the matrix M transforms each pixel vector from the CIE XYZ color space into the LMS color space and D is a diagonal matrix of scaling factors. Modern forms of M include matrices based on the cone fundamentals defined by the CIE in 200637 and the Hunt–Pointer–Estevez transformation matrix38 defined by

  M_HPE = [  0.4002   0.7076  -0.0808
            -0.2263   1.1653   0.0457
             0        0        0.9182 ].

After applying M, the L, M, and S values are independently scaled according to the von Kries hypothesis. In the present example, the scaling factors arise from the ratio between the AW and D65 WPs:

  D = diag(L_D65/L_AW, M_D65/M_AW, S_D65/S_AW),

where the LMS coordinates of the two WPs are obtained by applying M to the corresponding CIE XYZ WP vectors. Finally, the inverse of the transformation matrix is applied to convert each pixel vector back into the CIE XYZ color space.

The Bradford CAT39 can be regarded as an improved version of the von Kries CAT.
A simplified linearized version is recommended by the ICC for use in digital imaging.40 The linear Bradford CAT can be implemented in an analogous fashion as the von Kries CAT, the difference being that the L, M, and S tristimulus values are replaced by responses corresponding to a “sharpened” artificial eye cone space. The transformation matrix is defined by

  M_BFD = [  0.8951   0.2664  -0.1614
            -0.7502   1.7135   0.0367
             0.0389  -0.0685   1.0296 ].

Analogous to the independent scaling of the eye cone response functions hypothesized by von Kries, a type of CAT can be applied in the camera raw space by directly scaling the raw channels. Consider a Bayer block for the AW obtained by photographing a 100% neutral diffuse reflector under the scene illumination. The following operation will adapt the AW to the reference white of the camera raw space:

  D_WB [ℛ_AW, 𝒢_AW, ℬ_AW]^T = [1, 1, 1]^T,

where

  D_WB = diag(1/ℛ_AW, 1/𝒢_AW, 1/ℬ_AW).

The diagonal scaling factors, known as raw channel multipliers, can be obtained directly from the raw data using the AW calculated by the camera. For example, for D65 scene illumination,

  D_WB = diag(1/ℛ_D65, 1/𝒢_D65, 1/ℬ_D65),

where ℛ_D65, 𝒢_D65, and ℬ_D65 are extracted from the Bayer block for a 100% neutral diffuse reflector photographed under D65 scene illumination.

In the context of digital cameras, the type of CAT defined by raw channel multipliers has been found to work better in practice, particularly for extreme cases.21,32 A reason for this is that the raw channel multipliers are applied in the camera raw space prior to application of a color conversion matrix. The camera raw space corresponds to a physical capture device, but CATs such as the linear Bradford CAT are applied in the CIE XYZ color space after applying a color conversion matrix that contains error.
In particular, color errors that have been minimized in a nonlinear color space such as CIE LAB will be unevenly amplified, so the color conversion will no longer be optimal.41

4. Smartphone Cameras

Smartphone manufacturers, along with commercial raw conversion software developers, typically implement the conventional type of computational color conversion strategy used in color science that was introduced in Sec. 3. Since the camera raw space is transformed into CIE XYZ as the first step, image processing techniques can be applied in the CIE XYZ color space (or following a transformation into some other intermediate color space) before the final transformation to an output-referred RGB color space. Consider the white-balanced transformation from the camera raw space to an output-referred RGB color space. Unlike in traditional digital cameras, the color demosaic is typically carried out first, so the vector notation used for the camera raw space below refers to raw pixel vectors rather than Bayer blocks. In the case of sRGB, the transformation that must be applied to every raw pixel vector is defined by

  [R_L, G_L, B_L]^T = M_sRGB CAT_{AW→D65} T_AW [ℛ, 𝒢, ℬ]^T,  (29)

where T_AW is a characterization matrix optimized for the AW. The conversion can be decomposed into three steps.
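The conversion of Eq. (29) can be sketched end to end. The characterization matrix and AW raw values below are illustrative assumptions; the linear Bradford and XYZ-to-linear-sRGB matrices are the standard published values:

```python
import numpy as np

# Sketch of Eq. (29): raw -> XYZ (characterization) -> CAT (linear Bradford)
# -> linear sRGB. T and aw_raw are illustrative assumptions; the Bradford
# and XYZ->sRGB matrices are standard published values.
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])
M_XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])
XYZ_D65 = np.array([0.9505, 1.0, 1.0891])  # D65 WP, Y normalized to 1

T = np.array([                       # illustrative characterization matrix
    [0.7, 0.2, 0.1],
    [0.3, 0.6, 0.1],
    [0.0, 0.1, 0.9],
])
aw_raw = np.array([0.5, 1.0, 0.7])   # illustrative AW raw values

def bradford_cat(xyz_src_white, xyz_dst_white):
    """Linear Bradford CAT adapting xyz_src_white to xyz_dst_white."""
    gain = (M_BRADFORD @ xyz_dst_white) / (M_BRADFORD @ xyz_src_white)
    return np.linalg.inv(M_BRADFORD) @ np.diag(gain) @ M_BRADFORD

aw_xyz = T @ aw_raw
cat = bradford_cat(aw_xyz, XYZ_D65)
convert = M_XYZ_TO_SRGB @ cat @ T    # full raw -> linear sRGB matrix

# WB check of Eq. (20): the AW raw vector must map to equal sRGB values.
srgb_white = convert @ aw_raw
print(np.round(srgb_white, 3))
```

By construction, the CAT maps the AW exactly onto the D65 WP, so the AW raw vector lands on the sRGB unit vector to within the rounding of the published matrices.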
Finally, the digital output levels of the output image are determined by applying the nonlinear gamma encoding curve of the output-referred color space and reducing the bit depth to 8. In modern digital imaging, encoding gamma curves are designed to minimize visible banding artifacts when the bit depth is reduced, and the non-linearity introduced is later reversed by the display gamma.28 To see that WB is correctly achieved, the above steps can be followed for the specific case of the raw pixel vector that corresponds to the AW. As required by Eq. (20), it is found that this maps to the reference white of the output-referred color space defined by the unit vector [1, 1, 1]^T in that color space.

Although the matrix transformation defined by Eq. (29) appears to be straightforward, the characterization matrix should in principle be optimized for the AW. However, it is impractical to determine a characterization matrix optimized for each possible scene illumination WP that could occur. For example, if CCTs are specified to the nearest Kelvin and color tint is neglected, then 12,000 matrices would be required to cover scene illumination WPs from 2000 to 14,000 K. The computationally simplest solution used on some mobile phone cameras is to approximate the optimized characterization matrix using a single fixed matrix optimized for a representative illuminant. For example, this could be D65 illumination, in which case the matrix optimized for the AW is approximated by the matrix optimized for D65. The drawback of this very simple approach is that the color conversion loses some accuracy when the scene illumination WP differs significantly from the WP of the representative illuminant. As described below, an advanced solution to the problem is to adopt the type of approach used by the Adobe DNG converter.32 The idea is to interpolate between two preset characterization matrices that are optimized for use with either a low-CCT or high-CCT illuminant.
For a given scene illumination, an interpolated matrix optimized for the CCT of the AW can be determined.

4.1. Interpolation Algorithm

If using the advanced approach mentioned above, the optimized characterization matrix required by Eq. (29) can be calculated by interpolating between two characterization matrices T₁ and T₂ based on the scene illumination CCT estimate denoted by CCT(AW), together with the CCTs of the two characterization illuminants denoted by CCT₁ and CCT₂, respectively, with CCT₁ < CCT₂. For example, illuminant 1 could be a low-CCT illuminant such as CIE illuminant A, whereas illuminant 2 could be a high-CCT illuminant such as D65. The first step is to appropriately normalize T₁ and T₂. Although characterization matrices are typically normalized according to their corresponding characterization illuminant WPs as demonstrated in Sec. 2.5, it is more convenient to normalize T₁ and T₂ according to a common WP when implementing an interpolation algorithm. Unfortunately, the AW cannot be expressed using the CIE XYZ color space at this stage since the interpolated matrix is yet to be determined. Instead, the common WP could be chosen to be the reference white of the output-referred color space, which is D65 for sRGB. In this case, T₁ and T₂ should both be scaled according to Eq. (15) using the D65 WP.

Unless the smartphone utilizes a color sensor that can directly estimate the scene illumination WP in terms of chromaticity coordinates, the AW is calculated by the camera in terms of raw values ℛ_AW, 𝒢_AW, and ℬ_AW, so the AW cannot be expressed using the CIE XYZ color space prior to the interpolation. However, the corresponding CCT(AW) requires knowledge of the chromaticity coordinates, which means converting to CIE XYZ via a matrix transformation that itself depends upon the unknown CCT(AW). This problem can be solved using a self-consistent iteration procedure.32
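The interpolation and the self-consistent iteration can be sketched as follows. As in the Adobe DNG approach, the interpolation weight is taken to be linear in inverse CCT. All matrices and CCT values here are illustrative placeholders, and the WP-to-CCT step uses McCamy's approximation as a crude stand-in for a full CCT computation:

```python
import numpy as np

# Sketch: interpolate between two preset characterization matrices using
# inverse CCT, with a self-consistent iteration to resolve the circular
# dependence of CCT(AW) on the matrix. Matrices and CCTs are placeholders.
CCT1, CCT2 = 2856.0, 6504.0           # low- and high-CCT characterization illuminants
T1 = np.array([[0.9, 0.1, 0.0],       # placeholder matrix optimized near CCT1
               [0.4, 0.6, 0.0],
               [0.0, 0.1, 0.5]])
T2 = np.array([[0.7, 0.2, 0.1],       # placeholder matrix optimized near CCT2
               [0.3, 0.6, 0.1],
               [0.0, 0.1, 0.9]])

def mccamy_cct(x, y):
    """McCamy's approximation: CCT from xy chromaticity coordinates."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

def interpolated_matrix(cct):
    cct = min(max(cct, CCT1), CCT2)   # clamp to the preset range
    g = (1.0 / cct - 1.0 / CCT2) / (1.0 / CCT1 - 1.0 / CCT2)
    return g * T1 + (1.0 - g) * T2

def self_consistent_T(aw_raw, iterations=20):
    """Iterate: assume a CCT, build T, recompute CCT(AW), repeat."""
    cct = 5000.0                      # initial guess
    for _ in range(iterations):
        T = interpolated_matrix(cct)
        X, Y, Z = T @ aw_raw
        s = X + Y + Z
        cct = mccamy_cct(X / s, Y / s)
    return interpolated_matrix(cct), cct

T_aw, cct_aw = self_consistent_T(np.array([0.6, 1.0, 0.8]))
print(round(cct_aw))
```

Clamping the interpolation to the preset CCT range is a design choice assumed here; it guarantees the iteration settles at a fixed point even when the estimated CCT falls outside the two calibration points.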
After the interpolation has been carried out, the interpolated matrix inherits the normalization of Eq. (34). However, the AW can now be expressed using the CIE XYZ color space, so the matrix can be renormalized to satisfy Eq. (31). If the smartphone utilizes a color sensor that can directly estimate the scene illumination WP in terms of chromaticity coordinates, then only steps 2 and 3 above are required.

5. Traditional Digital Cameras

Consider again the white-balanced transformation from the camera raw space to an output-referred RGB color space. In the case of sRGB, the transformation is defined by Eq. (29), where the CAT adapts the scene illumination WP estimate (the AW) to the sRGB color space D65 reference white. Traditional camera manufacturers typically re-express the above equation in the following manner:

  [R_L, G_L, B_L]^T = R D_WB [ℛ, 𝒢, ℬ]^T,  (37)

where D_WB is a diagonal WB matrix containing the raw channel multipliers appropriate for the AW and R is a color rotation matrix. This equation can be interpreted by decomposing the conversion into two steps.
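The two-step decomposition of Eq. (37) can be sketched with illustrative values: the raw channel multipliers first adapt the AW to the raw-space reference white, and a rotation matrix then maps into the output space. The rotation matrix below is an arbitrary placeholder rather than a real camera preset; its rows each sum to unity so that the raw-space reference white (1, 1, 1) maps to the output-space reference white (1, 1, 1):

```python
import numpy as np

# Sketch of Eq. (37): linear output = R * D_WB * raw, where D_WB holds the
# raw channel multipliers for the AW and R is a color rotation matrix.
# All numeric values are illustrative placeholders, not real camera data.
aw_raw = np.array([0.5, 1.0, 0.7])      # AW raw values for a neutral reflector
D_wb = np.diag(1.0 / aw_raw)            # raw channel multipliers

R = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.2,  1.5, -0.3],
    [ 0.1, -0.6,  1.5],
])
assert np.allclose(R.sum(axis=1), 1.0)  # WP-preserving: rows sum to unity

# Step 1: chromatically adapt the AW to the raw-space reference white.
step1 = D_wb @ aw_raw                   # -> (1, 1, 1)
# Step 2: rotate into the output-referred color space.
out_white = R @ step1                   # -> (1, 1, 1): WB is achieved
print(np.round(step1, 4), np.round(out_white, 4))
```

Because the multipliers alone carry all the AW dependence, the rotation matrix can be a fixed preset, which is the decoupling property discussed in Sec. 5.1.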
Combining Eqs. (39) and (42) shows that overall WB is achieved since the raw pixel vector corresponding to the AW is mapped to the reference white of the output-referred color space: Like the characterization matrix , the color rotation matrix should in principle be optimized for the scene illumination. Rather than use an interpolation-based approach, the reformulation in the form of Eq. (37) enables traditional camera manufacturers to adopt an alternative and computationally simple approach that can be straightforwardly implemented on fixed-point number architecture. 5.1.Multiplier and Matrix DecouplingAlthough Eq. (37) appears to be a straightforward reformulation of Eq. (29), it has several advantages that arise from the raw channel multipliers contained within the WB matrix having been extracted. As shown in Fig. 8, the variation of the elements of a color rotation matrix with respect to CCT is very small. The stability is greater than that of the elements of a conventional characterization matrix , as evident from comparison of Figs. 5 and 8. Consequently, it suffices to determine a small set of preset color rotation matrices that cover a range of WPs or CCTs, with each matrix optimized for a particular preset WP or CCT: where . When the AW is calculated by the camera, the color rotation matrix optimized for the closest-matching WP or CCT preset can be selected. However, the WB matrix appropriate for the AW is always applied prior to , so the overall color conversion can be expressed as Since is decoupled from the rotation matrices, this approach will achieve correct WB without the need to interpolate the rotation matrices.It should be noted that the camera raw space correctly represents the scene (albeit via a non-standard color model) and that the raw channel multipliers contained within are not applied to “correct” anything concerning the representation of the true scene white by the camera raw space, as often assumed. 
The multipliers are applied to chromatically adapt the AW to the reference white of the camera raw space as part of the overall CAT required to achieve WB by emulating the chromatic adaptation mechanism of the HVS. As shown in Fig. 4, the reference white of a camera raw space is typically a magenta color when expressed using CIE colorimetry, but it serves as a useful intermediary stage in the required color transformation as it facilitates the extraction of a channel scaling component that can be decoupled from the matrix operation. Other advantages of the reformulation include the following.
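The decoupled scheme described above can be sketched as follows. The preset matrices and the CCT estimate are assumed to be supplied by the camera; the actual presets and selection logic are manufacturer specific.

```python
import numpy as np

def wb_multipliers(raw_aw):
    """Raw channel multipliers mapping the AW to the raw-space reference
    white (1, 1, 1), normalized to the green channel as is conventional."""
    r, g, b = raw_aw
    return np.array([g / r, 1.0, g / b])

def pick_rotation(cct_aw, presets):
    """Select the preset rotation matrix whose preset CCT is closest to the AW."""
    return min(presets, key=lambda p: abs(p[0] - cct_aw))[1]

def raw_to_srgb_linear(raw, raw_aw, cct_aw, presets):
    """Apply the WB multipliers, then the selected color rotation matrix."""
    d = wb_multipliers(raw_aw)
    R = pick_rotation(cct_aw, presets)
    return R @ (d * np.asarray(raw))
```

Because each row of a rotation matrix sums to unity, the white-balanced AW (a multiple of (1, 1, 1)) maps to a neutral output value regardless of which preset is selected, so correct WB is decoupled from the choice of rotation matrix.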
5.2.Example: Olympus E-M1Although the color matrices used by the camera manufacturers are generally unknown, certain manufacturers such as Sony and Olympus do reveal information about the color rotation matrices used by their cameras that can be extracted from the raw metadata. Table 1 lists the data illustrated in Fig. 8 for the preset color rotation matrices used by the Olympus E-M1 digital camera, along with the scene illumination CCT ranges over which each matrix is applied. Figure 9 shows how the raw channel multipliers for the same camera vary as a function of CCT. The data was extracted from raw metadata using the freeware “ExifTool” application.48 The color conversion strategy of the camera can be summarized as follows.
The Olympus E-M1 camera also includes several scene illumination presets. The color rotation matrices and associated raw channel multipliers for these scene presets are listed in Table 2. For a given CCT, the scene preset matrices and multipliers are not necessarily the same as those listed in Table 1 because the scene preset renderings include a color tint away from the Planckian locus, and so their chromaticity coordinates differ from those of the corresponding custom CCT presets. For the same reason, the “fine weather,” “underwater,” and “flash” scene mode presets use the same color rotation matrix but very different raw channel multipliers. Table 1Raw-to-sRGB color rotation matrices corresponding to ranges of in-camera custom CCTs for the Olympus E-M1 camera with 12-100/4 lens and v4.1 firmware. The middle column lists the matrices extracted from the raw metadata, which are 8-bit fixed-point numbers. The right column lists the same matrices divided by 256 to four decimal places, so that each row sums to unity rather than 256.
Table 2Raw-to-sRGB color rotation matrices and associated raw channel multipliers corresponding to in-camera scene modes for the Olympus E-M1 camera with 12-100/4 lens and v4.1 firmware. All values are 8-bit fixed-point numbers that can be divided by 256. Since the scene mode presets include a color tint away from the Planckian locus, the multipliers and matrices do not necessarily have the same values as the custom CCT presets with the same CCT listed in Table 1.
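The fixed-point convention of Tables 1 and 2 is straightforward to work with in code. The entries below are hypothetical values in the style of Table 1, not the actual E-M1 data:

```python
import numpy as np

# Hypothetical 8-bit fixed-point rotation matrix entries (each row sums to 256)
fixed = np.array([[380,  -76,  -48],
                  [-52,  340,  -32],
                  [ 12, -148,  392]])

R = fixed / 256.0          # floating-point matrix; each row now sums to unity
row_sums = R.sum(axis=1)
```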
For any given camera model, all preset color rotation matrices are dependent on factors such as the output-referred color space selected by the user in the camera settings (such as sRGB or Adobe® RGB), the lens model used to take the photograph, and the firmware version. Due to sensor calibration differences between different examples of the same camera model, there can also be a dependence on the individual camera used to take the photograph. For example, Fig. 10(a) shows a photo of a color chart in the camera raw space taken under D65 illumination. Like Fig. 6(a), the green color tint arises from the fact that the camera raw space values are being interpreted as RGB values in the sRGB color space for display purposes without any color characterization matrix applied to convert the colors. Figure 10(b) shows the same photo after applying the diagonal WB matrix to chromatically adapt the AW to the camera raw space reference white. The raw channel multipliers remove the green tint, but the photo remains in the camera raw space. Remarkably, the colors appear realistic, although desaturated. To illustrate that the camera raw space reference white is actually a magenta color when expressed using CIE colorimetry, Fig. 10(c) converts (b) to the sRGB color space without any further chromatic adaptation by applying a conventional characterization matrix followed by . In contrast, Fig. 10(d) was obtained by applying the appropriate raw channel multipliers followed by the sRGB color rotation matrix in place of and . The color rotation matrix includes a CAT that adapts the camera raw space reference white to the sRGB color space D65 reference white. In this particular case, , so the color rotation matrix defined by Eq. (40) becomes Substituting into Eq. 
(37) yields Consequently, the rotation matrix reverses the effect of the WB matrix since the scene and display illumination is the same.6.DCRaw Open-Source Raw ConverterThe widely used DCRaw open-source raw converter (pronounced “dee-see-raw”) written by D. Coffin can process a wide variety of raw image file formats. It is particularly useful for scientific analysis as it can decode raw files without demosaicing, it can apply linear tone curves, and it can directly output to the camera raw space and the CIE XYZ color space. Some relevant commands are listed in Table 3. However, DCRaw by default outputs directly to the sRGB color space with a D65 illumination WP by utilizing a variation of the traditional digital camera strategy described in the previous section.28 Recall that the color rotation matrix optimized for use with the scene illumination is defined by Eq. (40): Although digital cameras typically use a small set of preset rotation matrices optimized for a selection of preset illuminants, DCRaw instead takes a very computationally simple approach that uses only a single rotation matrix optimized for D65 scene illumination, . This is achieved using a characterization matrix optimized for D65 illumination, which means that the matrix contained within is replaced by and the matrix is not required: The diagonal WB matrix contains raw channel multipliers appropriate for D65 illumination: The overall transformation from the camera raw space to the linear form of sRGB is defined by which can be more explicitly written as Consequently, all chromatic adaptation is performed using raw channel multipliers. Notice that the WB matrix appropriate for the scene illumination estimate is always applied to the raw data in Eq. (50), so WB is always correctly achieved in principle.Table 3A selection of relevant DCraw commands available in version 9.28. 
Note that the RGB output colorspace options use color rotation matrices and so should only be used with the correct raw channel multipliers due to the inbuilt CAT.
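A sketch of this construction is given below. The characterization matrix used in the test is a stand-in, and the row normalization shown is one simple way to obtain the unity-row-sum property; DCRaw's own derivation proceeds via the Adobe matrices as described in Sec. 6.1.

```python
import numpy as np

# Standard XYZ -> linear sRGB matrix (IEC 61966-2-1 values, rounded)
SRGB_FROM_XYZ = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])

def d65_rotation_and_multipliers(C_d65, xyz_d65=(0.9505, 1.0, 1.0891)):
    """Build the single D65 rotation matrix and the D65 raw channel
    multipliers from a raw-to-XYZ characterization matrix optimized for D65."""
    raw_d65 = np.linalg.inv(C_d65) @ np.asarray(xyz_d65)  # raw values of the D65 WP
    mults = raw_d65.max() / raw_d65       # multipliers (green-normalized if green is max)
    A = SRGB_FROM_XYZ @ C_d65 @ np.diag(1.0 / mults)      # fold out the WB scaling
    return A / A.sum(axis=1, keepdims=True), mults        # rescale rows to sum to unity
```

Applying the multipliers followed by the returned rotation matrix then reproduces the plain characterization pipeline up to an overall scale, while guaranteeing that white-balanced neutrals stay neutral.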
Although the color transformation matrix is optimized for D65 scene illumination, applying the color rotation matrix to transform from the camera raw space to sRGB is valid for any scene illumination CCT since color rotation matrices vary very slowly as a function of CCT, as evident from Fig. 8. However, is the optimum choice for D65 scene illumination, so a drawback of this simplified approach is that the overall color transformation loses some accuracy when the scene illumination differs significantly from D65. 6.1.Example: Olympus E-M1DCRaw uses color rotation matrices obtained via Eq. (48), so a characterization matrix is required for a given camera model. For this purpose, DCRaw uses the Adobe “ColorMatrix2” matrices from the Adobe® DNG converter.32 Due to highlight recovery logic requirements, the Adobe matrices map in the opposite direction to the conventional characterization matrices defined in Sec. 2.4, and therefore where is a normalization constant. For the Olympus E-M1 digital camera, the DCRaw source code stores the ColorMatrix2 entries in the following manner: Dividing by 10,000 and rearranging in matrix form yields Recall from Sec. 2.5 that characterization matrices are typically normalized so that the WP of the characterization illuminant maps to raw values such that the maximum value (typically the green channel) just reaches saturation when a 100% neutral diffuse reflector is photographed under the characterization illuminant. Although the ColorMatrix2 matrices are optimized for CIE illuminant D65, they are by default normalized according to the WP of CIE illuminant D50 rather than D65: where . Accordingly, they need to be rescaled for use with DCRaw: where . In the present example, it is found that , so By considering the unit vector in the sRGB color space, the above matrix can be used to obtain the raw tristimulus values for the D65 illumination WP: where converts from the linear form of sRGB to CIE XYZ. Now Eq. 
(49) can be used to extract the raw channel multipliers for scene illumination with a D65 WP: Finally, the color rotation matrix can be calculated from Eq. (48): Each row sums to unity as required. The form of the matrix is similar to the in-camera Olympus matrices listed in Table 1. For comparison purposes, the appropriate listed matrix is the one valid for scene illuminant CCTs ranging from 5800 to 6600 K. Some numerical differences are expected since D65 illumination has a color tint. Other numerical differences are likely due to differing characterization practices between Olympus and Adobe. Additionally, Adobe uses HSV (hue, saturation, and value) tables to emulate the final color rendering of the in-camera JPEG processing engine.6.2.DCRaw and MATLABAs shown in Table 3, DCRaw includes many commands that are useful for scientific research. However, it is important to note that the RGB output color space options use color rotation matrices rather than the concatenation of the raw to CIE XYZ and CIE XYZ to RGB matrices. Since color rotation matrices include an inbuilt CAT, these options will only achieve the expected result in combination with the correct raw channel multipliers. For example, setting each raw channel multiplier to unity will not prevent some partial chromatic adaptation from being performed if the sRGB output is selected since the DCRaw color rotation matrix incorporates the matrix, which is a type of . A robust way to use DCRaw for scientific research is via the “dcraw -v -D -4 -T filename” command, which provides linear 16-bit TIFF output in the raw color space without white balancing, demosaicing, or color conversion. Subsequent processing can be performed after importing the TIFF file into MATLAB® using the conventional “imread” command. Reference 49 provides a processing tutorial. The color chart photos in the present article were produced using this methodology. 
For example, after importing the file into MATLAB via the above commands, a viewable output image in the sRGB color space without any white balancing can be obtained by applying the appropriate characterization matrix after the color demosaic, followed by direct application of the standard CIE XYZ to sRGB matrix, . 7.Adobe DNGThe Adobe® DNG is an open-source raw file format developed by Adobe.32,50 The freeware DNG Converter can be used to convert any raw file into the DNG format. Although the DNG converter does not aim to produce a viewable output image, it does perform a color conversion from the camera raw space into the profile connection space (PCS) based on the CIE XYZ color space with a D50 illumination WP.40 (This is not the actual reference white of CIE XYZ, which is CIE illuminant E.) Consequently, the color processing model used by the DNG converter must provide appropriate characterization matrices along with a strategy for achieving correct WB in relation to the PCS. When processing DNG files, raw converters can straightforwardly map from the PCS to any chosen output-referred color space and associated reference white. The DNG specification provides two different color processing models referred to here as method 1 and method 2. Method 1 adopts a similar strategy as smartphones and commercial raw converters, the difference being that the data remain in the PCS. By using raw channel multipliers, method 2 adopts a similar strategy as traditional digital cameras. However, the multipliers are applied in conjunction with a so-called forward matrix instead of a rotation matrix since the mapping is to the PCS rather than to an output-referred RGB color space. 7.1.Method 1: Color MatricesThe transformation from the camera raw space to the PCS is defined as follows: Here is an Adobe color matrix optimized for the scene AW. 
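The chromatic adaptation step used by method 1 is the linear Bradford CAT. A standard implementation is sketched below; the Bradford matrix is the usual published one, and the white points are passed in as CIE XYZ vectors.

```python
import numpy as np

BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def bradford_cat(xyz_src_white, xyz_dst_white):
    """3x3 CAT mapping the source white to the destination white:
    XYZ -> sharpened cone-like space, von Kries scaling, back to XYZ."""
    s = BRADFORD @ np.asarray(xyz_src_white)
    d = BRADFORD @ np.asarray(xyz_dst_white)
    return np.linalg.inv(BRADFORD) @ np.diag(d / s) @ BRADFORD
```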
Due to highlight recovery logic requirements, Adobe color matrices map in the direction from the CIE XYZ color space to the camera raw space: This is the opposite direction to a conventional characterization matrix , so After the inverse maps from the camera raw space to CIE XYZ, the linear Bradford CAT is applied to adapt the AW to the WP of the PCS.Analogous to the issue described in Sec. 4 for smartphones, the implementation of Eq. (60) is complicated by the fact that should be optimized for the scene AW. The optimized matrix is determined by interpolating between two color matrices labeled ColorMatrix1 and ColorMatrix2, where ColorMatrix1 should be obtained from a characterization performed using a low-CCT illuminant such as CIE illuminant A and ColorMatrix2 should be obtained from a characterization performed using a high-CCT illuminant such as CIE illuminant D65.32 The optimized matrix is calculated by interpolating between ColorMatrix1 and ColorMatrix2 based on the scene illumination CCT estimate denoted by CCT(AW), together with the CCTs associated with each of the two characterization illuminants denoted by and , respectively, with . 7.2.Color Matrix NormalizationRecall from Sec. 2.5 that characterization matrices are typically normalized so that the characterization illuminant WP in the CIE XYZ color space just saturates the raw data in the camera raw space and that the green raw channel is typically the first to saturate. However, in the present context, the Adobe ColorMatrix1 and ColorMatrix2 matrices require a common normalization that is convenient for performing the interpolation. Analogous to Sec. 4.1, the AW is not known in terms of the CIE XYZ color space prior to the interpolation. Instead, ColorMatrix1 and ColorMatrix2 are by default normalized so that the WP of the PCS just saturates the raw data: where . 
For example, the default ColorMatrix1 and ColorMatrix2 for the Olympus E-M1 camera are, respectively, normalized as follows:The interpolated initially inherits this normalization. However, after has been determined, the CIE XYZ values for the AW will be known. Consequently, the Adobe DNG SDK source code later re-normalizes Eq. (60) so that the AW in the camera raw space maps to the WP of the PCS when the raw data just saturates: where . This is equivalent to re-normalizing as follows: where and .7.3.Linear Interpolation Based on Inverse CCTThe method 1 interpolation algorithm is the same as that described in Sec. 4.1, except that ColorMatrix1, ColorMatrix2, and replace , , and , respectively. Furthermore, the Adobe DNG specification requires the interpolation method to be linear interpolation based upon inverse CCT.32 Again, the interpolation itself is complicated by the fact that the AW is typically calculated by the camera in terms of raw values , , and , but the corresponding CCT(AW) requires knowledge of the chromaticity coordinates. This means converting to CIE XYZ via a matrix transformation that itself depends upon the unknown CCT(AW), which can be solved using a self-consistent iteration procedure.
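The default normalization described above can be sketched as follows: the color matrix (mapping XYZ to raw) is scaled so that the PCS WP just saturates the raw data, i.e., its largest resulting raw channel equals 1. The D50 values used here are the commonly quoted ICC PCS figures.

```python
import numpy as np

XYZ_D50 = np.array([0.9642, 1.0, 0.8249])  # PCS WP (ICC convention)

def normalize_to_pcs_wp(C_xyz_to_raw):
    """Scale an XYZ -> raw color matrix so the PCS WP just saturates
    the raw data (its largest mapped raw channel equals 1)."""
    C = np.asarray(C_xyz_to_raw, dtype=float)
    return C / (C @ XYZ_D50).max()
```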
Figure 12 illustrates the results of inverse-CCT based linear interpolation using the Adobe color matrices defined by Eq. (64) for the Olympus E-M1 camera. Note that the ColorMatrix2 is the same as that defined by Eq. (53), which was extracted from the DCRaw source code. Since maps in the direction from the CIE XYZ color space to the camera raw space, the inverse of the interpolated can be compared to a conventional characterization matrix at a given illuminant CCT. Figure 13 shows the inverse of the interpolated plotted as a function of CCT, and this figure can be compared with Fig. 5, which shows conventional characterization matrices for the same camera optimized for a selection of CCTs. Although the two plots use different normalizations since the characterization matrices are normalized according to their characterization illuminant WP rather than the WP of the PCS, the variation with respect to CCT is similar. However, it is evident that the interpolated loses accuracy for CCTs below . 7.4.Method 2: Forward MatricesConsider the transformation from the camera raw space to the PCS defined by Eq. (60): where is the Adobe color matrix optimized for the scene AW. Method 2 reformulates the above transformation in the following manner: The color conversion can be decomposed into two steps.
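The method 2 pipeline can be sketched under the stated conventions as follows. The forward matrix in the test is hypothetical, the multipliers are chosen so the AW maps to the raw unit vector, and the D50 WP values are the usual ICC PCS figures.

```python
import numpy as np

XYZ_D50 = np.array([0.9642, 1.0, 0.8249])  # PCS WP (ICC convention)

def raw_to_pcs(raw, raw_aw, F):
    """Method 2: raw channel multipliers adapt the AW to the raw reference
    white (1, 1, 1); the forward matrix F then maps to the PCS, where
    F @ [1, 1, 1] is required to equal the D50 WP."""
    d = 1.0 / np.asarray(raw_aw)           # multipliers: AW -> (1, 1, 1)
    return F @ (d * np.asarray(raw))
```

For any forward matrix satisfying the normalization requirement, the AW itself therefore lands exactly on the PCS WP.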
Since the forward matrix should be optimized for the scene AW, it is in practice determined by interpolating between two forward matrices, analogous to the interpolation approach used by method 1. The Adobe DNG specification provides tags for two forward matrices labeled ForwardMatrix1 and ForwardMatrix2, which should again be obtained from characterizations performed using a low-CCT illuminant and a high-CCT illuminant, respectively. The same interpolation method described in the previous section should be used, with ForwardMatrix1, ForwardMatrix2, and replacing ColorMatrix1, ColorMatrix2, and , respectively. Figure 14 shows the optimized forward matrix interpolated from ForwardMatrix1 and ForwardMatrix2 and expressed as a function of CCT for the Olympus E-M1 camera.7.5.Forward Matrix SpecificationBy comparing Eqs. (60) and (70), is algebraically related to the color matrix as follows: Since is interpolated from ForwardMatrix1 and ForwardMatrix2 in practice, these are defined as According to Eq. (72), the optimized forward matrix is by definition normalized such that the unit vector in the camera raw space maps to the D50 WP of the PCS.32 This means that ForwardMatrix1 and ForwardMatrix2 must also be normalized in this manner. For example, the default ForwardMatrix1 and ForwardMatrix2 for the Olympus E-M1 camera are, respectively, normalized as follows: The official D50 WP of the PCS is actually , , and ,40 which is a 16-bit fractional approximation of the true D50 WP defined by , , and .8.ConclusionsThe opening section of this paper showed how the DCRaw open-source raw converter can be used to directly characterize a camera without needing to determine and invert the OECF and illustrated how characterization matrices are normalized in practice.
As a consequence of camera metameric error, the camera raw space for a typical camera was shown to be warped away from the triangular shape accessible to additive linear combinations of three fixed primaries on the chromaticity diagram, and the available gamut was shown to be dependent on the characterization illuminant. It was also shown that the reference white of a typical camera raw space has a strong magenta color tint. Subsequently, this paper investigated and compared the type of color conversion strategies used by smartphone cameras and commercial raw converters, the image-processing engines of traditional digital cameras, DCRaw, and the Adobe DNG converter. Smartphones and raw conversion software applications typically adopt the type of color conversion strategy familiar in color science. This involves the application of a characterization matrix to transform from the camera raw space to the CIE XYZ color space, a CAT to chromatically adapt the estimated WP of the scene illumination to the reference white of an output-referred color space (such as D65 for sRGB), and finally a transformation from CIE XYZ to the linear form of the chosen output-referred color space. Since the optimized characterization matrix is CCT-dependent unless the Luther-Ives condition is satisfied, an optimized matrix can be determined by interpolating between two preset characterization matrices, one optimized for a low-CCT illuminant and the other optimized for a high-CCT illuminant. Simpler solutions include using a fixed characterization matrix optimized for representative scene illumination. For traditional digital cameras, this paper showed how the overall color conversion is typically reformulated in terms of raw channel multipliers along with a set of color rotation matrices . The raw channel multipliers act as a type of CAT by chromatically adapting the scene illumination WP estimate to the reference white of the camera raw space. 
Since the rows of a color rotation matrix each sum to unity, the rotation matrix subsequently transforms from the camera raw space directly to the chosen output-referred RGB color space and at the same time chromatically adapts the camera raw space reference white to that of the output-referred color space. It was shown that the variation of the elements of a color rotation matrix with respect to CCT is very small, so only a small selection of preset rotation matrices are needed, each optimized for a specified preset illuminant. This enables raw channel multipliers appropriate for the scene illumination WP estimate to be applied in combination with the preset rotation matrix associated with the closest-matching WP. The primary advantages of the reformulation are that interpolation is not required and the method can be efficiently implemented on fixed-point architecture. Furthermore, image quality can be improved by applying the raw channel multipliers prior to the color demosaic. It was shown that DCRaw uses a similar model as traditional digital cameras, except that only a single color rotation matrix is used for each camera, specifically a matrix optimized for D65 illumination, . Although the overall color conversion loses some accuracy when the scene illumination differs significantly from D65, an advantage of decoupling the raw channel multipliers from the characterization information represented by the color rotation matrix is that WB can be correctly achieved for any type of scene illumination provided raw channel multipliers appropriate for the scene illumination are applied. It was shown that the rotation matrices used by DCRaw can be derived from the inverses of the “ColorMatrix2” color characterization matrices used by the Adobe DNG converter. The Adobe DNG converter maps the camera raw space and scene illumination WP estimate to an intermediate stage in the overall color conversion, namely the PCS based on the CIE XYZ color space with a D50 WP. 
Method 1 defines the approach that is also used in commercial raw converters and advanced smartphones. A color matrix optimized for the scene illumination is obtained by interpolating between the “ColorMatrix1” low-CCT and “ColorMatrix2” high-CCT preset matrices. Due to highlight recovery logic requirements, these color matrices map in the opposite direction to conventional characterization matrices. Furthermore, the ColorMatrix1 and ColorMatrix2 matrices are initially normalized according to the WP of the PCS rather than their corresponding characterization illuminants. Since the Adobe color matrices are freely available, their appropriately normalized inverses can serve as useful high-quality characterization matrices when camera characterization equipment is unavailable. Method 2 offered by the Adobe DNG converter uses raw channel multipliers in a similar manner as traditional digital cameras. However, these are applied in combination with a so-called forward matrix rather than a rotation matrix since the Adobe DNG converter does not directly map to an output-referred RGB color space, so the forward matrix rows do not each sum to unity. Although the optimized forward matrix is determined by interpolating the “ForwardMatrix1” and “ForwardMatrix2” preset matrices, the variation of the optimized forward matrix with respect to CCT is very small, analogous to a rotation matrix. 9.Appendix: Raw Data ModelConsider the raw values expressed as an integration over the spectral passband of the camera according to Eq. (5): Although can be regarded as the average spectral irradiance at a photosite, it is more precisely described as the spectral irradiance convolved with the camera system point-spread function (PSF) and sampled at positional coordinates on the sensor plane: where and are the pixel pitches in the horizontal and vertical directions. 
A noise model can also be included.28,51 The quantity denoted by is the ideal spectral irradiance at the sensor plane that would theoretically be obtained in the absence of the system PSF: where is the corresponding scene spectral radiance, is the system magnification, is the working -number of the lens, is the lens transmittance factor, and is the object-space angle between the optical axis and the indicated scene coordinates. If the vignetting profile of the lens is known, the cosine fourth term can be replaced by the relative illumination factor, which is an image-space function describing the real vignetting profile.52The constant that appears in Eq. (5) places an upper bound on the magnitude of the raw values. It can be shown28 that is given by where is the exposure duration and is the conversion factor between electron counts and raw values for mosaic , expressed using units.53,54 The conversion factor is inversely proportional to the ISO gain , which is the analog gain setting of the programmable gain amplifier situated upstream from the ADC: Here is the unity gain, which is the gain setting at which . Full-well capacity is denoted by , and is the raw clipping point, which is the maximum available raw level. This value is not necessarily as high as the maximum raw level provided by the ADC given its bit-depth , which is DN, particularly if the camera includes a bias offset that is subtracted before the raw data is written.28,53The least analog amplification is defined by , which corresponds to the base ISO gain.28,51 The numerical values of the corresponding camera ISO settings are defined using the JPEG output rather than the raw data.55,56 These user values also take into account digital gain applied via the JPEG tone curve. When comparing raw output from cameras based on different sensor formats, equivalent rather than the same exposure settings should be used when possible.57 As noted in Sec. 
2.2, the actual raw values obtained in practice are quantized values modeled by taking the integer part of Eq. (5), and it is useful to subsequently normalize them to the range [0,1] by dividing Eq. (5) by the raw clipping point. ReferencesJ. Von Kries,
“Chromatic adaptation,”
Sources of Color Science, 145
–148 MIT Press, Cambridge
(1970). Google Scholar
H. E. Ives,
“The relation between the color of the illuminant and the color of the illuminated object,”
Trans. Illum. Eng. Soc., 7 62
–72
(1912). https://doi.org/10.1002/COL.5080200112 Google Scholar
J. Nakamura,
“Basics of image sensors,”
Image Sensors and Signal Processing for Digital Still Cameras, 53
–93 CRC Press, Taylor & Francis Group, Boca Raton, Florida
(2006). Google Scholar
J. Jiang et al.,
“What is the space of spectral sensitivity functions for digital color cameras?,”
in IEEE Workshop Appl. Comput. Vision,
168
–179
(2013). https://doi.org/10.1109/WACV.2013.6475015 Google Scholar
R. Luther,
“Aus dem Gebiet der Farbreizmetrik (On color stimulus metrics),”
Z. Tech. Phys., 12 540
–558
(1927). ZTPHAU 0373-0093 Google Scholar
P.-C. Hung,
“Sensitivity metamerism index for digital still camera,”
Proc. SPIE, 4922 1
–14
(2002). https://doi.org/10.1117/12.483116 PSISDG 0277-786X Google Scholar
P.-C. Hung,
“Color theory and its application to digital still cameras,”
Image Sensors and Signal Processing for Digital Still Cameras, 205
–222 CRC Press, Taylor & Francis Group, Boca Raton, Florida
(2006). Google Scholar
P. M. Hubel et al.,
“Matrix calculations for digital photography,”
in Proc., IS&T Fifth Color Imaging Conf.,
105
–111
(1997). Google Scholar
J. Holm, I. Tastl and S. Hordley,
“Evaluation of DSC (digital still camera) scene analysis error metrics—Part 1,”
in Proc., IS&T/SID Eighth Color Imaging Conf.,
279
–287
(2000). Google Scholar
G. D. Finlayson and Y. Zhu,
“Finding a colour filter to make a camera colorimetric by optimisation,”
Lect. Notes Comput. Sci., 11418 53
–62
(2019). https://doi.org/10.1007/978-3-030-13940-7_5 LNCSD9 0302-9743 Google Scholar
D. Alleysson, S. Susstrunk and J. Hérault,
“Linear demosaicing inspired by the human visual system,”
IEEE Trans. Image Process., 14
(4), 439
–449
(2005). https://doi.org/10.1109/TIP.2004.841200 IIPRE4 1057-7149 Google Scholar
International Organization for Standardization,
“Graphic technology and photography—colour target and procedures for the colour characterisation of digital still cameras (DCSs),”
(2012). Google Scholar
International Electrotechnical Commission,
“Multimedia systems and equipment—colour measurement and management—Part 2-1: colour management—default RGB colour space—sRGB,”
(1999). Google Scholar
Adobe Systems Incorporated,
“Adobe® RGB (1998) color image encoding,”
(2005). Google Scholar
International Organization for Standardization,
“Photography and graphic technology—extended colour encodings for digital image storage, manipulation and interchange—Part 2: reference output medium metric RGB colour image encoding (ROMM RGB),”
(2013). Google Scholar
J. Holm,
“Capture color analysis gamuts,”
in Proc. 14th Color and Imaging Conf., IS&T,
108
–113
(2006). Google Scholar
D. Coffin,
(2015). Google Scholar
Y. Ohno,
“Practical use and calculation of CCT and DUV,”
LEUKOS, 10
(1), 47
–55
(2014). https://doi.org/10.1080/15502724.2014.839020 Google Scholar
D. L. MacAdam,
“Projective transformations of I. C. I. color specifications,”
J. Opt. Soc. Am., 27 294
(1937). https://doi.org/10.1364/JOSA.27.000294 JOSAAH 0030-3941 Google Scholar
CIE (Commission Internationale de l’Eclairage),
in Proc. 14th Session,
36
(1959). Google Scholar
Commission Internationale de l’Eclairage, “Colorimetry,” Vienna (2004).
G. D. Finlayson and M. S. Drew, “White-point preserving color correction,” in Proc. IS&T Fifth Color Imaging Conf., 258–261 (1997).
International Organization for Standardization, “Photography—electronic still picture cameras—methods for measuring opto-electronic conversion functions (OECFs),” (2009).
D. A. Rowlands, Physics of Digital Photography, IOP Publishing Ltd., Bristol (2017).
D. Varghese, R. Wanat and R. K. Mantiuk, “Colorimetric calibration of high dynamic range images with a ColorChecker chart,” in 2nd Int. Conf. and SME Workshop HDR Imaging (2014).
J. McCann, “Do humans discount the illuminant?,” Proc. SPIE, 5666, 9–16 (2005). https://doi.org/10.1117/12.594383
International Organization for Standardization, “Photography—electronic still picture imaging—vocabulary,” (2012).
Adobe Systems Incorporated, “Adobe digital negative (DNG) specification,” (2012).
G. Buchsbaum, “A spatial processor model for object color perception,” J. Franklin Inst., 310, 1–26 (1980). https://doi.org/10.1016/0016-0032(80)90058-7
E. H. Land and J. McCann, “Lightness and Retinex theory,” J. Opt. Soc. Am., 61(1), 1–11 (1971). https://doi.org/10.1364/JOSA.61.000001
S. Hordley, “Scene illuminant estimation: past, present and future,” Color Res. Appl., 31, 303 (2006). https://doi.org/10.1002/col.20226
E. Y. Lam and G. S. K. Fung, “Automatic white balancing in digital photography,” in Single-Sensor Imaging: Methods and Applications for Digital Cameras, 267–294, CRC Press, Boca Raton, Florida (2009).
Commission Internationale de l’Eclairage, “Fundamental chromaticity diagram with physiological axes—Part 1,” Vienna (2006).
M. D. Fairchild, Color Appearance Models, 3rd ed., Wiley, New York (2013).
K. M. Lam, “Metamerism and colour constancy,” (1985).
International Color Consortium, “Image technology colour management—architecture, profile format, and data structure,” (2010).
J. Holm et al., “Color processing for digital photography,” in Color Engineering: Achieving Device Independent Color, 179–220, John Wiley & Sons, Ltd., Chichester (2002).
A. R. Robertson, “Computation of correlated color temperature and distribution temperature,” J. Opt. Soc. Am., 58, 1528 (1968). https://doi.org/10.1364/JOSA.58.001528
Q. Xingzhong, “Formulas for computing correlated color temperature,” Color Res. Appl., 12(5), 285 (1987). https://doi.org/10.1002/col.5080120511
C. S. McCamy, “Correlated color temperature as an explicit function of chromaticity coordinates,” Color Res. Appl., 17, 142 (1992). https://doi.org/10.1002/col.5080170211
J. Hernández-Andrés, R. L. Lee and J. Romero, “Calculating correlated color temperatures across the entire gamut of daylight and skylight chromaticities,” Appl. Opt., 38, 5703 (1999). https://doi.org/10.1364/AO.38.005703
C. Li et al., “Accurate method for computing correlated color temperature,” Opt. Express, 24(13), 14066 (2016). https://doi.org/10.1364/OE.24.014066
R. Sumner, “Processing RAW Images in MATLAB,” (2014).
Adobe Systems Incorporated, “Introducing the digital negative specification: information for manufacturers,” (2004).
D. A. Rowlands, Field Guide to Photographic Science, SPIE Press, Bellingham (2020).
P. Maeda, P. Catrysse and B. Wandell, “Integrating lens design with digital camera simulation,” Proc. SPIE, 5678, 48 (2005). https://doi.org/10.1117/12.588153
E. Martinec, “Noise, dynamic range and bit depth in digital SLRs,” (2008).
T. Mizoguchi, “Evaluation of image sensors,” in Image Sensors and Signal Processing for Digital Still Cameras, 179–203, CRC Press, Taylor & Francis Group, Boca Raton, Florida (2006).
Camera & Imaging Products Association, “Sensitivity of digital cameras,” (2004).
International Organization for Standardization, “Photography—digital still cameras—determination of exposure index, ISO speed ratings, standard output sensitivity, and recommended exposure index,” (2006).
D. A. Rowlands, “Equivalence theory for cross-format photographic image quality comparisons,” Opt. Eng., 57(11), 110801 (2018). https://doi.org/10.1117/1.OE.57.11.110801
Biography

D. Andrew Rowlands received his BSc degree in mathematics and physics and his PhD in physics from the University of Warwick, UK, in 2000 and 2004, respectively. He has held research positions at the University of Bristol, UK, Lawrence Livermore National Laboratory, USA, Tongji University, China, and the University of Cambridge, UK, and he has authored three books on the science of digital photography.