Flexible Technique to Enhance Color-image Quality for Color-deficient Observers

  • ABSTRACT

    Color-normal observers (CNOs) and color-deficient observers (CDOs) respond to color images with different preferences and emotions. A color-image quality-enhancement algorithm for CDOs is developed that easily adjusts images according to each observer’s preference or to specific image-quality factors. The color-perception differences between the CDO and CNO are analyzed and modeled in terms of the YCbCr chroma ratio and hue difference; a color-shift method is then designed to control the degree of color difference.


  • KEYWORD

    Color enhancement, Color deficiency, Color-deficient observer, Protanomaly, Deuteranomaly

  • I. INTRODUCTION

    Nowadays, displays have become more and more personalized because of the widespread use of mobile display technology in devices such as smartphones and personal computers. This accessibility also makes it possible to personalize a display’s image quality based on an individual user’s visual characteristics [1].

    There have been numerous studies on enhancing the color-image quality of displayed images, the majority of which target color-normal observers (CNOs). A color-image quality-enhancement algorithm designed for CNOs cannot be used for a color-deficient observer (CDO), because the two groups perceive color differently. The CNO’s retina has three types of cones, L (long, or red), M (medium, or green), and S (short, or blue), while in a CDO one or more cone types are missing or have sensitivities that differ from the CNO’s, resulting in different color perception [2]. Therefore, different image-processing methods are required for CDOs, taking their color-perceptual characteristics into account.

    Many image-manipulation techniques for CDOs have been published recently; however, the research purposes have been rather limited. Previous studies are chiefly categorized into two techniques: the simulation of CDOs [3, 4], and the recoloring of images to help CDOs to discriminate colors more easily [5-8].

    It is worth noting that people view images on electronic displays mostly for entertainment purposes. Therefore, not only should the extraction of exact information be considered, but also the enhancement of color-image quality for CDOs, in terms of preference, naturalness, or color emotion. However, there have been only a few studies on color-image quality enhancement for CDOs [9-11], and their conclusions are not yet decisive. Therefore, for more personalized usage of mobile displays by CDOs, further experiments on color-image quality for CDOs and simplified, flexible color-shift schemes are required.

    In this study, the changes in color emotion caused by hue shifts are investigated for observers with deuteranomaly using various video clips, and the results are compared to previous findings [11]. Moreover, the color-perception differences between the CNO and CDO are analyzed in the YCbCr color space. Based on the results of the color-image quality experiments and the color-difference analysis, a novel color-image quality-enhancement algorithm for protanomaly (altered L-cone sensitivity) and deuteranomaly (altered M-cone sensitivity) is developed, which flexibly adjusts images according to each CDO’s preference or image-quality factors using the YCbCr color space.

    II. ANALYSIS OF PERCEIVED COLOR QUALITIES OF CDOS

    The color-image quality perceived by color-deficient observers is analyzed in terms of preference, naturalness, and emotion.

       2.1. Preference and Naturalness for Color Images

    Mochizuki et al. [9] developed a color-preference enhancement algorithm for CDOs by strengthening the weak cone signals of CDOs to match those of a CNO. This approach is based on the assumption that CDOs will prefer images that they perceive as CNOs do. Chen et al. [10] used Mochizuki et al.’s algorithm to study the color preference of CDOs; the experimental results showed that, on average, CDOs preferred the cone-signal-compensated images. However, there were individual differences as well, and the CDOs’ responses were categorized into four groups. The first group showed no preference change with variation of the degree of enhancement. The second group’s preference increased as the degree of enhancement increased, while the third group preferred a specific degree of enhancement. The fourth group’s preference was opposite to that of the second group: this group did not prefer the enhanced images.

    Naturalness of a color image is evaluated with respect to the memory color of an object. The authors’ previous experiment on the preferred and natural hues for familiar objects (memory colors) [11] found little difference between the CNO and observers with deuteranomaly in which color images looked most natural. However, the observers with deuteranomaly preferred more reddish and more greenish colors than the color-normal observers did, which corresponds to the cone-signal-compensated colors.

       2.2. Color Emotion

    Color emotion refers to the emotional feelings evoked by colors or color combinations [12]. The effect of red and green hue shifts on the color emotion of observers with deuteranomaly is investigated as a new attribute of color-image quality in this study. The red-hue and green-hue areas are first determined in the YCbCr color space. Then the red hues, within 75 to 160 degrees, are shifted toward yellow or purple, while the green hues, within 180 to 280 degrees, are shifted in the blue or yellow direction. In total, four hue-shift methods (Modes 1, 2, 3, and 4) are therefore generated, as shown in Fig. 1. The Mode 1 method makes both red and green hues more yellowish, while the Mode 2 method shifts red and green hues away from yellow. Note that Mode 1 shifts hues in the direction that simulates CDO vision, while Mode 2 shifts hues in the direction that compensates the cone-signal loss of CDO vision.
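    As an illustration of how such a hue-range-dependent shift can be realized, the sketch below rotates the YCC hue angle by a fixed amount only for pixels whose hue falls in the stated red (75-160°) or green (180-280°) range. It assumes 8-bit YCbCr with Cb and Cr offset by 128 and hue defined as arctan(Cr/Cb); the ±15° shift magnitude and the hard range boundaries are illustrative assumptions, since the exact shift amounts used for Modes 1-4 are given only in Fig. 1.

```python
import numpy as np

def hue_shift_mode(ycbcr, red_shift_deg, green_shift_deg):
    """Rotate the YCC hue angle inside the red and green hue ranges.

    ycbcr           : float array (..., 3) holding Y, Cb, Cr in [0, 255]
    red_shift_deg   : rotation applied where hue lies in [75, 160] degrees
    green_shift_deg : rotation applied where hue lies in [180, 280] degrees
    """
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1] - 128.0, ycbcr[..., 2] - 128.0
    chroma = np.hypot(cb, cr)                       # YCC chroma
    hue = np.degrees(np.arctan2(cr, cb)) % 360.0    # YCC hue angle in [0, 360)

    shift = np.zeros_like(hue)
    shift[(hue >= 75.0) & (hue <= 160.0)] = red_shift_deg
    shift[(hue >= 180.0) & (hue <= 280.0)] = green_shift_deg

    new_hue = np.radians(hue + shift)
    out = np.empty(ycbcr.shape, dtype=float)
    out[..., 0] = y
    out[..., 1] = chroma * np.cos(new_hue) + 128.0
    out[..., 2] = chroma * np.sin(new_hue) + 128.0
    return out

# Under this hue convention yellow sits near 170 degrees, so Mode 1
# (both hues toward yellow) and Mode 2 (both away from yellow) could be
# approximated as follows; the 15-degree magnitude is illustrative only.
# mode1 = hue_shift_mode(img_ycbcr, red_shift_deg=+15, green_shift_deg=-15)
# mode2 = hue_shift_mode(img_ycbcr, red_shift_deg=-15, green_shift_deg=+15)
```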

    Figure 2 shows the images manipulated using the hue-shift methods. Compared to the original image (Mode 0), the yellow feathered area of the bird is more greenish in the Mode 1 and 3 images and more reddish in the Mode 2 and 4 images. In the case of the green leaves in the background, Modes 2 and 3 appear more greenish than the original, while Modes 1 and 4 appear more yellowish.

    Ten video clips were selected as test stimuli for a psychophysical experiment. Each video was eleven to twelve seconds long and contained a natural scene or animation with red or green as the dominant color. Each clip had a spatial resolution of 1280 × 720 pixels and a frame rate of 24 frames per second. Each video was hue-shifted using the four modes shown in Fig. 1.

    Five color-emotion scales were selected, following Russell’s circumplex model of affect [13], i.e. sad-happy, negative-positive, passive-active, awkward-familiar, and unclear-clear. After viewing each video clip, the subjects evaluated each emotion on a scale ranging from −4 to 4. For example, +4 represents extremely happy, while −4 corresponds to extremely sad. Thirteen observers with deuteranomaly having various degrees of deficiency participated in the experiment. The original and manipulated video clips were shown in random order on a 24-inch-wide sRGB LCD monitor in a dark room, and 3,250 responses were obtained (5 Modes × 10 videos × 5 color-emotion scales × 13 observers).

    Table 1 summarizes the average color-emotion scores. The results show that the Mode 1 images evoked the most positive emotional responses, followed by the Mode 0 and Mode 2 images. This result contradicts the finding of the previous study by Chen et al. [10]. In this experiment, the majority of the observers with deuteranomaly reported that the familiar-looking images (Mode 1, shifted toward the deuteranomaly-simulated appearance) evoked the most positive emotional responses, rather than the cone-signal-compensated images (Mode 2). Modes 3 and 4 may have received negative responses because of their unbalanced colors.

    III. FLEXIBLE COLOR-IMAGE QUALITY-ENHANCEMENT ALGORITHM FOR COLOR-DEFICIENT OBSERVERS

    The previous studies [9-11] on the color-image quality perceived by CDOs indicate that compensating the cone signals can be a good approach for enhancing overall image quality. However, depending on individual preference and the image-quality factor of interest (for example, naturalness or preference), different color transformations may generate more satisfactory results. Therefore, in this study a flexibly adjustable color-control algorithm is developed for CDOs.

       3.1. Color-perception-difference Analysis between the Color-normal and the Color-deficient

    As a first step in developing the algorithm, the color-perception difference between the CNO and CDO is analyzed using the CDO vision simulation employed in Chen et al.’s study [10]. In this study, all input images are assumed to be sRGB images, though the algorithm can be adapted to any color display by taking its color characteristics into account.

    Figure 3 shows a block diagram of the calculation process. First, the input sRGB values are converted to the corresponding CIE tristimulus values XYZ, which are used to calculate the CNO’s CIELAB values. The XYZ values are then converted to cone signals LMS. These cone signals are modified to L’M’S’, considering the type and degree of color deficiency, using Eq. (1).

    $$\begin{bmatrix} L' \\ M' \\ S' \end{bmatrix} = \mathbf{T}_{P,D}(\omega)\begin{bmatrix} L \\ M \\ S \end{bmatrix} \qquad (1)$$

    where P and D indicate the types of color deficiency (protanopia and deuteranopia, respectively), and ω is the degree of color deficiency (ω = 0 for color-normal, ω = 1 for color-blind). Depending on the sign of the control parameter k, color-deficient vision is either simulated or compensated: if k is positive, the resulting images simulate CDO vision; if k is negative, the resulting images are cone-signal-compensated images for the CDO. The modified cone signals L’M’S’ are then converted back to the corresponding tristimulus values X’Y’Z’, from which the CDO’s CIELAB values and digital RGB values are calculated.
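    The forward branch of this pipeline (sRGB to XYZ to LMS) can be sketched as follows. The sRGB (D65) conversion is standard; the Hunt-Pointer-Estévez XYZ-to-LMS matrix is an assumption, since the paper does not name the cone-fundamental matrix it uses, and the cone-signal modification of Eq. (1) is left as a stub.

```python
import numpy as np

# Standard sRGB (D65) to CIE XYZ matrix.
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])

# Hunt-Pointer-Estevez XYZ-to-LMS matrix (assumed; the paper does not
# specify which cone-fundamental matrix is used).
M_XYZ2LMS = np.array([[ 0.38971, 0.68898, -0.07868],
                      [-0.22981, 1.18340,  0.04641],
                      [ 0.00000, 0.00000,  1.00000]])

def srgb_to_lms(rgb8):
    """8-bit sRGB -> linear RGB -> XYZ -> LMS for an array of shape (..., 3)."""
    c = rgb8 / 255.0
    linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    xyz = linear @ M_RGB2XYZ.T
    return xyz @ M_XYZ2LMS.T

def modify_cone_signals(lms, deficiency_type, k):
    """Stub for Eq. (1): return L'M'S' given the deficiency type ('P' or 'D')
    and the control parameter k (positive = simulate, negative = compensate)."""
    raise NotImplementedError("Apply the cone-signal modification of Eq. (1) here.")
```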

    The colors perceived by the CNO and the CDO are compared in the CIELAB color space. Since the difference in lightness L* between the CNO and the CDO is small (average ΔL* of 1.06 and 3.17), only the CIELAB a* and b* values are compared. Figures 4 and 5 compare the CIELAB a*b* distributions and the CIELAB hue angles, respectively. It is worth noting that as the degree of color deficiency increases, the CDO perceives a color as more yellowish or more bluish than the color-normal does. Moreover, the perceived chroma of the red and green hue areas starts to decrease.

    The systematic color changes shown in Figs. 4 and 5 suggest that color-deficient-compensated or simulated images can be obtained by shifting hue angles and increasing or decreasing chroma accordingly, without calculating the cone signals directly. In other words, if the hues of the image are shifted to become more reddish or greenish, and the chroma of the red and green colors is increased, the CDO could perceive the image as the color-normal observer sees the original.

    As a color space for image manipulation, YCbCr is preferable to CIELAB because it minimizes computational complexity. Therefore, the color-perception differences are further calculated in the YCbCr color space. Since the CIELAB lightness showed only a slight difference between color-normal and color-deficient observers, only the chroma and hue-angle differences in the YCbCr space are calculated. The chroma and hue angle in the YCbCr color space are defined as follows:

    $$YCC\_Chroma = \sqrt{Cb^{2} + Cr^{2}} \qquad (2)$$
    $$YCC\_Hue = \tan^{-1}\!\left(\frac{Cr}{Cb}\right) \qquad (3)$$

    The left side of Fig. 3 shows how the YCC_Chroma and hue-angle differences are calculated. The original input sRGB values and the modified RGB values are used to calculate the YCbCr values for a normal and a deficient observer, respectively. The YCC_Hue and YCC_Chroma differences are then calculated. For the YCC_Hue difference, YCC_dHue, the difference between the cone-signal-modified hue angle and the original hue angle is computed. For the YCC_Chroma difference, the ratio of the cone-signal-modified chroma to the original chroma, YCC_Chroma_ratio, is computed. Figures 6 and 7 show the calculated values for protanomaly and deuteranomaly with various k values, as a function of the original YCC hue angle. These calculated values are modeled using simple linear equations, as shown in Figs. 8 and 9.
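    A minimal sketch of this left-hand branch is given below, assuming the full-range BT.601 RGB-to-YCbCr conversion (the paper does not state which YCbCr variant is used) and the chroma and hue definitions of Eqs. (2) and (3); rgb_original and rgb_modified denote the input sRGB image and its cone-signal-modified counterpart from Fig. 3.

```python
import numpy as np

def rgb_to_ycbcr(rgb8):
    """Full-range BT.601 RGB -> 8-bit YCbCr (assumed variant)."""
    r, g, b = rgb8[..., 0], rgb8[..., 1], rgb8[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def ycc_chroma_hue(ycbcr):
    """YCC_Chroma and YCC_Hue (degrees) as in Eqs. (2) and (3)."""
    cb, cr = ycbcr[..., 1] - 128.0, ycbcr[..., 2] - 128.0
    chroma = np.hypot(cb, cr)
    hue = np.degrees(np.arctan2(cr, cb)) % 360.0
    return chroma, hue

def perception_difference(rgb_original, rgb_modified):
    """Per-pixel YCC_dHue and YCC_Chroma_ratio between the original image
    and its cone-signal-modified counterpart."""
    c0, h0 = ycc_chroma_hue(rgb_to_ycbcr(rgb_original))
    c1, h1 = ycc_chroma_hue(rgb_to_ycbcr(rgb_modified))
    d_hue = (h1 - h0 + 180.0) % 360.0 - 180.0            # wrap to [-180, 180)
    chroma_ratio = np.where(c0 > 0, c1 / np.maximum(c0, 1e-6), 1.0)
    return d_hue, chroma_ratio
```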

       3.2. Proposed Algorithm

    Using the YCC_Chroma_ratio and YCC_Hue difference models, color-preference enhancement algorithms for color-deficient observers are developed. Figure 10 shows the flowchart of the proposed algorithm.

    As input values, the type of color deficiency (protanomaly or deuteranomaly) and control parameter k are required. As explained in the previous section, k controls the degree of color changes. With the input information, the model parameters YCC_dHue and YCC_Chroma_ratio are calculated using the data in Figs. 8 and 9, for each input pixel. Then YCC_dHue and YCC_Chroma_ratio values are used to calculate output YCC values using Eq. (4).

    $$\begin{aligned} Y_{out} &= Y_{in} \\ Cb_{out} &= YCC\_Chroma\_ratio \cdot YCC\_Chroma_{in} \cdot \cos\!\left(YCC\_Hue_{in} + YCC\_dHue\right) \\ Cr_{out} &= YCC\_Chroma\_ratio \cdot YCC\_Chroma_{in} \cdot \sin\!\left(YCC\_Hue_{in} + YCC\_dHue\right) \end{aligned} \qquad (4)$$

    Though k can take on any value within (−1, 1), to avoid a serious artifact caused by out-of-gamut colors, the range of k is limited to within (−0.5, 0.5) for an sRGB display. When k = 0.5, the resulting image will be the simulation of the color anomaly having a deficiency level of 0.5. When k = −0.5, the resulting image will be the cone-signal-compensated image for the color anomaly having a deficiency level of 0.5. To maximize the flexibility of color manipulation, k can be set differently for chroma and hue for each hue range.
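    A minimal sketch of the per-pixel transform in Fig. 10 is shown below, assuming the YCC chroma and hue definitions of Eqs. (2) and (3) and the chroma-scaling, hue-shifting form of Eq. (4). The linear models for YCC_dHue and YCC_Chroma_ratio are hypothetical placeholders with identity defaults; the actual slopes and intercepts are those fitted per hue range in Figs. 8 and 9 and are not reproduced here.

```python
import numpy as np

def dhue_model(hue_deg, k, slope=0.0, intercept=0.0):
    """Hypothetical linear YCC_dHue model as a function of the original hue,
    scaled by k; the real coefficients come from Figs. 8 and 9."""
    return k * (slope * hue_deg + intercept)

def chroma_ratio_model(hue_deg, k, slope=0.0, intercept=1.0):
    """Hypothetical linear YCC_Chroma_ratio model; k = 0 gives a ratio of 1."""
    return 1.0 + k * (slope * hue_deg + intercept - 1.0)

def enhance_ycbcr(ycbcr, k):
    """Apply Eq. (4): rotate the hue by YCC_dHue and scale the chroma by
    YCC_Chroma_ratio while keeping Y unchanged. k is clipped to (-0.5, 0.5)
    to avoid severe out-of-gamut artifacts on an sRGB display."""
    k = float(np.clip(k, -0.5, 0.5))
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1] - 128.0, ycbcr[..., 2] - 128.0
    chroma = np.hypot(cb, cr)
    hue = np.degrees(np.arctan2(cr, cb)) % 360.0

    new_hue = np.radians(hue + dhue_model(hue, k))
    new_chroma = chroma * chroma_ratio_model(hue, k)

    out = np.empty(ycbcr.shape, dtype=float)
    out[..., 0] = y
    out[..., 1] = new_chroma * np.cos(new_hue) + 128.0
    out[..., 2] = new_chroma * np.sin(new_hue) + 128.0
    return out
```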

    As an example showing the similarity between the proposed algorithm and cone-signal-based algorithms, Fig. 11 shows the original image, protanomaly simulated image, cone-signal-compensated image, and resulting images using the proposed algorithm. Note that when the value of k is positive, the resulting image is similar to the CDO’s vision simulation, whereas when k is negative, the resulting image is similar to the result of Mochizuki et al.’s algorithm. The CIELAB color differences are 2.23 ± 1.34 ΔE*ab between the proposed and the cone-signal simulation and 3.15 ± 2.69 ΔE*ab between the proposed and the cone-signal compensation. Note that the resulting images are similar to those from the previous techniques, and the proposed algorithm using the YCbCr color space is simpler to implement.

    IV. PERFORMANCE EVALUATION OF THE PROPOSED ALGORITHM

    The proposed algorithm was implemented on a 10-inch tablet PC to generate emotionally enhanced images for CDOs. The peak white of the tablet PC was around 400 cd/m², and the color gamut and monitor gamma were similar to those of sRGB. The same short video clips used for the color-emotion experiment in Section 2.2 were used as the test stimuli. Though real-time image manipulation is possible on the tablet PC, pre-manipulated images were used for the experiment.

    Ten subjects with protanomaly and ten with deuteranomaly participated in the experiment. First, each subject underwent a vision test to determine the degree of color deficiency. Then each video clip was transformed for each subject, considering that subject’s type and degree of color deficiency. A five-point Likert scale was used to rate the sad-happy, negative-positive, passive-active, awkward-familiar, and unclear-clear attributes of the transformed images, and the subjects’ responses were averaged. The average scores show highly positive responses: 4.8 for sad-happy, 4.75 for negative-positive, 4.65 for awkward-familiar, 3.7 for passive-active, and 4.55 for unclear-clear. These results indicate that the proposed algorithm can easily be used on mobile displays to generate images that the user prefers.

    V. CONCLUSION

    The color-emotion shifts corresponding to hue changes were evaluated using various video clips by thirteen observers with deuteranomaly. The experimental results showed that positive emotions were evoked in CDOs when colors were manipulated to be more like color-deficient vision, unlike in the previous study, which showed the highest preference for weakly cone-signal-compensated images. This difference indicates that different color-shift strategies are required for different applications.

    In this research, a color-image quality-enhancement algorithm for protanomaly and deuteranomaly was developed to easily adjust images according to each observer’s preference or image quality factors. The color-perception differences between the CDO and CNO were analyzed and modeled in terms of the YCbCr chroma ratio and hue difference; then a color-shift method was designed to control the degree of color difference.

  • 1. Nam J., Ro Y. M., Huh Y., Kim M. 2005 Visual content adaptation according to user perception characteristics [IEEE Trans. Multimedia] Vol.7 P.435-445 google doi
  • 2. Fairchild M. D. 2013 Color appearance models google
  • 3. Brettel H., Vienot F., Mollon J. 1997 Computerized simulation of color appearance for dichromats [J. Opt. Soc. Am. A] Vol.14 P.2647-2655 google doi
  • 4. Machado G. M., Oliveira M. M., Fernandes L. A. 2009 A physiologically-based model for simulation of color vision deficiency [IEEE Trans. Vis. Comput. Graphics] Vol.15 P.1291-1298 google doi
  • 5. Huang J. B., Tseng Y. C., Wu S. I., Wang S. J. 2007 Information preserving color transformation for protanopia and deuteranopia [IEEE Signal Process. Lett.] Vol.14 P.711-714 google doi
  • 6. Huang C. R., Chiu K. C., Chen C. S. 2011 Temporal color consistency-based video reproduction for dichromats [IEEE Trans. Multimedia] Vol.13 P.435-445 google
  • 7. Jeong J. Y., Kim H. J., Wang T. S., Yoon Y. J., Ko S. J. 2011 An efficient re-coloring method with information preserving for the color-blind [IEEE Trans. Consum. Electron.] Vol.57 P.1953-1960 google doi
  • 8. Oliveira M. M. 2013 Towards more accessible visualizations for color-vision-deficient individuals [Comput. Sci. Eng.] Vol.15 P.80-87 google doi
  • 9. Mochizuki R., Nakamura T., Chao J., Lenz R. 2008 Color-weak correction by discrimination threshold matching [Proc. CGIV’08] P.208-213 google
  • 10. Chen Y., Guan Y., Ishikawa T., Eto H., Nakatsue T., Chao J. 2014 Preference for color-enhanced images assessed by color deficiencies [Color Res. Appl.] Vol.39 P.234-251 google doi
  • 11. Baek Y. S., Kwak Y., Woo S., Park C. 2015 Preferred memory color difference between the deuteranomalous and normal color vision [Proc. SPIE] Vol.9395 P.939517 google
  • 12. Ou L. C., Luo M. R., Woodcock A., Wright A. 2004 A study of colour emotion and colour preference. Part I: Colour emotions for single colours [Color Res. Appl.] Vol.29 P.232-240 google doi
  • 13. Posner J., Russell J. A., Peterson B. S. 2005 The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology [Dev. Psychopathol.] Vol.17 P.715-734 google
  • [FIG. 1.] Hue-shift methods for color-emotion experiments for CDOs.
  • [FIG. 2.] Images manipulated using the hue-shift methods for color-emotion experiments.
  • [TABLE 1.] Experimental color-emotion results for deuteranomaly.
  • [FIG. 3.] Method to calculate color-perception difference between color-normal and color-deficient observers.
  • [FIG. 4.] Color-perception comparison between the color-normal and the color-deficient in the CIELAB a*b* plane.
  • [FIG. 5.] Comparison of perceived CIELAB hue angle between color-normal and color-deficient observers.
  • [FIG. 6.] Calculated YCC_Hue angle differences and YCC_Chroma_ratio for protanomaly.
  • [FIG. 7.] Calculated YCC_Hue angle differences and YCC_Chroma_ratio for deuteranomaly.
  • [FIG. 8.] YCC_Hue angle differences and YCC_Chroma_ratio modeling for protanomaly.
  • [FIG. 9.] YCC_Hue angle differences and YCC_Chroma_ratio modeling for deuteranomaly.
  • [FIG. 10.] Flowchart of flexible color-enhancement algorithm for CDOs.
  • [FIG. 11.] Examples of images manipulated using the proposed algorithm.