
Poor colour saturation

Playing with the Pixy2, the colour saturation of the images produced by the camera seems poor, and as a result the colour discrimination in color_connected_components and pan_tilt_demo is disappointing unless you can ensure a completely neutral background. I’ve attached two pics: one screen-grabbed from PixyMon (under Linux Mint) and the other of the same scene taken with my iPhone 5S. I would expect the red chair and the yellow USB lead to be easily distinguished from anything else. Comparing the HSL values of pixels between the two images, the iPhone image has roughly twice the saturation.
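For what it’s worth, this is roughly how I compared them: mean HSL saturation over all pixels (a quick sketch assuming Pillow and NumPy; the filenames just stand in for the two attachments):

```python
import numpy as np
from PIL import Image

def mean_hsl_saturation(path):
    """Mean HSL saturation of an image, computed from the standard
    HSL definition: S = (max - min) / (1 - |2L - 1|), L = (max + min)/2."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    light = (mx + mn) / 2.0
    denom = 1.0 - np.abs(2.0 * light - 1.0)
    sat = np.where(denom > 1e-6, (mx - mn) / np.maximum(denom, 1e-6), 0.0)
    return sat.mean()

print("PixyMon grab:", mean_hsl_saturation("pixymon_grab.png"))
print("iPhone photo:", mean_hsl_saturation("iphone_photo.jpg"))
```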

There was a similar thread in 2014 relating to Pixy1, but the response only offered suggestions about colour balance and exposure, neither of which seems to address the problem.

The response referred to pixydevice/libpixy/camera.cpp on GitHub, so I went searching for its counterpart for Pixy2. I found pixy2/src/device/libpixy_m4/src/camera.cpp, which looks very similar but has an extra function, cam_setSaturation, which seems to be just what I need.

This doesn’t seem to be exposed by PixyMon. So, my questions:

  1. Will it do me any good?
  2. How do I tweak it?

[Attachment: PixyMon screenshot, 2018-09-19]

[Attachment: iPhone photo, Img_2761a]

Hello Philip,
Your iPhone (and other cameras) applies color correction to get those vivid colors, through a color correction matrix and other means, some of them proprietary. These methods can greatly improve the image quality over the raw RGB values, more closely matching the way we humans perceive color.
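To make that concrete, here’s a toy sketch of what a 3x3 color correction matrix does to raw sensor RGB (the matrix values are made up for illustration and are not Pixy’s actual calibration):

```python
import numpy as np

# Hypothetical 3x3 color correction matrix (illustrative values only).
# Each output channel is a weighted mix of the raw channels; the negative
# off-diagonal terms subtract channel crosstalk, which is what boosts
# apparent saturation. Each row sums to 1.0 so neutral grays pass
# through unchanged.
CCM = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [-0.1, -0.5,  1.6],
])

def color_correct(rgb):
    """Apply the CCM to an (..., 3) float image in 0..1, then clip."""
    return np.clip(rgb @ CCM.T, 0.0, 1.0)

# A dull raw red comes out noticeably more saturated:
print(color_correct(np.array([0.6, 0.3, 0.25])))  # ~[0.79, 0.22, 0.19]
```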

The video program actually does have a saturation slider you can play with that increases the gain on Pixy’s built-in color correction matrix:

https://docs.pixycam.com/wiki/doku.php?id=wiki:v2:pixymon_index#video-tuning-tab
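As a rough model of what such a gain does (a sketch, not Pixy’s actual firmware math): a saturation gain s can itself be written as a 3x3 matrix that pushes each channel away from the pixel’s luminance, so it composes with a color correction matrix by ordinary matrix multiplication:

```python
import numpy as np

LUMA = np.array([0.299, 0.587, 0.114])  # Rec.601 luminance weights

def saturation_matrix(s):
    """s = 1 is the identity, s = 0 collapses to grayscale, and s > 1
    pushes each channel away from the pixel's luminance (more saturation)."""
    return (1.0 - s) * np.outer(np.ones(3), LUMA) + s * np.eye(3)

pixel = np.array([0.6, 0.3, 0.25])     # dull red
print(saturation_matrix(1.8) @ pixel)  # ~[0.77, 0.23, 0.14]
```

Because both steps are plain matrices, turning the slider up amounts to multiplying a matrix like this into the correction matrix before it’s applied to the pixels.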

In case you’re interested, I’ve also been told by engineering:

"The saturation setting isn’t offered in color corrected components mode, which can be counter-intuitive… CCC relies on “linearized” exposure of the pixels, meaning that raw RGB values more closely match the amount of light in a linear-proportional sense. Once the pixels go through the color correction matrix, linearity is reduced and detection accuracy is also reduced, more so when light intensity varies, for example when the object passes under a shadow. "

My take is that even though the image might look better to us, it doesn’t necessarily mean a given machine vision algorithm will agree.
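Here’s a toy illustration of the point (made-up pixel values, not Pixy’s actual pipeline): in linear raw RGB, an object’s channel ratios are identical in light and shadow, but once a saturation boost plus clipping is applied, they drift apart.

```python
import numpy as np

LUMA = np.array([0.299, 0.587, 0.114])  # Rec.601 luminance weights

def corrected(pixel, s=2.0):
    """Saturation boost followed by clipping to the 0..1 pixel range.
    The matrix step alone is linear; the clip is what breaks linearity."""
    y = LUMA @ pixel
    return np.clip(s * pixel + (1.0 - s) * y, 0.0, 1.0)

def chromaticity(pixel):
    """Per-channel share of total intensity: the kind of
    illumination-invariant quantity a detector can key on."""
    return pixel / pixel.sum()

bright = np.array([0.95, 0.35, 0.20])  # red object in full light
shadow = 0.4 * bright                  # same object at 40% illumination

# Raw linear pixels: chromaticity is identical in light and shadow.
print(chromaticity(bright), chromaticity(shadow))

# After the boost, the red channel clips in full light but not in
# shadow, so the two no longer match:
print(chromaticity(corrected(bright)), chromaticity(corrected(shadow)))
```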

Hope this helps!

Edward
