
Image resolution seems low...

I’m excited about my PixyCam; however, the image it produces (viewable directly through PixyMon) is fairly grainy. I wouldn’t have thought this would be the case with a 720p image sensor.

Actually, looking back at the documentation now, I see that the image sensor is being sampled at 50 Hz and thus can only provide images at VGA resolution, but the image quality still seems fairly low even for VGA. I’ve tried adjusting the lens but haven’t had any luck making the image any sharper.

So, first: why is the produced image so grainy? Is it just because images are captured at VGA resolution? For example, the Kickstarter page shows the color code example photo of the wall outlet, and the image produced through Pixy looks pretty horrendous compared to the still taken with a different camera (I wouldn’t expect a VGA image of something so close to look so bad)!

And second: it appears that I can set Pixy’s capture mode to reduce the sampling rate and capture images at 720p resolution, but will this improve the object detection? The videos on the Kickstarter page make the object detection look really robust, and they mention using HSV for better performance in low-light conditions; however, my findings show that low-light conditions severely impact the detection of objects/colors.

Pixy’s image sensor is 1280x800 native. At this resolution it supports a 25 Hz framerate, and at 640x400 it supports 50 Hz. We chose the second mode because we want a higher framerate, and the memory and CPU requirements are lower.

When running in PixyMon, the “Raw” and “Cooked” modes are actually 320x200, which is fairly low-res. We need to downsample from 640x400 because Pixy doesn’t have enough contiguous RAM to grab a full 640x400 frame. But that’s just when grabbing raw/cooked frames. When Pixy is processing color blobs, there is no downscaling. Pixy does this by pre-processing in the M0 core and eliminating lots of “uninteresting” pixels; this allows us to process the entire frame with a small amount of memory (we don’t need to store each pixel). It also makes things fast. The upshot is that when you are viewing in PixyMon, you are getting a lower-res image than when Pixy is actually processing frames. There is also a blockiness in PixyMon that’s introduced because we’re downsampling a Bayer pattern.
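To give a rough idea of the pre-processing (a simplified sketch of the general run-length technique, not our actual firmware code; the signature test here is a made-up stand-in), each incoming row can be reduced on the fly to runs of pixels that match a color signature, so only the runs need to be stored:

#include <cstdint>
#include <vector>

struct Run { uint16_t row, startCol, endCol; };

// Crude stand-in for a real color signature test (hypothetical).
static bool matchesSignature(uint8_t r, uint8_t g, uint8_t b) {
    return r > 150 && g < 100 && b < 100;   // "reddish enough"
}

// Reduce one streaming row of RGB pixels to run-length segments of
// matching pixels, so the full frame never needs to be buffered.
void encodeRow(const uint8_t* rgb, uint16_t width, uint16_t row,
               std::vector<Run>& runs) {
    int start = -1;
    for (uint16_t x = 0; x < width; ++x) {
        bool hit = matchesSignature(rgb[3 * x], rgb[3 * x + 1], rgb[3 * x + 2]);
        if (hit && start < 0) start = x;                 // a run begins
        if (!hit && start >= 0) {                        // a run ends
            runs.push_back({row, uint16_t(start), uint16_t(x - 1)});
            start = -1;
        }
    }
    if (start >= 0)                                      // run reaches row end
        runs.push_back({row, uint16_t(start), uint16_t(width - 1)});
}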

As for low-light performance: it sometimes requires some tweaking. You should check out this page.
http://cmucam.org/projects/cmucam5/wiki/Some_Tips_on_Generating_Color_Signatures

I have a follow-up question on this topic.

I am running Pixy through the UART and still get a resolution of 320x200 with a 50 Hz frame rate.
It says above that I would get 640x400 when not running PixyMon. Why do I not get 640x400?

Also, I wouldn’t mind at all downgrading the frame rate to 25 Hz for better pixel resolution.
I understand that the RAM might be too small, but I only need the full resolution in the x direction (1280 px),
so I can cut the image size in the y direction to compensate.
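To put rough numbers on the memory question (a back-of-the-envelope sketch, assuming one byte per raw Bayer pixel; I haven’t verified Pixy’s actual SRAM layout):

#include <cstdio>

int main() {
    const int width = 1280;
    const int rowOptions[] = {800, 400, 266};   // full, half, ~1/3 of the rows
    for (int rows : rowOptions) {
        std::printf("%4d x %3d -> %7d bytes (~%.0f KB)\n",
                    width, rows, width * rows, width * rows / 1024.0);
    }
    // 1280 x 800 -> 1,024,000 bytes (~1000 KB)
    // 1280 x 266 ->   340,480 bytes (~333 KB), still more than the
    // 640 x 400 frame (256,000 bytes) that already doesn't fit in
    // contiguous RAM, so cropping rows alone may not be enough
    // without streaming the data instead of buffering it.
    return 0;
}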

What are the possibilities for achieving this? And what are the component limitations?
Is it straightforward to edit this in the firmware?

Best
Leo

Hello Leo,
I believe you don’t get 640x400 over UART because of the way the color blob algorithm does color interpolation. But note that there is no cropping taking place; you are still getting the full field of view. Do you need 1280 resolution because your application requires this level of precision?
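To illustrate why a Bayer sensor halves the effective color resolution, here is a simplified demosaic sketch (the general idea, not our actual firmware code): each 2x2 RGGB cell collapses into one RGB pixel, so 1280x800 becomes 640x400.

#include <cstdint>

// Simplest-possible demosaic: collapse each 2x2 RGGB Bayer cell into
// one RGB pixel, halving resolution in both directions.
void bayer2x2ToRgb(const uint8_t* bayer, int w, int h, uint8_t* rgb) {
    // Assumes RGGB layout:  R G
    //                       G B
    for (int y = 0; y < h; y += 2) {
        for (int x = 0; x < w; x += 2) {
            uint8_t r  = bayer[y * w + x];
            uint8_t g1 = bayer[y * w + x + 1];
            uint8_t g2 = bayer[(y + 1) * w + x];
            uint8_t b  = bayer[(y + 1) * w + x + 1];
            uint8_t* out = rgb + 3 * ((y / 2) * (w / 2) + x / 2);
            out[0] = r;
            out[1] = uint8_t((g1 + g2) / 2);   // average the two greens
            out[2] = b;
        }
    }
}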

Edward

The 1280 resolution is important to me. The FOV I have already accounted for by changing the lens.
What are my options?

Leo

Hello Leo,
Currently there aren’t many, other than modifying the firmware. If you can describe how your application requires higher resolution, it will help us determine what features and efforts to put on the roadmap.

Edward

Dear Edward,
I am going to use Pixy as a positioning sensor, as part of a feedback loop. It will track the position of an object in a magnetic field, which will be adjusted to correct for any shifts in location.
The complete setup will include an arrangement of 8 Pixy cameras, tracking 4 different locations on the object.
We will use a redundancy of 2 cameras for each location in case of failure, as these cameras will be used in a pressurized environment with limited access.

As we don’t want to allow for any large shifts, I have replaced the lens with one with a narrower FOV, which in combination with the full pixel resolution would give us great spatial sensitivity.
The design parameters are such that I need to be able to shift this object a couple of inches in the x direction while the y position is held constant.
With that in mind, I don’t need a large FOV in the y direction (I can discard the top half or 2/3 of the pixel rows), while in the x direction I need the full FOV and pixel resolution.
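To put numbers on that sensitivity claim (a quick sketch with made-up values for the FOV and working distance, since the real ones depend on the final setup):

#include <cmath>
#include <cstdio>

int main() {
    const double pi     = 3.14159265358979;
    const double fovDeg = 20.0;    // assumed horizontal FOV after the lens swap
    const double distMm = 500.0;   // assumed distance from camera to object
    // Width of the field of view at the object plane (pinhole model).
    const double widthMm = 2.0 * distMm * std::tan(fovDeg / 2.0 * pi / 180.0);
    const int resolutions[] = {320, 640, 1280};
    for (int px : resolutions) {
        // e.g. ~0.55 mm/px at 320, ~0.14 mm/px at 1280 with these values
        std::printf("%4d px across -> %.2f mm per pixel\n", px, widthMm / px);
    }
    return 0;
}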

The time constant for my feedback system will be about 1-2 Hz; a 50 Hz frame rate is therefore unnecessary, if not a waste of compute capacity for the rest of the system.

So my design suggestions would be:

  • reduce the repetition rate
  • reduce the pixel lines in the y direction
  • increase the pixel resolution in the x direction

I will not utilize the ‘color code’ function, and I will only track 2 colors. There will only be one ‘blob’ of the first color and up to 5 objects of the second.
(If you impose a sinusoidal magnetic field on top of the steady field, the object will start to rotate. This is what I will observe with the second color.)

I hope these additional details will help you guide me in editing the firmware.
I’ll be happy to share some photos when we have figured everything out.

Best
Leo

Hello Leo,
It sounds interesting! Maybe as a workaround, you could use averaging (for noise reduction) and a narrower FOV than you would normally use.
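For example, something along these lines on an Arduino (a rough, untested sketch; the frame count and signature number are arbitrary):

#include <SPI.h>
#include <Pixy.h>

Pixy pixy;

void setup() {
    Serial.begin(9600);
    pixy.init();
}

void loop() {
    const int kFrames = 16;          // averaging window, tune to taste
    long sumX = 0;
    int  hits = 0;
    for (int f = 0; f < kFrames; ++f) {
        uint16_t n = pixy.getBlocks();             // blocks detected this frame
        for (uint16_t i = 0; i < n; ++i) {
            if (pixy.blocks[i].signature == 1) {   // the single signature-1 blob
                sumX += pixy.blocks[i].x;
                ++hits;
                break;
            }
        }
    }
    if (hits > 0) {
        // Averaging across frames trades update rate for noise reduction,
        // which should suit a slow (1-2 Hz) control loop.
        Serial.println((float)sumX / hits);
    }
}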

I’ll pass this along. Thanks for the description!

Edward