
Ideas to improve the first use experience

Dear Charmed Labs,

I just got my Pixy with pan/tilt mechanism, assembled it, and plugged it into my laptop.
After that, pain and suffering!

Here are a few points where I think there is room for improvement:

  • Fixing the compilation under Linux (I created a pull request https://github.com/charmedlabs/pixy/pull/11)

  • Being more transparent about the capabilities of the device. Finding the technical specs in the wiki took a while, and I found no discussion of what the NXP LPC4330 can or cannot do in “human words”. Nowadays you risk people expecting “cellphone computing capabilities everywhere”, which the NXP does not provide (an example of this is issue http://cmucam.org/boards/9/topics/2758 ).

  • Providing better auto-balance. My first ten attempts to get the camera to track anything were failures (trying different objects, playing with the settings, changing the room lighting), even after following the recommended tips (http://cmucam.org/projects/cmucam5/wiki/Some_Tips_on_Generating_Color_Signatures).
    It seems that I am not the first one to suffer from this issue: http://cmucam.org/boards/8/topics/2858, http://cmucam.org/boards/9/topics/4111, http://cmucam.org/boards/8/topics/4214.
    The few good results I got were obtained either by controlling the background (i.e. the white-wall scenario) or by playing quite a bit with the camera parameters.
    I think it would make sense to use the host CPU to run a “sophisticated color balancing” step, so as to maximize the discriminating power of the selected signature(s).
    The point is to have a much easier and more effective way to “tune the camera”, rather than having to open a menu and play with numbers hoping for the best.

  • The reflective surface of the Pixy box looks nice, but does not work well for tracking the color codes printed on it. Simply tilting the box makes it reflective (saturated white) and the tracking is lost. For me this is a complete demonstration failure. Either change the box paper, provide matte paper with the color codes, or do not include them at all. As it is right now, I fail to see how this demo could work reliably (if there is a trick, please let me know).
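To illustrate the kind of host-side signature tuning I mean: a tool could score how well a candidate signature separates the selected object from the rest of the frame, and then search the camera parameters for the settings that maximize that score. Here is a minimal sketch of such a separability score based on hue-histogram overlap, written in NumPy purely for illustration (the function name and everything else here are my own invention, not existing Pixy code):

```python
import numpy as np

def signature_separability(frame_hues, region_mask, bins=32):
    """Score how well the selected region's hue distribution stands out
    from the rest of the frame (1.0 = fully distinct, 0.0 = identical).

    frame_hues: 2-D array of hue values in [0, 1).
    region_mask: boolean array of the same shape, True inside the
    user-selected region.
    """
    inside = np.histogram(frame_hues[region_mask], bins=bins, range=(0.0, 1.0))[0]
    outside = np.histogram(frame_hues[~region_mask], bins=bins, range=(0.0, 1.0))[0]
    # Normalize to probability distributions (guard against empty regions).
    inside = inside / max(inside.sum(), 1)
    outside = outside / max(outside.sum(), 1)
    # Histogram overlap is the sum of per-bin minima; separability is its complement.
    return 1.0 - np.minimum(inside, outside).sum()

# Toy frame: left half is background (hue ~0.1), right half is the object (hue ~0.6).
frame = np.full((4, 8), 0.1)
frame[:, 4:] = 0.6
mask = np.zeros((4, 8), dtype=bool)
mask[:, 4:] = True
print(round(signature_separability(frame, mask), 2))  # → 1.0
```

A score near 1.0 means the selected hues barely occur in the background; a score near 0.0 means the signature will fire all over the frame. That one number is much easier to optimize automatically than asking the user to juggle raw parameters.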

Overall (for me) the out-of-the-box tracking is a let-down. Since this is put forward as part of the product (http://youtu.be/J8sl3nMlYxM), I think it is important to put some effort into improving it.

Maybe going for something a bit more sophisticated than simple color matching would do the trick.
Is the NXP powerful enough to apply a correlation filter (to search for a specific pattern)?
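To make the question concrete, the operation I have in mind is a plain normalized cross-correlation: slide a template over the image and keep the position with the highest normalized score. Sketched here in NumPy only to show the operation; this says nothing about whether it fits in the LPC4330's cycle budget (an embedded version would presumably be fixed-point C over a downscaled image):

```python
import numpy as np

def normxcorr(image, template):
    """Return the (row, col) of the best normalized cross-correlation
    match of `template` inside `image` (both grayscale 2-D arrays)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            # Score is in [-1, 1]; flat windows (zero variance) score 0.
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy example: find a small diagonal pattern hidden in a dark image.
tmpl = np.array([[1.0, 0.0], [0.0, 1.0]])
img = np.zeros((6, 6))
img[3:5, 2:4] = tmpl
print(normxcorr(img, tmpl))  # → (3, 2)
```

The cost is roughly W·H·w·h multiply-accumulates per frame, which is why I am asking whether the NXP has the headroom for it, even at reduced resolution.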

Looking forward to seeing the Pixy camera software evolve (both on the Charmed Labs side and in the community).

Hi Rodrigo,

Thanks for your input! We love to get feedback from users, so this is great to see.

I have a few comments/clarifications:

  • I believe there was a misunderstanding in this post (http://cmucam.org/boards/9/topics/2758) on Pixy’s capabilities. Please see my latest post for clarification.
  • Improving the auto-balance and color algorithm are our top priority right now. We sent out a survey a while back and improving the algorithm was the top choice from those who responded. Typically the default configuration parameters work well for most conditions, but your lighting can really change that.

Please let me know if you have any more comments or questions. Keep the feedback coming!

Scott