
Possible to make Pixy expect only white dot/blob objects? (IR objects seen through IR filter)

I was planning to use Pixy with a different M12 lens, an IR pass filter (allowing only IR, no visible light), and the “objects” Pixy is intended to see are IR LEDs.

They will show up as white dots, and Pixy will see nothing else.

The wiki says that object teaching is hue-based, so black, white, and grey objects are no good. This unfortunately sinks my application idea (because an IR-only Pixy will see only in black & white…)

Is there any way to make Pixy intentionally expect only white “blob” objects? Pixy will literally never see anything else.

It’s considerably dumber than Pixy’s original application but I’m hoping it can be done.

Hi Donald,

Since the current algorithm works best on objects with a distinct hue, you’d be better off writing new LUT generation code that works for your application. You’d be able to use our connected components code, so really all you need to do is look at how we generate LUTs, and alter it to detect the ranges of white that you care about.

Does this make sense? Good luck!

Scott

Pretty sure I could handle the fiddling required, but I’m going to need some help getting started on that (new LUT generation, etc.). Even the term LUT is new to me.

Do I modify Pixy’s firmware to do this? I just received my Pixy and got it working, I certainly don’t have a development environment for whatever processor Pixy uses but I’d be willing to give it a shot.

Point me to where I can find the LUT generation code and I’ll see if I can take it from there. Right now I don’t even know where I’d look.

Could you also just add a red filter, and use red LEDs… or does it need to be invisible to humans?

Hi Donald,

Sorry for not getting back to you sooner. The LUT generation code is “here”:https://github.com/charmedlabs/pixy/blob/d50c8b7df36a95f61d089f8e56b31b76aecc329e/common/colorlut.cpp.

Hope this helps.

Scott

I am also interested in this application (detecting black/white blobs).

Conceptually, I guess it would be easy to trick Pixy into using luminance rather than hue to define objects. Basically, only one color signature would be available, and it would be defined by some luminance threshold (the threshold could, for example, be calculated via Otsu’s method: http://en.wikipedia.org/wiki/Otsu%27s_method).
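For illustration, this is roughly how I imagine computing such a threshold from a 256-bin brightness histogram (just a sketch of the idea; this isn’t Pixy code and all the names are made up):

<pre>
#include <cstdint>

// Sketch of Otsu's method: pick the brightness threshold that maximizes
// the between-class variance, given a 256-bin histogram of pixel
// brightness. (Illustrative only, not Pixy firmware code.)
static int otsuThreshold(const uint32_t hist[256])
{
    uint64_t total = 0, sumAll = 0;
    for (int i = 0; i < 256; i++) {
        total += hist[i];
        sumAll += (uint64_t)i * hist[i];
    }

    uint64_t sumBelow = 0, countBelow = 0;
    double bestVar = 0.0;
    int bestT = 0;

    for (int t = 0; t < 256; t++) {
        countBelow += hist[t];
        if (countBelow == 0)
            continue;                        // no pixels below yet
        uint64_t countAbove = total - countBelow;
        if (countAbove == 0)
            break;                           // no pixels above, done

        sumBelow += (uint64_t)t * hist[t];
        double meanBelow = (double)sumBelow / countBelow;
        double meanAbove = (double)(sumAll - sumBelow) / countAbove;
        double diff = meanBelow - meanAbove;
        double var = (double)countBelow * (double)countAbove * diff * diff;
        if (var > bestVar) {
            bestVar = var;
            bestT = t;
        }
    }
    return bestT;   // pixels brighter than bestT would count as "blob"
}
</pre>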

Unfortunately, I don’t have enough experience with C++ to understand the colorlut code without any documentation. If this is indeed as simple as I hope, could you point us to the part of the code that we’d have to change?

Another question: Is there a way to get data from Pixy into Matlab (close to real time)? I.e. some function that I can call from Matlab that polls Pixy via USB and returns a list of the detected objects…

Thanks a lot!

Matt

Hi Matt,

We don’t yet have any libraries for reading Pixy data directly from USB. We’re working on it though.

Unfortunately the model generation code is fairly large (most of the colorlut.cpp file, plus parts of other source files), so it’s not so easy to just swap out the current LUT generation code. This is another thing on our to-do list that we are trying to improve, so that in the future we can have multiple model generation algorithms and allow you to choose between them.

Sorry it’s not more straightforward, but for now you’ll have to learn much of what’s in the colorlut.cpp code.

Scott

Scott, would you or someone else on the team be willing to do a firmware mod like this for me as a small contract (i.e. paid) job? I’d be willing to pay for a small bit of development so Pixy has slightly customized firmware that can do exactly what I need, and you (or someone you know) surely has the whole development toolchain ready to do so.

If you or someone else on the team is interested, send me an email to rb [@] aeinnovations.com and we’ll talk.

(I’d have private messaged or emailed you instead but I can’t find any contact info or PM function here - I also emailed the general Pixy mailbox last week with this query but no response. Thanks!)

Hi again,

I’ve been working through the code in the video folder (https://github.com/charmedlabs/pixy/tree/master/device/video) and it’s becoming clearer, but I have one major question. It would be great if you could help me with this:

If I understand correctly, then @m_lut@ is a 256-by-256 table of color values which covers a space whose axes are the color properties “u” (red minus green1 intensity) and “v” (blue minus green2 intensity). This is what @ColorLUT::map@ does in @ColorLUT::generate@. @m_lut@ maps each point in that color space to one of up to seven color signatures, as determined in @ColorLUT::add@.
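To check my understanding, this is roughly the mental model I have of the lookup, written as plain C++ (the names and the exact bit-shifting are my guesses, not the actual Pixy code):

<pre>
#include <cstdint>

// My mental model of the color LUT (names/shifts are guesses):
// a 256x256 table indexed by the two color axes, returning a signature
// number (0 = no signature, 1..7 = taught signatures).
static uint8_t lut[256 * 256];

uint8_t classifyPixel(int r, int g1, int g2, int b)
{
    int u = r - g1;                        // "red minus green1"
    int v = b - g2;                        // "blue minus green2"
    // u and v are in -255..255; shift them into 0..255 so the pair
    // can index the 256x256 table
    unsigned ui = (unsigned)(u + 256) >> 1;
    unsigned vi = (unsigned)(v + 256) >> 1;
    return lut[(ui << 8) | vi];            // 0 if no signature matches
}
</pre>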

My question: Where is @m_lut@ evaluated to determine which pixel in an image frame belongs to which signature?

This is my progress toward answering this question so far: @Blobs::blobify()@ seems to assemble “blobs” according to color, but it gets its data through the @Blobs::unpack()@ function, which provides @SSegments@ that already know which model (color signature) they are. The data in the @SSegments@ seems to come from @qval@, which comes from a queue (I don’t understand what that is; presumably a buffer memory for the incoming frame data?). So how does the information about color models get into the @SSegments@/queue in the first place? Where is @m_lut@ evaluated?

I should say that I have no experience with C/C++, so my understanding and terminology may be off. Please let me know if it’s clear that I misunderstood something.

Thanks,
Matt

Could someone quickly let me know if I’m on the right track, and where the m_lut is evaluated? I’d really appreciate the help! Thanks!

Hey Matt,
I’m not a C/C++ expert, but I would like to bounce some ideas off of you. ([email protected]) I’m interested in solving the same problem, and I have a simple solution in mind.
Thomas

Donald - Sorry, but I won’t have time for anything like that for at least a few weeks, which I’m guessing is too late for you. I’ll ask around and see if anyone else can help you out. Also, sorry for my delayed response, somehow I missed your response from last week.

Matt - You seem to have everything correct. I can see why you are confused about where m_lut gets used. It’s actually used on the m0 core (the video acquisition and processing side). The m0 core takes in frame data from the image sensor, uses m_lut to determine if the incoming data is part of a signature, and stores the results as run-lengths in shared memory. The m4 side then accesses that shared memory to see what blocks are currently in frame. Does this make sense?
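In pseudo-C, the M0 side conceptually does something like this (just a rough illustration of the flow, not the actual code, which is hand-optimized assembly; the names and structures here are made up):

<pre>
#include <cstdint>
#include <cstddef>

// Hand-wavy sketch of the M0 loop (made-up names, not the real code):
struct Run {
    uint16_t startCol;   // column where the run begins
    uint16_t length;     // number of consecutive pixels
    uint8_t  signature;  // which taught signature the run belongs to
};

// Classify each incoming pixel with the LUT result and collapse
// consecutive pixels of the same signature into run-lengths that get
// written to shared memory for the M4 side to pick up.
size_t encodeRow(const uint8_t *lutResults, uint16_t width,
                 Run *sharedMem, size_t maxRuns)
{
    size_t n = 0;
    uint16_t col = 0;
    while (col < width && n < maxRuns) {
        uint8_t sig = lutResults[col];
        uint16_t start = col;
        while (col < width && lutResults[col] == sig)
            col++;
        if (sig != 0) {                      // 0 = no signature, skip it
            sharedMem[n].startCol = start;
            sharedMem[n].length = (uint16_t)(col - start);
            sharedMem[n].signature = sig;
            n++;
        }
    }
    return n;  // the M4 side later reads these runs and assembles blobs
}
</pre>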

Hope this helps.

Scott

Thanks Scott. I kept working on it and now have even more questions:

If I understand correctly, the lut evaluation happens in file rls_m0.c line 330:
@LDRB r1, [r2, r5] // load lut val@
r5 contains an index made from the red-minus-green and blue-minus-green pixel values in the code above line 330.

Is this really the relevant code? I’m confused because I tried changing the code in rls_m0.c, but it didn’t seem to have an effect on the way Pixy recognizes objects. Concretely, I tried changing line 234 from @SUBS r3, r4@ to @ADDS r3, r4@ and line 323 from @SUBS r6, r5@ to @ADDS r6, r5@, with the goal of changing the color space that the lut cares about from a hue-only space to a luminance-only space. I compiled everything as described here: http://cmucam.org/boards/9/topics/3191. However, despite the changes in rls_m0.c, teaching Pixy new color signatures worked just as usual. I also tried some other changes to rls_m0.c, with the same lack of effect. If this isn’t the place where the lut is evaluated, where is it then? It would be great if you could point me directly to the lines of code.

I made another confusing observation when I changed colorlut.cpp. As a test, I messed with ColorLUT::add. When I did this, I couldn’t assign signatures in Pixymon anymore – the signature assignment was broken, as expected from my crude changes to the code. Confusingly, however, I could still teach Pixy new signatures when I used the hardware button on the Pixy board, rather than Pixymon. That worked just as if I hadn’t changed the firmware. To check if I was modifying the right code, I used cprintf in ColorLUT::add to print a confirmation to Pixymon. The confusing thing is that the confirmation was always printed when I tried teaching a new signature, no matter whether I did it through Pixymon or using the Pixy hardware button. So my modified ColorLUT::add code was executed in both cases, but didn’t actually seem to affect the m_lut when I used the Pixy hardware button.

So, in summary, I have the following questions:

Where (line of code) is m_lut evaluated to assign a signature to a freshly sampled image pixel?

What happens when I use Pixymon rather than Pixy’s hardware button to teach signatures? Is different image processing code used when Pixy is connected to Pixymon?

My first-pass idea for implementing luminance-based object detection is to simply change the definitions @u = red-green@ and @v = blue-green@ to @u = red+green@ and @v = blue+green@. It might not be perfect, but it’s the simplest approach I can think of. Could you point me to the functions I have to change to make that happen? I thought all I need to change is some parts of colorlut.cpp and the m_lut evaluation in rls_m0.c, but what I tried (see above) didn’t work. I’m sure it would be easy for someone familiar with the code to give me an outline of what I have to do, and save me days or weeks of trial and error.
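Just to make the intended change concrete, this is the kind of substitution I mean (illustrative only, not actual Pixy code):

<pre>
// Current (hue-like) axes:
int u_hue(int r, int g) { return r - g; }   // range -255..255
int v_hue(int b, int g) { return b - g; }

// Proposed (luminance-like) axes; the sum would need re-centering
// (e.g. subtracting a constant) to stay in a comparable numerical range:
int u_lum(int r, int g) { return r + g; }   // range 0..510
int v_lum(int b, int g) { return b + g; }
</pre>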

Thanks,
Matt

Hi Matt,
Nice! Sounds like you’re making progress: you’ve got a development cycle working, which isn’t simple at this point :)

Let’s focus first on why your changes on the M0 side don’t seem to be taking.

You’ve found the relevant code. Modifying it will definitely affect how the color LUT is used (changing the SUBS to ADDS, for example).

In some cases, however, this won’t affect things too much, like if you have a perfectly red object. So I’m not so sure your code isn’t being executed, and I’d be bolder and modify the code so that it breaks in some obvious way. (It won’t be difficult!)

The M0 code gets compiled into a binary, converted into a C array, and reflected in the file “m0_image.c”, which gets included in main_m4.cpp. Make sure your m0_image.c file is updated before you compile the video project. Then you can be sure that your code is in there.

thanks,
–rich

Progress report:

I guess I didn’t compile main_m4.cpp when I made the changes to rls_m0.c, or didn’t interpret the results right. Anyway, it now works and I can do very basic luminance-based blob detection. The code is here: https://github.com/mjlm/pixy (This is just a snapshot of the work in progress, not really intended for publication. Most changes are comments I added for my own understanding.)

In summary, I hard-coded pixels above a certain threshold to be signature 1, and all other pixels to be signature 0 (i.e. no signature).

This is what I did (not sure if all the changes are necessary; there’s a rough code sketch after the list):

  • In rls_m0.c, I changed from r-minus-g to r-plus-g-minus-127 (same for b/g). This changes from a hue-representing to a luminance-representing space without changing the numerical range.
  • In ColorLUT::map, I made corresponding changes.
  • In ColorLUT::add, I say that pixels with (u>50 || v>50) are signature 1, and 0 otherwise. This is the luminance threshold (value picked arbitrarily for now).
  • In cc_loadLut, I load only model 1, no others.
  • Plus maybe other changes that I can’t remember now.
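In code, the core of what I hard-coded looks roughly like this (simplified, with made-up names; the actual diff is in my fork linked above):

<pre>
#include <cstdint>

// Simplified sketch of my hard-coded luminance thresholding. u and v
// are now luminance-like values, e.g. u = r + g - 127, v = b + g - 127.
static uint8_t classifyLuminance(int u, int v)
{
    const int threshold = 50;   // arbitrary for now
    if (u > threshold || v > threshold)
        return 1;               // bright pixel -> signature 1
    return 0;                   // everything else -> no signature
}

// Filling the LUT then amounts to evaluating the threshold for every
// (u, v) cell instead of running the normal hue-based model generation.
static void fillLut(uint8_t *lut /* 256*256 entries */)
{
    for (int ui = 0; ui < 256; ui++)
        for (int vi = 0; vi < 256; vi++)
            lut[(ui << 8) | vi] = classifyLuminance(ui - 128, vi - 128);
}
</pre>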

This now seems to work, i.e. Pixy will detect bright spots as signature 1 if the relative brightness of the background and blob happens to be appropriate for the threshold (otherwise it detects nothing or the entire field of view). However, this only works in the “default program” that is run when Pixy is switched on. So when you’re using Pixymon, it only works right after bringing up Pixymon, or after clicking Action->Run default program. The screen will stay black, but Pixymon will display the bounding boxes of any detected blobs. When viewing the “cooked” video in Pixymon, detection doesn’t work.
Edit: Conveniently, it is possible to effectively change the threshold through Pixymon by changing the “brightness” setting in the Configure parameters->General dialog.

So these are the next points I want to work out (input very much appreciated):

  • How do the “programs” work? Why does detection not work in the cooked video in Pixymon? Does it use different code than Pixy internally?
  • Automatic exposure control: Does Pixy adapt exposure dynamically, or does that just happen in Pixymon for display? If exposure is actually adapting, how can I switch that off? I will have completely static illumination and adapting exposure will mess with my thresholding.
  • Somewhat off-topic: I want to reduce the latency for getting the blob coordinates into Matlab. Currently, my connection is Pixy->Arduino->Serial Port (USB)->Matlab. I wonder if I can skip the Arduino. What is Pixy’s internal latency from grabbing the frame to outputting the blob coordinates through the serial interface?

I’m sure more questions will follow.

Hi Matt,

Wow, sounds like you’ve made some great progress!

To answer your questions:

  • Yes, Pixymon uses different code than Pixy for block detection. It’s the same algorithm, just different code. We did it this way since it was easier to re-compute the run-lengths on the host side instead of sending them over USB. However, this may change in the near future.
  • Yes, you can enable/disable Auto Exposure Compensation (AEC) on Pixy using the cam_setAEC command in Pixymon (0=disable, 1=enable). You can find more commands like these by typing ‘help’ in Pixymon.
  • The blocks are computed as run-lengths as the images are acquired from the sensor, so there is no latency there. After the frame is finished, the blocks are then sent over one of the communication interfaces. It’s hard to say how long this takes since it depends on the communication method, the number of blocks, etc. Also, I haven’t profiled this part of the code, so I can’t give you exact numbers. You can roughly assume it takes 1/50th of a second to start sending the blocks (or 1/25th at the higher resolution).

Hope this helps.

Scott

Interestingly the wiki now says “Yes, the pixy can track IR light” ( http://www.cmucam.org/projects/cmucam5/wiki/Will_Pixy_tracksense_IR_light )

Has anything changed? I had understood from way back that Pixy couldn’t properly see IR because IR appears as colorless white/grey and Pixy works on colors (and IR “objects” seen by the camera through an IR filter are all the same colorless grey/white.)

As a side note, take a look at this Kickstarter https://www.kickstarter.com/projects/258964655/ir-lock-infrared-target-tracking-for-drones-and-di
It uses Pixy and an IR filter to track IR blobs.

The fact that it is only about a week old and already over 300% funded demonstrates the strong interest in this apparently simple feature (compared to Pixy’s normal functions.)

!http://i.imgur.com/yWPonFKl.png!

Hi Donald,

Pixy can only see IR light if you replace the standard lens with one that does not filter out IR. The Kickstarter project you referenced uses a special filter to allow IR light into the image sensor.

Scott

Ah, but will Pixy see and track (colorless) IR blobs with only a changed lens/filter? That’s the key.

The impression I got from past discussions in this thread is NO – Pixy’s software only works with colors and couldn’t be expected to track “colorless” IR blobs, and would need firmware modification in order to do so. Can you please clarify, Scott?

An IR-passing (and visible-light-blocking) lens/filter is within my workshop’s abilities. Setting up a Pixy firmware development environment/toolchain and tweaking Pixy’s code is not.

Hi Donald,

This is Thomas from the IR-LOCK project. We are working on firmware that helps Pixy reliably track IR blobs (essentially, ‘white blobs’). It seems like a simple problem, but we have some serious improvements to make before the first release in December:

  • Allowing the user to easily select an Exposure Value based on the operating environment
  • Disabling auto-exposure control (while also selecting a good Exposure Value)
  • Not ‘breaking too many things’ in the process of modifying the code; essentially, debugging
  • Enhancing performance based on experiments with our IR hardware (lens, filter, IR markers) and modified firmware
  • … a long list of other features we would like to add

I am super-busy nowadays, but I will try to reply as soon as possible if you have questions about the IR-tracking capabilities of Pixy: thomas a t irlock d o t com

Best,
Thomas