Thank you Robert! Glad you like them!
Pixymon is developed in Qt Creator; I used the free open-source version, which is more than sufficient. Here’s a link: http://www.qt.io/download-open-source/
When installing, make sure to also install the MinGW compiler (you’ll be prompted during the install).
The language used in Qt is C++, but Pixymon uses the Qt libraries for its types and mechanisms (QString instead of std::string, QWidgets for UI elements, signals and slots with emit for event handling and threading, etc.). The Qt library presumably exists to make cross-platform compiling easier, but that’s not important for now; just be aware of which types you’re working with.
I will now quickly walk you through part of the Renderer class (renderer.cpp) from my modified Pixymon; the line numbers specified below correspond to this file: https://github.com/Pravuz/Pixymon_modified/blob/master/host/pixymon/renderer.cpp
Note that modifying the renderer class is not an elegant solution (the elegant solution would be to make a new module based on monmodule.cpp, but never mind that for now).
If you’re completely green when it comes to programming, this might be a bit too advanced, and I would recommend checking out this beginner’s intro to programming in C++: https://www.youtube.com/playlist?list=PLF9B0522C7BC3C1C2 or some other C++ YouTube guide.
Beginning with the pixels: each image in the video stream is handled in the renderBA81 method (line 296).
Lines 336 and 337 are part of my algorithm, which again isn’t too important for now, but you should compare my version of renderBA81 with the original
(https://github.com/charmedlabs/pixy/blob/master/src/host/pixymon/renderer.cpp, line 198).
Regardless, each pixel is handled individually within the nested for-loops. The first order of business is the interpolation (line 332), which is of no concern here; it just has to happen because of how the data from Pixy’s image sensor is laid out. After the interpolation, you’re left with RGB values for the pixel, which I convert to greyscale (line 338) and use for comparison in order to detect a change (in other words: motion; lines 339 to 354).
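The greyscale conversion and per-pixel comparison can be sketched in plain C++ (without the Qt types); note that the luminance weights and the threshold below are illustrative assumptions, not the exact values from my code:

```cpp
#include <cstdint>
#include <cstdlib>

// Convert an interpolated RGB pixel to greyscale using the common
// luminance weights (illustrative; the exact weighting in my code may differ).
inline uint8_t toGrey(uint8_t r, uint8_t g, uint8_t b)
{
    return static_cast<uint8_t>((299 * r + 587 * g + 114 * b) / 1000);
}

// Compare a pixel against the same pixel in the previous frame; a
// difference above some threshold is treated as a change, i.e. motion.
inline bool isMotion(uint8_t grey, uint8_t prevGrey, uint8_t threshold = 25)
{
    return std::abs(static_cast<int>(grey) - static_cast<int>(prevGrey)) > threshold;
}
```

Doing the comparison on a single greyscale byte instead of three colour channels keeps the per-pixel work cheap, which matters when you run it on every pixel of every frame.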
The RGB value for each pixel (each channel an 8-bit integer) is then combined into one 32-bit integer (line 359) and added to the image which will be displayed in the program (via videowidget.cpp).
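Combining the three 8-bit channels into one 32-bit word is just bit-shifting; here is a minimal sketch (the 0xAARRGGBB layout with an opaque 0xFF alpha byte matches Qt’s QRgb convention, but treat the exact layout as an assumption):

```cpp
#include <cstdint>

// Pack 8-bit R, G and B channels into a single 32-bit 0xAARRGGBB word,
// with the alpha byte forced to fully opaque (0xFF).
inline uint32_t packRgb(uint8_t r, uint8_t g, uint8_t b)
{
    return (0xFFu << 24)
         | (static_cast<uint32_t>(r) << 16)
         | (static_cast<uint32_t>(g) << 8)
         |  static_cast<uint32_t>(b);
}
```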
When all the pixels are processed, the image is emitted via the signal called ‘image’ on line 368. This signal is then picked up by the videowidget.cpp class (the connection is established on line 56).
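If Qt’s signal/slot mechanism is new to you: a signal is essentially a list of registered callbacks that emit invokes. A rough plain-C++ analogy, just to illustrate the idea (this is not how Qt actually implements it):

```cpp
#include <functional>
#include <vector>

// A toy "signal": slots (callbacks) are connected to it, and emitting
// the signal calls every connected slot with the given value.
template <typename Arg>
struct Signal {
    std::vector<std::function<void(const Arg&)>> slots;

    void connect(std::function<void(const Arg&)> slot)
    {
        slots.push_back(std::move(slot));
    }

    void emitSignal(const Arg& value)
    {
        for (auto& slot : slots)
            slot(value);
    }
};
```

In Pixymon, the Renderer emits the finished frame this way and a slot in VideoWidget receives it; QObject::connect is what wires the two together.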
Now, if you take a look at the header for the Renderer class (https://github.com/Pravuz/Pixymon_modified/blob/master/host/pixymon/renderer.h),
you can see that I’ve made a few signals (lines 82 to 85) and slots (which are used by the signals; lines 105 to 109).
What this essentially does is split the work up into different threads, so that the rest of my algorithm doesn’t cause the video to lag.
So the next part of my algorithm, which is to separate, filter and process the detected objects, is done in the background.
Hopefully this short intro wasn’t too confusing or discouraging. I’m happy to assist where I can, as long as I have the time.