Hello,
I am involved in a project with a Raspberry Pi + Pixy, connected via USB, mounted on a robot. The Pi has WiFi. We have prior experience with the Pixy.
On the robot, the Pixy will see one or more groups of objects in different arrangements. Each group will have one or more objects, and every object, in every group, is the same shape and color.
The Pixy is trained to recognize one of the objects.
We would like to have a remote computer that has WiFi access to the Pi show the real-time Pixy camera frames along with the one or more groups of objects that Pixy recognizes.
Basically, the remote computer would have a display that looks like PixyMon. The remote computer is Windows, but it doesn’t need to be; I’m not sure whether another OS on the remote computer would make this any easier, but I suspect it wouldn’t.
This remote computer display would be viewed by an operator. We would like the operator to select a group from the display and then send the object data for that selection to the robot so that the robot can navigate to that group.
Since every object is the same color / hue, all the blocks carry the same signature, so there would seem to be no way to distinguish one group (or block) from another.
Therefore, my first idea is to change the Pixy M4 firmware so that it includes a unique ID with every block, making it possible to refer to a specific block.
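For reference, here is roughly what I have in mind at the protocol level. The first seven fields are what the Pixy already sends for a normal object block; the trailing id word, and the idea that the M4 firmware assigns it, are my additions:

```cpp
// Sketch of a tagged block. All fields except "id" are the standard
// Pixy object block; "id" and its firmware-side assignment are the
// proposed change.
#include <cstdint>

struct TaggedBlock
{
    uint16_t sync;      // 0xaa55 for a normal object block
    uint16_t checksum;  // sum of the words that follow (would now include id)
    uint16_t signature; // trained color signature
    uint16_t x;         // center x
    uint16_t y;         // center y
    uint16_t width;
    uint16_t height;
    uint16_t id;        // NEW: unique, persistent per-block ID
};
```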
The next issue is how to get a PixyMon-like display and incorporate selecting a particular group / block on that display. There seem to be several possible approaches.
The approach that appears to be the least work is to compile PixyMon for the Pi / Raspbian, run a VNC server on the Pi, and connect to it over WiFi from a VNC client on the remote computer. This should allow PixyMon to be operated via the VNC client. libpixyusb and PixyMon would need to change to recognize the unique block ID. In addition, PixyMon would need to change to allow a user to select a block and then send the object info for the selected block to the robot. However, adding block selection would seem to require a fair amount of understanding of Qt and the Qt side of PixyMon.
Instead of VNC-ing PixyMon, I have thought about breaking PixyMon into a client and a server. When it gets down to actually communicating with the Pixy, it looks like PixyMon could be split at the Link class. The Link class has a derived class for USB (USBLink). I was thinking of creating a new class derived from Link, say Network, whose implementation on the remote computer would do some sort of socket communication to the Pi over WiFi (a rough sketch is below). Everything above the Network class would run on the remote Windows computer; everything below it would run on the Pi. The Pi would need a corresponding relay that forwards everything coming from the network down to the Pixy and sends everything coming up from the Pixy back to the remote computer. I don’t know what is expected in general in terms of the transport protocol. Is it send one command, wait for a response, then send another command? Or is it more full-duplex? Or multiple commands in flight at the same time? I would need to get this exactly right for this to work at all. All the PixyMon, libpixyusb, and M4 firmware changes noted above in the VNC solution would still be required.
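To make that concrete, here is a minimal sketch of the remote-side class, assuming Link exposes virtual send() and receive() methods (that is how I read USBLink; any other pure virtuals in Link would need implementations too). The class name, the method signatures, and the use of QTcpSocket are all my assumptions:

```cpp
// Hypothetical NetworkLink for the remote (PixyMon) side: same role as
// USBLink, but tunnels the bytes over TCP to a relay on the Pi instead
// of over USB.
#include <QTcpSocket>
#include <cstdint>
#include "link.h"   // PixyMon's Link base class (assumed header name)

class NetworkLink : public Link
{
public:
    int open(const QString &host, quint16 port)
    {
        m_socket.connectToHost(host, port);
        return m_socket.waitForConnected(3000) ? 0 : -1;
    }

    // Outgoing command bytes go to the Pi relay unchanged.
    virtual int send(const uint8_t *data, uint32_t len, uint16_t timeoutMs)
    {
        qint64 n = m_socket.write(reinterpret_cast<const char *>(data), len);
        if (!m_socket.waitForBytesWritten(timeoutMs))
            return -1;
        return static_cast<int>(n);
    }

    // Incoming bytes are whatever the Pi relay read back from the Pixy.
    virtual int receive(uint8_t *data, uint32_t len, uint16_t timeoutMs)
    {
        while (m_socket.bytesAvailable() < static_cast<qint64>(len))
            if (!m_socket.waitForReadyRead(timeoutMs))
                return -1;
        return static_cast<int>(
            m_socket.read(reinterpret_cast<char *>(data), len));
    }

private:
    QTcpSocket m_socket;
};
```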
Another approach would be to not rely on PixyMon at all and instead do something with the cam_getFrame command that is discussed on this forum. This would mean grabbing raw frames and then drawing the detected object blocks over them, the way PixyMon does. Some of this could happen on the Pi, or everything could be sent to the remote Windows computer and happen there. This seems like a lot of tricky graphics work, and keeping the frame rate decent might be an issue. I wouldn’t need to change the M4 firmware, because I could tag every object block coming out of libpixyusb on the Pi with an ID. A rough sketch of the frame grab is below.
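For what it’s worth, here is roughly the frame-grab call as I understand it from the forum examples, via libpixyusb’s generic pixy_command(). The mode byte, the argument type codes, and the 320x200 size are copied from memory of that thread, so treat them as unverified:

```cpp
// Hedged sketch: grab one raw frame from the Pixy over USB using
// libpixyusb's generic chirp call. The type codes (0x01 = int8,
// 0x02 = int16) and mode 0x21 follow the forum example and are
// assumptions on my part.
#include <pixy.h>
#include <cstdint>

int grab_frame(uint8_t **pixels, uint16_t *width, uint16_t *height)
{
    int32_t response = 0, fourcc = 0;
    int8_t render_flag = 0;
    uint16_t xwidth = 0, ywidth = 0;
    uint32_t size = 0;

    int ret = pixy_command("cam_getFrame",  // remote procedure name
                           0x01, 0x21,      // mode
                           0x02, 0,         // x offset
                           0x02, 0,         // y offset
                           0x02, 320,       // frame width
                           0x02, 200,       // frame height
                           0,               // end of outgoing arguments
                           &response,       // return status
                           &fourcc,         // pixel format code
                           &render_flag,
                           &xwidth,         // returned width
                           &ywidth,         // returned height
                           &size,           // returned byte count
                           pixels,          // pointer to the frame data
                           0);              // end of return arguments
    *width = xwidth;
    *height = ywidth;
    return ret;
}
```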
I suppose the most basic approach, one that doesn’t really implement the full concept, is to just send all the object block info the Pi receives from the Pixy to the remote computer and write an application there that displays the blocks without the video frames. Maybe that is more useful than nothing. A sketch of the Pi side of that is below.
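In case it helps the discussion, here is a rough sketch of that Pi side, using the documented libpixyusb block API and a plain TCP socket. The port number and the one-line-per-block text format are arbitrary choices of mine, and the loop index is just a stand-in for a real per-block ID:

```cpp
// Rough sketch: read blocks from the Pixy with libpixyusb and stream
// them as text lines to whatever display application connects.
#include <pixy.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    pixy_init();

    // Plain TCP listener for the remote display application.
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5010);            // arbitrary port
    bind(srv, (sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);
    int client = accept(srv, nullptr, nullptr);

    Block blocks[25];
    for (;;)
    {
        if (!pixy_blocks_are_new())
        {
            usleep(10000);                  // don't spin while waiting
            continue;
        }
        int n = pixy_get_blocks(25, blocks);
        for (int i = 0; i < n; i++)
        {
            char line[80];
            // One text line per block: stand-in ID, then the block fields.
            int len = snprintf(line, sizeof(line), "%d %d %d %d %d %d\n",
                               i, blocks[i].signature, blocks[i].x,
                               blocks[i].y, blocks[i].width,
                               blocks[i].height);
            send(client, line, len, 0);
        }
    }
    pixy_close();   // unreachable here, but shown for completeness
    return 0;
}
```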
Sorry for my long message.
All input / feedback is very much appreciated.
Thanks,
Sal