Well, first of all, I’m a career software engineer. When I first came across this issue with Pixycam I took a good hard look at the software that’s supplied with it, as well as the specs for the various chips. From the specs I could see that the sensor does indeed support settings that select a sub-section of the overall high-resolution image field. And in the software written by the Pixycam engineers there are constants defined specifically for that purpose, as well as code that sure looks like it’s intended precisely for zeroing in on subsections of the chip’s pixels. All of those constants and that code are commented out, but it sure looks like at least one of the engineers recognized the potential and at least attempted to set things up so that it could be possible.
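For anyone curious what that kind of windowing typically looks like, here’s a minimal sketch in C++. To be clear, the register addresses, the writeRegister() helper, and the resolution numbers below are all made up for illustration; they are NOT the actual Pixycam firmware constants, just the general shape that sensor region-of-interest configuration usually takes:

```cpp
#include <cstdint>
#include <cstdio>

// Stub standing in for the platform's I2C/SPI register write.
void writeRegister(uint16_t reg, uint16_t value) {
    std::printf("reg 0x%02X <- %u\n", (unsigned)reg, (unsigned)value);
}

constexpr uint16_t SENSOR_WIDTH  = 1280;  // assumed full-resolution width
constexpr uint16_t SENSOR_HEIGHT = 800;   // assumed full-resolution height

// Select a sub-window (region of interest) of the full pixel array.
void setSensorWindow(uint16_t x, uint16_t y, uint16_t w, uint16_t h) {
    writeRegister(0x01, x);  // hypothetical X start register
    writeRegister(0x02, y);  // hypothetical Y start register
    writeRegister(0x03, w);  // hypothetical window width register
    writeRegister(0x04, h);  // hypothetical window height register
}

int main() {
    // Zero in on a small window in the middle of the sensor.
    setSensorWindow((SENSOR_WIDTH - 316) / 2, (SENSOR_HEIGHT - 208) / 2, 316, 208);
}
```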
As for my project: I want to provide a low-cost means of supplying high-accuracy location data to a mobile robot. In particular it would be used by small Christmas tree farms to control a mowing robot that keeps the grass cut between the trees. That effort, keeping the grass cut, is a time-consuming and expensive part of the business. And while general robot mowers do indeed exist, they aren’t suitable for this particular application because the location data they can access, like GPS, is way too inaccurate: it only tells them where they are to within about ten feet. For this mowing-between-the-trees application the robots need to know their location to within inches, not only to avoid bumping into existing, mostly grown trees but also to avoid mowing OVER newly planted seedlings.
My idea is to position several digital cameras, with image processing capabilities like those in the Pixycam, around the corners of the field. The mowing robot will carry a vertical shaft, one that’s tall enough to reach above the height of the full-grown trees and thereby be visible to all the cameras. The top of the shaft would have a set of uniquely colored bands to make it stand out against the generally green background of all the trees. Each camera would track the location of that shaft, and implicitly the robot, by analyzing its field of view and identifying where within that field of view the colored bands appear. Each camera would know its own position relative to the others, know the extent of its field of view, and be able to calculate an angle value for the detected shaft. The cameras would transmit that data to the robot in real time, and the robot’s onboard processor would use it to very accurately calculate its position within the field, allowing it to easily traverse the field, cutting the grass, while avoiding the trees.
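To make the geometry concrete, here’s a minimal sketch (again in C++) of the position calculation using just two of the cameras. The field size, camera positions, and angles are made-up numbers, and a real system would fuse the bearings from all the cameras and discard near-parallel pairs; the point is only that two known camera positions plus two bearing angles pin the robot down as the intersection of two rays:

```cpp
#include <cmath>
#include <cstdio>

struct Point { double x, y; };

// Intersect two bearing rays: each camera sits at a known position and
// reports the angle (radians, measured from the field's +x axis) at
// which it sees the robot's marker shaft.
Point triangulate(Point c1, double theta1, Point c2, double theta2)
{
    // Ray i: (x, y) = ci + ti * (cos(theta_i), sin(theta_i)).
    double d1x = std::cos(theta1), d1y = std::sin(theta1);
    double d2x = std::cos(theta2), d2y = std::sin(theta2);

    // Solve c1 + t1*d1 = c2 + t2*d2 for t1 via the 2D cross product.
    double denom = d1x * d2y - d1y * d2x;  // near 0 when rays are parallel
    double t1 = ((c2.x - c1.x) * d2y - (c2.y - c1.y) * d2x) / denom;

    return { c1.x + t1 * d1x, c1.y + t1 * d1y };
}

int main()
{
    // Cameras at two corners of a 300 ft x 300 ft field (made-up layout).
    Point camA = { 0.0, 0.0 }, camB = { 300.0, 0.0 };
    // Bearings each camera would report for a robot at roughly (120, 90).
    double a = std::atan2(90.0, 120.0);          // seen from camA
    double b = std::atan2(90.0, 120.0 - 300.0);  // seen from camB
    Point robot = triangulate(camA, a, camB, b);
    std::printf("robot at (%.1f, %.1f)\n", robot.x, robot.y);
}
```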
A similar approach would initially be used to map out the locations of all the trees, with that data stored in the robot’s memory.
The overall idea is that the farmer, recognizing that the field needs cutting, can just turn on the robot, perhaps tell it WHICH field needs mowing, and send it on its way to do the work, while the farmer is free to attend to the hundreds of other tasks he has to do.
This project could easily lead to a very viable product, not just for Christmas tree farmers but for numerous other agricultural ventures.
While the Pixycam, as configured for low-resolution use of the image chip, is fine for small work areas, like a table top or a small room, it appears to me that by allowing the camera to selectively zero in on specific high-resolution subsections it could be useful over much larger operating areas.
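Some rough numbers show why that matters. Assuming, purely for illustration, a 60-degree horizontal field of view, a 316-pixel-wide low-resolution mode, a 1296-pixel-wide full sensor, and a tracked shaft 300 feet away:

```cpp
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

// How much ground one pixel spans at a given range (all inputs assumed).
double inchesPerPixel(double fovDeg, int widthPx, double rangeFt) {
    double pixelAngleRad = (fovDeg * kPi / 180.0) / widthPx;
    return rangeFt * 12.0 * std::tan(pixelAngleRad);  // feet -> inches
}

int main() {
    std::printf("low-res  (316 px):  %.1f in/pixel\n", inchesPerPixel(60.0, 316, 300.0));
    std::printf("full-res (1296 px): %.1f in/pixel\n", inchesPerPixel(60.0, 1296, 300.0));
}
```

Under those assumptions the low-resolution mode resolves only about twelve inches per pixel at that range, while a window into the full-resolution sensor gets down to roughly three inches per pixel, which is exactly the territory this application needs.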
My hope is that Pixycam will make that zero-in capability accessible to us programmers so that we can experiment with its capabilities. And while the Pixycam itself might not actually be suitable for an area as large as a farmer’s field, or be able to operate in the harsh outdoor environment, it would certainly provide a TEST BASE for exploring the possibilities.
And if it all works out as I plan, I would certainly be looking for a company that has the image processing technologies (Pixycam?) and that could turn it into a full-fledged product, or at least license out the technology.
I’m convinced that I can’t be the only person who could make use of the zero-in capability. At the very least it would add functionality that could spur even more inventive exploration by its users. You can only go so far with tracking a bouncing ball or following a line. Increase its capabilities and you increase your market.