Hello,
Can we have the complete datasheet of the OV9715?
I want to know the SCCB registers to understand how the sensor works.
Omnivision requires that you sign an NDA to get the datasheet for this part. When we signed the NDA, the distributor gave us a firm warning that we would get no support (from Omnivision) unless we buy a couple hundred thousand units per year. Then they sent us a datasheet with our company name watermarked into the PDF!
Crazy, yes? It’s pretty typical though, for CMOS imagers. These parts are not meant for us hobbyists/roboticists/tinkerers.
If you have a specific question, maybe I can answer.
Hey Rich,
Per your offer above to potentially help with a specific question:
I see the Omnivision camera offers a windowed mode.
2 questions -
Thanks for any help you may be able to provide.
Hello Sam,
I believe the windowed modes on the Omnivision imager are just cropping modes. What if you just cropped the output of Pixy? Maybe I’m not completely understanding what you are trying to do. Do you want to run different processing on each window?
Edward
Hey Edward,
Ultimately, I’m trying to build a line-following robot.
However, the line to be followed is not the smoothly curving, circuitous track I've seen in a lot of line-following examples. Instead, it has several 90-degree right/left turns, some of which occur as intersections with the main track (where the goal is to follow the offshoot from the main track to its end, do some stuff, then return).
Because of the nature of the track, a simple PID algorithm whose error signal is just the left/right position of the returned block's centroid will not work.
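By "a simple PID algorithm" I mean roughly this (untested sketch; the gains and the steer() helper are placeholders, not anything from the Pixy library):

```cpp
#include <SPI.h>
#include <Pixy.h>

Pixy pixy;
const int FRAME_CENTER_X = 160;  // Pixy block x runs 0..319
float Kp = 0.5, Kd = 0.1;        // placeholder gains
float lastError = 0;

// Placeholder: map a signed steering value onto the drive motors.
void steer(float s) { /* motor mixing goes here */ }

void setup() {
  pixy.init();
}

void loop() {
  uint16_t blocks = pixy.getBlocks();
  if (blocks) {
    // Error is just the left/right offset of the biggest block's centroid.
    float error = pixy.blocks[0].x - FRAME_CENTER_X;
    steer(Kp * error + Kd * (error - lastError));
    lastError = error;
  }
}
```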
An example of how the decision making is complicated (assume the direction of travel is vertical in the diagram):
This line:

```
----------
|\       |
| \      |
|  \     |
|   \    |
|    \   |
|     \  |
|      \ |
|       \|
----------
```

would look identical to this line:

```
----------
|       /|
|      / |
|     /  |
|    /   |
|   /    |
|  /     |
| /      |
|/       |
----------
```

if I only read the line as a single, un-windowed frame. However, if I'm able to window the frames so that I capture one frame with only the top half of the image included in the data generation, then capture the next with only the bottom half, I can use the offset between the two centroids to determine an angular offset from vertical, which I can feed into my control system as the error signal. Windowing would also help as I approach those hard-angled turns, both in determining which direction an upcoming turn will be and in determining when I've traveled far enough toward the corner to execute the turn without completely undershooting the line after it.
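The math I'd want from the two windowed centroids is simple, something like this (sketch; WINDOW_DY is an assumed vertical spacing, in pixels, between the centers of the two windows):

```cpp
#include <math.h>

// Assumed vertical distance between the centers of the top and bottom
// windows, in the same pixel units as the centroid x values.
const float WINDOW_DY = 100.0f;

// Angular offset of the line from vertical, in radians. Positive means
// the line leans one way, negative the other.
float angleFromVertical(float xTop, float xBottom) {
  return atan2f(xTop - xBottom, WINDOW_DY);
}
```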
My current solution has me illuminating a row of LEDs in front of the camera (to illuminate only the front half of its field of view), taking a frame, then illuminating another row of LEDs behind the camera (illuminating the back half of its field of view) and taking another frame after waiting 20 ms. Besides the fact that spillover between the front and back LEDs causes inconsistent overlap between the two areas of illumination (which windowing would fix, since I could specify exactly how much of the image to pay attention to), the flashing lights cause a much more serious problem: a partial-exposure line (see http://dvxuser.com/jason/CMOS-CCD/ for an example and description) that moves vertically across the video (as seen in PixyMon). This moving partial-exposure line actually affects the data Pixy sends back, so even with my robot stationary, the data varies by enough to be unusable for this purpose.
If I could generate my data from windowed frames, I could leave the LEDs on permanently, removing the partial-exposure artifacts and speeding up my loop time significantly. (My current fix for the partial-exposure problem is to leave one row of LEDs illuminated for ~100 ms and average the data I capture over that interval. That puts about 200 ms between each new frame of data, which is a lot of time for a robot moving at any speed above a crawl.)
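For reference, the workaround loop looks something like this (simplified; the LED pin numbers are placeholders for my wiring):

```cpp
#include <SPI.h>
#include <Pixy.h>

Pixy pixy;
const int FRONT_LEDS = 7;  // placeholder pins for the two LED banks
const int BACK_LEDS  = 8;

// Light one bank, then average the centroid x over ~100 ms of frames
// so the moving partial-exposure line mostly washes out.
float sampleCentroidX(int onPin, int offPin) {
  digitalWrite(offPin, LOW);
  digitalWrite(onPin, HIGH);
  unsigned long start = millis();
  float sum = 0;
  int n = 0;
  while (millis() - start < 100) {
    uint16_t blocks = pixy.getBlocks();
    if (blocks) {
      sum += pixy.blocks[0].x;
      n++;
    }
  }
  return n ? sum / n : -1;  // -1 means no block seen
}

void setup() {
  pinMode(FRONT_LEDS, OUTPUT);
  pinMode(BACK_LEDS, OUTPUT);
  pixy.init();
}

void loop() {
  float xFront = sampleCentroidX(FRONT_LEDS, BACK_LEDS);
  float xBack  = sampleCentroidX(BACK_LEDS, FRONT_LEDS);
  // ~200 ms have now passed for a single pair of measurements.
}
```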
I've seen that the CMUcam4 has options for windowing, but I'm afraid I won't have the money or time to buy one and integrate it into my current project, so I'm really searching for ways to make Pixy work for me.
I am aware that Pixy is not the best solution for this problem, and that there are many simpler and more elegant solutions out there. However, at this point, my only option is to work with what I have, so I’m doing my best to make it work.
Thanks again for your help so far, and any further help or suggestions you may have.
Sam
Hello Sam,
Some kind of line-detecting algorithm is on our list of things to add. I agree that the color connected components algorithm is not a good fit for this. Regarding windowing, you would need a way to dynamically change the window on the fly, is that correct? What controller are you using (Arduino, Raspberry Pi)?
Edward
Hey Edward,
Currently Arduino, but I use embedded Linux systems at work, so I could switch over to a Raspberry Pi or BeagleBone for this without too much trouble. I'm basically trying to avoid the expense of the CMUcam4, since I already have a Raspberry Pi I could use. I realize the Arduino library has no way of dynamically changing the windowing, but I was under the impression that the RPi uses libpixyusb to communicate with Pixy. I figured I'd ask whether it was possible first, test it with PixyMon, then cross the bridge of changing micros when I came to it.
Thanks,
Sam
Hello Sam,
You are correct about libpixyusb. It is more flexible than Pixy's serial protocol. Currently there is no way to dynamically change the processing window; it's something that could be added, possibly by us or by a user.
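For reference, the basic libpixyusb receive loop looks roughly like this (from memory; check pixy.h in the libpixyusb source for the exact signatures). A dynamic processing window would mean a new pixy_command() plus matching firmware support, which is the part that doesn't exist yet:

```cpp
#include <stdio.h>
#include "pixy.h"  // from libpixyusb

#define MAX_BLOCKS 25

int main() {
  int ret = pixy_init();
  if (ret != 0) {
    pixy_error(ret);  // print a human-readable error
    return 1;
  }
  struct Block blocks[MAX_BLOCKS];
  for (;;) {
    if (!pixy_blocks_are_new())
      continue;
    int n = pixy_get_blocks(MAX_BLOCKS, blocks);
    for (int i = 0; i < n; i++)
      printf("sig %d: x=%d y=%d w=%d h=%d\n",
             blocks[i].signature, blocks[i].x, blocks[i].y,
             blocks[i].width, blocks[i].height);
  }
  pixy_close();
  return 0;
}
```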
Edward
Hey Edward,
Yeah, that confirms what I'd thought.
Honestly, I don't think the firmware change would be worth the effort. A big selling point of Pixy is its simplicity; I was just trying to shoehorn it into doing something it wasn't designed for. At this point, I'm moving on to the CMUcam4, as I'm pretty sure it's better suited to my particular needs.
Thanks for all your help,
Sam
Hello Sam,
Agreed about the simplicity of Pixy. What aspect of the CMUcam4 do you plan on using? (Maybe this is something we should consider adding back.)
Edward
Hey Edward,
While the additional control features initially caught my eye (the ability to turn AEC and AWB off from the Arduino, for example, along with the ability to dynamically change the windowing), the way I believe I'll use it is by retrieving the 80x60 bitmap (each bit a '0' for an untracked pixel and a '1' for a tracked pixel) and analyzing that.
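The analysis I have in mind would be something like this (a sketch, not the actual CMUcam4 API; I'm assuming the bitmap arrives as 60 rows of 80 bits packed MSB-first into 10 bytes per row):

```cpp
#include <stdint.h>

const int ROWS = 60, COLS = 80, BYTES_PER_ROW = COLS / 8;

// Centroid x of tracked ('1') pixels over rows [rowStart, rowEnd).
// Returns -1 if no pixels are set in the band.
float bandCentroidX(const uint8_t *bitmap, int rowStart, int rowEnd) {
  long sumX = 0, count = 0;
  for (int r = rowStart; r < rowEnd; r++)
    for (int c = 0; c < COLS; c++)
      if (bitmap[r * BYTES_PER_ROW + c / 8] & (0x80 >> (c % 8))) {
        sumX += c;
        count++;
      }
  return count ? (float)sumX / count : -1.0f;
}

// Same two-window trick I wanted from Pixy: compare the centroid of the
// top half of the image with the bottom half to get a heading error.
float headingError(const uint8_t *bitmap) {
  float xTop = bandCentroidX(bitmap, 0, ROWS / 2);
  float xBottom = bandCentroidX(bitmap, ROWS / 2, ROWS);
  if (xTop < 0 || xBottom < 0)
    return 0;  // no line visible in one band; handle separately
  return xTop - xBottom;
}
```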
That particular functionality may be outside the use case for most Pixy users, though I do think there could be an advantage to adding control over some of those camera parameters from the Arduino.
Thanks,
Sam