I am running a custom piece of C++ code to interface with an XEM3001v2. To make it even more interesting, I happen to be running on Mac OS X. I have my own VHDL code on the XEM side.
In my application I generate new chunks of data on the XEM side (4096 samples of 16 bits each, about 2000 times per second), and I need to transfer all of it to the Mac with absolutely no losses. In an ideal world I'd do this for about 2 seconds. I found very quickly that I can't poll the trigger lines fast enough to catch every 1/2000th of a second, so I had to find a workaround: I set up a large (many-MB) block transfer using the ReadFromPipeOut function. This works reasonably well, with a few issues.
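For reference, the host side boils down to something like the sketch below. The endpoint address 0xA0, the bitfile name, and the 11 MB length are illustrative placeholders rather than my exact values.

```cpp
#include "okFrontPanelDLL.h"
#include <vector>

int main() {
    okFrontPanelDLL_LoadLib(NULL);        // load the FrontPanel dynamic library
    okCFrontPanel xem;
    if (okCFrontPanel::NoError != xem.OpenBySerial(""))
        return 1;
    if (okCFrontPanel::NoError != xem.ConfigureFPGA("mydesign.bit"))  // placeholder bitfile
        return 1;

    const long len = 11 * 1024 * 1024;    // one big block; see the size discussion below
    std::vector<unsigned char> buf(len);
    long n = xem.ReadFromPipeOut(0xA0, len, buf.data());  // blocks until 'len' bytes arrive
    return (n == len) ? 0 : 1;
}
```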
I embed codes in my data stream to indicate which portions of the transferred data are good and which parts are just filling time in the stream.
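On the host I then strip the filler out after the transfer completes. A minimal sketch of that pass, assuming hypothetical marker values (my real codes differ):

```cpp
#include <cstdint>
#include <cstddef>

const uint16_t FRAME_START = 0xF00D;   // hypothetical "good data follows" code
const uint16_t FILLER_WORD = 0xDEAD;   // hypothetical idle/filler code

// Copy only the words between markers into 'out'; returns the count kept.
size_t extractGood(const uint16_t *in, size_t nWords, uint16_t *out) {
    size_t kept = 0;
    bool inFrame = false;
    for (size_t i = 0; i < nWords; ++i) {
        if (in[i] == FRAME_START) { inFrame = true;  continue; }
        if (in[i] == FILLER_WORD) { inFrame = false; continue; }
        if (inFrame) out[kept++] = in[i];
    }
    return kept;
}
```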
I had to play a trick to make sure that the acquisition starts at some fixed point after I start the ReadFromPipeOut. This ensures the pipelines are full before my real data starts coming over; if I don't do this, I lose some of the early data (the most critical for my application).
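Since ReadFromPipeOut blocks, the shape of the trick is roughly the following: start the read on its own thread, wait a fixed delay, then fire a TriggerIn to start the state machine. The trigger endpoint (0x40, bit 0) and the 50 ms delay are placeholders, and I have not verified that FrontPanel calls are safe to issue from two threads at once, so take this as the idea rather than my exact code.

```cpp
#include "okFrontPanelDLL.h"
#include <thread>
#include <chrono>
#include <vector>

void acquire(okCFrontPanel &xem, std::vector<unsigned char> &buf) {
    // Kick off the blocking bulk read on its own thread.
    std::thread reader([&] {
        xem.ReadFromPipeOut(0xA0, (long)buf.size(), buf.data());
    });
    // Give the host-side request time to be in flight before real data starts.
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    xem.ActivateTriggerIn(0x40, 0);   // "go" signal to the VHDL state machine
    reader.join();
}
```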
The above step works reasonably well, but I found that the variation in startup latency is huge. This can be managed by running the read on a real-time thread.
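On OS X the real-time promotion is done with Mach's time-constraint policy. Something along these lines, with illustrative timing numbers (tune them to your workload):

```cpp
#include <mach/mach.h>
#include <mach/thread_policy.h>
#include <mach/mach_time.h>
#include <cstdint>

// Promote the calling thread to Mach's time-constraint (real-time) class.
static bool makeRealtime() {
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    const double nsPerTick = (double)tb.numer / tb.denom;

    thread_time_constraint_policy_data_t policy;
    policy.period      = (uint32_t)(500e3 / nsPerTick);  // 0.5 ms cycle (illustrative)
    policy.computation = (uint32_t)(100e3 / nsPerTick);  // 0.1 ms of work per cycle
    policy.constraint  = (uint32_t)(250e3 / nsPerTick);  // must finish within 0.25 ms
    policy.preemptible = 1;

    kern_return_t kr = thread_policy_set(
        mach_thread_self(), THREAD_TIME_CONSTRAINT_POLICY,
        (thread_policy_t)&policy, THREAD_TIME_CONSTRAINT_POLICY_COUNT);
    return kr == KERN_SUCCESS;
}
```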
All of this is now in place, and although it is vastly less efficient than a system where I could control the transfer from the XEM side, it generally works.
The one issue I can't work around, though, is the maximum block transfer size. According to the documentation I found, the buffer size limit should be just a hair under 16 MB. Empirically, if I set it larger than that I get a segmentation fault, and if I keep it under 11 MB everything works as I'd hoped.

The strange part is the window between 11 MB and 16 MB. At the low end things work okay, but by the time I reach 12 MB the returned data suggests that my state machine on the XEM did not start up until extremely late in the process. That should not happen unless the OS X side has some huge latency between when it first requests data and when the data starts flowing smoothly, AND this latency only kicks in when the buffer size crosses some threshold. I am doubtful about this idea, as the effect is extremely reproducible. I can't rule out some other issue entirely, but I am quite confident it is not on the XEM side: my code there is quite dumb and has no idea how much data the Mac has requested. Any ideas on what is happening here?
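To make the threshold behavior concrete, my test amounts to a sweep like the one below: the same FPGA bitfile every run, with only the host-side request length changing. checkStartupMarker() is a hypothetical stand-in for my own validation of where the first real data word appears in the stream.

```cpp
#include "okFrontPanelDLL.h"
#include <vector>
#include <cstdint>
#include <cstdio>

// Stand-in check: did the "good data" marker show up near the start of the
// stream? (0xF00D is a placeholder for my real embedded code.)
static bool checkStartupMarker(const std::vector<unsigned char> &buf) {
    const uint16_t *w = reinterpret_cast<const uint16_t*>(buf.data());
    for (size_t i = 0; i < 4096 && i < buf.size() / 2; ++i)
        if (w[i] == 0xF00D) return true;
    return false;
}

void sweep(okCFrontPanel &xem) {
    for (long mb = 10; mb <= 16; ++mb) {
        const long len = mb * 1024L * 1024L;
        std::vector<unsigned char> buf(len);
        long n = xem.ReadFromPipeOut(0xA0, len, buf.data());
        std::printf("%ld MB: read %ld bytes, early data %s\n",
                    mb, n, checkStartupMarker(buf) ? "OK" : "LATE");
    }
}
```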
Note that at 11 MB I can grab about 0.6 seconds' worth of data (the raw payload rate is 2 bytes × 4096 samples × 2000/s ≈ 16.4 MB/s, and part of the stream is filler), which is probably okay for my application. I project that going up to 16 MB would let me reach ~0.85 seconds, which would pretty much guarantee success.