I have a custom piece of hardware that uses the XEM-3005 as its brains. My application code on the Mac tells my FPGA code how many packets of data to acquire and over what time period. This varies from a minimum of 4 packets over one second to a maximum of 400 packets over approximately 25 ms. Each packet consists of 2048 bytes, and I can’t afford to miss a single byte.
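To put numbers on what the link has to sustain, here is a quick sanity check of the two extremes from the figures above (4 packets in 1 s versus 400 packets in 25 ms):

```python
PACKET_BYTES = 2048  # each packet is 2048 bytes

def throughput(n_packets, seconds):
    """Sustained USB throughput (bytes/second) needed to keep up."""
    return n_packets * PACKET_BYTES / seconds

# Slow case: 4 packets over one second -> 8 KiB/s
# Fast case: 400 packets over 25 ms   -> roughly 32.8 MB/s
```

So the worst case is about 32.8 MB/s for a 25 ms burst, which is well within USB 2.0 bandwidth but leaves little room for host-side scheduling hiccups.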
I had developed a fairly workable approach. I would write the key parameters to the FPGA using SetWireInValue. Then I would ActivateTriggerIn to start the acquisition sequence on the FPGA. Immediately after that, I would use ReadFromBlockPipeOut to acquire the number of bytes that were expected. I did all these steps in a real-time thread, which is of course a soft concept on the Mac.
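For reference, the sequence looks roughly like this in the FrontPanel Python API. This is only a sketch of the steps described above: the endpoint addresses (0x00, 0x40, 0xA0), the trigger bit, and the block size are placeholders, not values from my actual design.

```python
PACKET_BYTES = 2048  # each packet is 2048 bytes

def acquire(xem, n_packets, block_size=1024):
    """Sketch of the acquisition sequence: write parameters, trigger,
    then block-pipe the expected number of bytes back out.

    `xem` is an opened FrontPanel device (e.g. ok.okCFrontPanel()).
    Endpoint addresses and the block size here are hypothetical.
    """
    buf = bytearray(n_packets * PACKET_BYTES)
    xem.SetWireInValue(0x00, n_packets)   # key parameters -> wire-in
    xem.UpdateWireIns()                   # latch wire-ins into the FPGA
    xem.ActivateTriggerIn(0x40, 0)        # start the acquisition sequence
    ret = xem.ReadFromBlockPipeOut(0xA0, block_size, buf)
    if ret != len(buf):                   # negative ret is an error code
        raise IOError("short read: got %d of %d bytes" % (ret, len(buf)))
    return buf
```

The failure mode I'm seeing is effectively a short or corrupted read during that final call when the host thread doesn't get scheduled in time.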
This approach worked, with some careful tuning of the real-time thread parameters. Lucky me, though: I got a new MacBook Pro, and the old code no longer works. I drop data, which is fatal in this application.
I can try to hand-tune the real-time thread parameters and hopefully get it right, but it seems to me that there must be a better way. I’m open to any and all suggestions, although I’d rather avoid buffering all the acquired data in on-board RAM and doing a bulk transfer, just because I want to minimize my changes.
Thanks in advance for any insights.