A strange problem with OSX, XEM, and ReadFromPipeOut

I am running a custom piece of C++ code to interface to an XEM3001v2. To make it even more interesting, I happen to be running on the Mac OS X platform. I have my own VHDL code on the XEM side.

In my application I generate new chunks of 16-bit × 4096 data on the XEM side about 2000 times per second, and I need to transfer all of it to the Mac with absolutely no losses. In an ideal world I’d do this for about 2 seconds. I found very quickly that I can’t possibly poll the trigger lines fast enough to grab every 1/2000th-of-a-second chunk, so I had to find a workaround: I set up a large (many-MB) block transfer using the ReadFromPipeOut function. This works reasonably well, with a few issues…
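The workaround amounts to one huge blocking read into a pre-allocated buffer. A minimal sketch of the idea — note that `ReadFromPipeOut` below is a stand-in with the same shape as the FrontPanel call `okCFrontPanel::ReadFromPipeOut(int epAddr, long length, unsigned char *data)`, stubbed out here so the sketch runs on its own, and `0xA0` is just an illustrative pipe-out endpoint address:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Stand-in for okCFrontPanel::ReadFromPipeOut(int epAddr, long length,
// unsigned char *data). The real call blocks until `length` bytes have
// streamed in from the FPGA; this stub just fills the buffer so the
// sketch is self-contained.
long ReadFromPipeOut(int epAddr, long length, unsigned char* data) {
    std::memset(data, 0xFF, static_cast<size_t>(length));  // pretend data
    return length;  // the real call returns bytes read, or a negative error
}

// One large blocking transfer: grab `bytes` of the stream in a single call
// rather than trying to poll triggers for every 1/2000 s chunk.
std::vector<unsigned char> grabBlock(long bytes) {
    std::vector<unsigned char> buf(static_cast<size_t>(bytes));  // heap, not stack
    long got = ReadFromPipeOut(0xA0, bytes, buf.data());  // hypothetical endpoint
    assert(got == bytes);
    return buf;
}
```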

  • I embed codes in my data stream to indicate which portions of the transferred data are good and which parts are just filling time in the stream.

  • I had to play a trick to make sure that the acquisition starts at some fixed point after I start the ReadFromPipeOut. This makes sure all the pipelines are full before my real data starts coming over; if I don’t do this, I lose some of the early data (the most critical for my application).

  • The above step works reasonably well, but I found that the variation in startup latency is huge. This can be managed by making this a real-time thread.

All of this is now working, and although it is vastly less efficient than would be a system where I could control transfer from the XEM side, it generally works.

The one issue I can’t work around, though, is the maximum block transfer size. According to the documentation I found, the buffer size should be just a hair under 16 MB, and I found empirically that if I set it larger than that I get a segmentation fault. I also found that if I keep it under 11 MB, everything works as I’d hoped. The strange part is the window between 11 MB and 16 MB: at the low end things work okay, but by the time I reach 12 MB I am getting back data which suggests that my state machine on the XEM did not start up until extremely late in the process.

This should not happen unless the OS X side has some huge latency between when it first requests data and when the data starts flowing smoothly, AND that latency only kicks in when the buffer size goes over some threshold. I am doubtful about that idea, as the effect is extremely reproducible. I can’t rule out some other issue, but I am quite confident it is not on the XEM side, since my code there is quite dumb and has no idea how much data the Mac has requested. Any ideas on what is happening here?

Note that at 11 MB I can grab about 0.6 seconds’ worth of data, which is probably okay for my application. I project that going up to 16 MB would let me reach ~0.85 seconds, which would pretty much guarantee success.
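For reference, the raw stream rate works out to 2 bytes × 4096 words × 2000 chunks/s ≈ 16.4 MB/s, so the figures above follow directly from the buffer size; the quoted usable durations are somewhat lower than these upper bounds, presumably because filler frames consume part of the buffer. A quick check of the arithmetic:

```cpp
// Raw pipe data rate: 4096 16-bit words per chunk, 2000 chunks per second.
constexpr double kBytesPerSecond = 4096.0 * 2.0 * 2000.0;  // 16,384,000 B/s

// Upper bound on the seconds of acquisition a buffer of `bytes` can hold.
double secondsOfData(double bytes) { return bytes / kBytesPerSecond; }
```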



How did you get things working on OSX? I didn’t see any *.dylib files on the FrontPanel CD. I thought only Windows and Linux were supported. Also, have you got some way of running the Xilinx ISE software under OSX?



While Xilinx does not support Mac OS X, we do support it through our API. You will not be able to build a bitfile on Mac OS X, but you can distribute your application on the Mac. We have many customers using Mac OS X because their end customers use a device designed around the XEM — and those end customers aren’t doing any FPGA programming.

The Mac and Linux versions were previously available only through our website. However, with our new FrontPanel-3 releases, we are shipping installers on the CD for Mac and Linux.

A few clarifications and answers are in order:

  1. At the time of my earlier post I was running ISE 8.1 under Virtual PC on a PowerPC PowerBook. That worked fine, but was painfully slow.

  2. I have migrated to ISE 9.1 under Parallels on an Intel MacBook. It works great, except that ISE chokes when trying to work across to the Mac side of the disk. That means you generate the programming file in a Windows directory and then manually copy it over to the Mac side. Even so, it is colossally faster than running under Virtual PC.

  3. My problem in the past (the reason for the original post) turned out to be a newbie C++ error… I was defining my buffer variable inside a subroutine, which meant it was placed on the stack. At the larger buffer sizes I was running past the end of the stack. Easy fix: just move the buffer out of the subroutine, and I’m up to full length.

I’m now onto trying to get BTPipes working…