WriteToBlockPipeIn unreliable

I am working on migrating from the XEM6010 module to the ZEM4310.

The problem I’m having now is that WriteToBlockPipeIn starts writing random data after a few iterations.

This is my system:

ZEM4310 module
FrontPanel 4.4.2
Python API
Windows 8.1 and Mac OS X Yosemite (both show the same problem)

Here is the test code.

import time
# `xem` is an okCFrontPanel handle that has already been opened and
# used to configure the FPGA.

for i in range(4000):
    # 32-byte block, first pattern (0x0004 in the fourth 16-bit word)
    logic_0 = bytearray.fromhex(u'0000 0000 0000 0004 0000 0000 0000 0000')
    logic_1 = bytearray.fromhex(u'0000 0000 0000 0000 0000 0000 0000 0000')
    data = logic_0 + logic_1
    xem.ActivateTriggerIn(0x40, 4)
    xem.WriteToBlockPipeIn(0x81, 16, data)
    time.sleep(0.1)

    # 32-byte block, second pattern (0x0002 in the fourth 16-bit word)
    logic_0 = bytearray.fromhex(u'0000 0000 0000 0002 0000 0000 0000 0000')
    logic_1 = bytearray.fromhex(u'0000 0000 0000 0000 0000 0000 0000 0000')
    data = logic_0 + logic_1
    xem.ActivateTriggerIn(0x40, 4)
    xem.WriteToBlockPipeIn(0x81, 16, data)
    time.sleep(0.1)

I have the ZEM4310 hooked up to a DAC so I can monitor what is written to it. This loop works fine for about 1300 iterations, then the written data gets stuck at some random bits (which, for some reason, are always the same). I also tried FrontPanel 4.3.1 and it has the same problem.

What does work: if I reconfigure the FPGA on every iteration of the loop, then it runs fine all the way to the 4000th iteration.

Any comment or feedback is highly appreciated.

thetorque

How do you manage EP_READY?

I set it to be always ‘1’, which worked in the old design with the XEM6010 module. I did the same thing here with the ZEM4310. Might this cause the problem? How should I manage it?
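Independent of how EP_READY is driven on the FPGA side, one quick software-side check is to look at the value WriteToBlockPipeIn returns: a negative value is a FrontPanel error code, and a non-negative value is the byte count actually transferred. A minimal sketch, assuming `xem` is an already-opened and configured okCFrontPanel handle (the helper name `checked_block_write` is mine, not part of the API):

```python
def checked_block_write(xem, ep_addr, block_size, data):
    """Write `data` to a BTPipeIn and raise instead of silently ignoring errors."""
    ret = xem.WriteToBlockPipeIn(ep_addr, block_size, data)
    if ret < 0:
        # Negative return values are ok_ErrorCode values (e.g. a timeout).
        raise RuntimeError("WriteToBlockPipeIn failed with error code %d" % ret)
    if ret != len(data):
        raise RuntimeError("short write: %d of %d bytes" % (ret, len(data)))
    return ret
```

If the calls keep reporting success while the DAC shows garbage, that would point at the FPGA side (EP_READY/okAA) rather than the host transfer.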

I have some more data from a quick experiment.

1.) I tried with plain WriteToPipeIn (no block throttling) and the result is the same.

2.) About the timing: it seems the data transfer gets corrupted after a fixed amount of time. In the code above, if I change the delay in time.sleep to 0.05 seconds, the transfer is corrupted at about the 60th iteration. If I change it to 0.1 seconds, it is corrupted at about the 20th iteration. If I change it to 1 second, it goes bad after only 2 or 3 tries.

So to me this doesn’t look like a problem with the number of transfers, but more like a timing or time-out issue.
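The elapsed-time hypothesis above can be tested directly by recording wall-clock time instead of the iteration count. A small sketch; `do_transfer` is a placeholder of mine for one write plus whatever success check you have available (e.g. reading the DAC back):

```python
import time

def time_to_failure(do_transfer, delay=0.1, max_iters=4000):
    """Run transfers until one fails; report iteration index and seconds elapsed."""
    start = time.monotonic()
    for i in range(max_iters):
        if not do_transfer():
            return i, time.monotonic() - start
        time.sleep(delay)
    return None  # no failure observed
```

If the failure really is time-based, the reported seconds should stay roughly constant as you vary `delay`, while the iteration index changes.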

A bit more information.

I just implemented exactly the same thing on the XEM6010 and it works fine.

Please submit a request to our support email. It would be best if you could include a very simple HDL and application that can be used to reproduce this issue.

Ok, no problem. I will prepare a simple HDL design that reproduces this issue and submit a request. For now I will just use the old XEM6010 module to run my experiment.

Was this problem ever resolved?
I’m getting exactly the same problem with the ZEM4310, for a block pipe out, using Python.
It seems to fail 7 seconds after configuration.

Make sure you have connected the okAA signal correctly.

I have a very similar issue with ReadFromBlockPipeOut on the ZEM4310. The data transfer is fine until a fixed number of transfers, which depends on the PLL clock frequency I use to drive the internal state machines and FIFO interface. For example, it takes 883 transfers at 100 MHz, but only 88 transfers at 10 MHz, before the transfer fails.

When it fails, the function still returns the full amount of data transferred (122880 bytes in my case) in a single transfer and does not time out, but the transferred data is not what I send from the FPGA; it is some fixed pattern (13, 240, 173, 222 in decimal, repeating).
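Incidentally, those bytes (13, 240, 173, 222 = 0x0D 0xF0 0xAD 0xDE) read as a little-endian 32-bit word spell 0xDEADF00D, which looks like a deliberate fill pattern rather than random corruption. A minimal sketch of mine to detect this failure mode in the buffer returned by ReadFromBlockPipeOut:

```python
# Repeating byte pattern observed when the transfer fails (0xDEADF00D
# as a little-endian 32-bit word).
FAIL_PATTERN = bytes([13, 240, 173, 222])

def is_failure_pattern(buf):
    """True if `buf` consists entirely of the repeating failure pattern."""
    if len(buf) == 0 or len(buf) % 4 != 0:
        return False
    return all(bytes(buf[i:i + 4]) == FAIL_PATTERN
               for i in range(0, len(buf), 4))
```

Checking each read with this makes it easy to log exactly which transfer (and how many seconds after configuration) the failure starts.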

I wonder how I should check the okAA signal connection; it seems like I am following what is described in the samples.

Also, I found that the .sdc file for okHost does not have okAA constraints.

I solved the issue above. For those who might experience a similar issue in the future, here is my summary:

Device - ZEM4310
Issue - After ~7 seconds of successful transfers, ReadFromBlockPipeOut reads incorrect values from the FPGA FIFO: a repeating pattern of 13, 240, 173, 222 in decimal. It does not depend on the amount of data transferred.
Environment - Quartus Prime 20.1.0 Lite, FrontPanel 5.2.3, Python 3.7

Solution
Support’s replies on a couple of related posts suggest that this is an “okAA” pin connection issue, either in the Verilog or in the QSF (I/O assignments), and in my case it was indeed the okAA pin. I re-created the Quartus project and migrated all the files. After that, I cleared the project and imported the QSF again to make sure Quartus had the correct I/O assignments, specifically for the okAA pin. One tip: give your QSF a different name from your project (or top-level entity) before you import it, because Quartus seems to recreate and overwrite a QSF whose name matches the project when you build the design.
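Since the failure mode was a silently missing I/O assignment, a quick way to sanity-check a rebuilt project is to scan the QSF for the okAA lines before compiling. A small sketch, assuming nothing about the QSF contents beyond the signal name appearing in its assignment lines (the filename is your own project's .qsf):

```python
def find_okaa_assignments(qsf_path):
    """Return (line number, text) for every QSF line mentioning okAA."""
    hits = []
    with open(qsf_path) as f:
        for lineno, line in enumerate(f, 1):
            if "okAA" in line:
                hits.append((lineno, line.strip()))
    return hits

# Usage (path is an example):
# for lineno, line in find_okaa_assignments("my_project.qsf"):
#     print(lineno, line)
```

If this returns nothing after a rebuild, the okAA location and I/O-standard assignments were lost and need to be re-imported.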