Timeout is a condition that isn’t generally appropriate for USB devices. In a well-behaved, well-designed system, it simply shouldn’t happen. If it does, it’s representative of a “catastrophic” issue. The operating system can handle it, because any good operating system should… but it might not be pretty.
USB is not like Ethernet. There shouldn’t be connectivity issues. There shouldn’t be unexpected delays. There aren’t reroutes due to outages.
This is not to say that a system involving USB or even a system involving BTPipes cannot time out. It just means that the protocols implemented at the USB layer and the BTPipe layer shouldn’t. There can be layered implementations on top of this that expect systemic timeouts, where appropriate.
The best way to avoid a timeout is to only start a transfer when you know it will complete. That is, ask the device “do you have 1,000 bytes for me to read?” If the answer is yes, then read 1,000 bytes. Or, ask the device “How many bytes do you have for me to read?” And then read the answer.
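A minimal sketch of this poll-then-read pattern. The device object and its `bytes_available()` / `read()` methods are illustrative stand-ins, not any real USB API:

```python
class FakeDevice:
    """Stand-in for a device; holds a buffer of pending bytes."""

    def __init__(self, data: bytes):
        self._buf = data

    def bytes_available(self) -> int:
        # On real hardware this would be a status/control request.
        return len(self._buf)

    def read(self, n: int) -> bytes:
        # Only ever called with n <= bytes_available(),
        # so the transfer always completes -- no timeout possible.
        chunk, self._buf = self._buf[:n], self._buf[n:]
        return chunk


def poll_then_read(dev: FakeDevice) -> bytes:
    # Ask first; start the read only when we know it will finish.
    n = dev.bytes_available()
    return dev.read(n) if n else b""
```

The key design choice is that the read is never speculative: the consumer only requests what the producer has already committed to having.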
This is, actually, exactly how streaming devices operate when the stream is transferred over a block device or a packetized system. Think about how “streaming” a Netflix video works. It’s not like the bits flow like a river from Netflix to your television. The source sends a “stream” of packets (say 1,024 bytes at a time). The packets arrive, are buffered, and “streamed” to the screen (note that even in this case, they still arrive at the screen in packets – one frame at a time).
If the buffer fills at the TV, back pressure is applied via the protocol to Netflix to tell it to slow down. Netflix doesn’t just keep sending you “The Real Housewives” hoping that you’ll eventually catch up. If the buffer empties, then the protocol has some fallback mechanisms.
But here’s where we get into some details… There are layers to these protocols and lots of devices involved. At some point, data “streams” into a memory chip on the TV. But this particular transfer isn’t allowed the convenience of a timeout. It either happens or it doesn’t. The timeout is handled at a higher protocol layer. You can’t split the block transfer to the memory any more than you can split a byte halfway through its transfer through a UART.
How do you make a system that can recover… First, we should note that not all systems are engineered the same. Some don’t need to handle this sort of situation at all. This is why there are so many protocols out there.
But here’s one option… Make your protocol always operate on blocks of 65,536 bytes. Let the first two bytes of each block inform the recipient of how many bytes are valid in the block. The remaining 65,534 are payload. Here’s how this would work…
GreedyConsumer: Give me a block! NOW!
Producer: Ok. Here you go. I filled this with 65,534 bytes. Payload = 65,534 bytes.
GreedyConsumer: Give me a block! NOW!
Producer: Ok. Here you go. I filled this with 1,000 bytes. Payload = 1,000 bytes + 64,534 bytes of garbage. (or 0’s)
GreedyConsumer: Give me a block! NOW!
Producer: Ok. Here you go. I filled this with 0 bytes. Payload = 0 bytes + 65,534 bytes of garbage.
GreedyConsumer: Woah. Did he just time me out? Brutal.
Now the GreedyConsumer tells HER greedy consumer that there was a timeout. But the underlying protocol that Producer and GreedyConsumer have been operating on goes on merrily without crashing.
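The block format above can be sketched as follows. This is an illustration under the stated assumptions (65,536-byte blocks, a two-byte valid-length header); the little-endian byte order and zero padding are my assumptions, since the text leaves those open:

```python
import struct

BLOCK_SIZE = 65536
PAYLOAD_SIZE = BLOCK_SIZE - 2  # 65,534 payload bytes after the 2-byte header


def make_block(payload: bytes) -> bytes:
    """Producer side: pack up to 65,534 payload bytes into a fixed-size block."""
    assert len(payload) <= PAYLOAD_SIZE
    header = struct.pack("<H", len(payload))      # first two bytes = valid count
    padding = bytes(PAYLOAD_SIZE - len(payload))  # zeros, per the "(or 0's)" option
    return header + payload + padding


def read_block(block: bytes) -> bytes:
    """Consumer side: extract only the valid bytes.

    An empty result is the application-level "timeout" -- the block
    transfer itself still completed normally underneath.
    """
    assert len(block) == BLOCK_SIZE
    (n,) = struct.unpack("<H", block[:2])
    return block[2 : 2 + n]
```

Note that every transfer at the lower layer is exactly 65,536 bytes and always succeeds; “nothing to read” is signaled in-band by a zero length, which is what lets the underlying protocol go on merrily without crashing.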