In LabVIEW we have two analog I/O devices available:
The fastest A/D converter we will use has a maximum sample rate of 1.25 Msps and no input filter, so we have the option of:
Option 1 leaves us with nothing very interesting to listen to (at least with the components we currently have), so we will investigate options 2 and 3.
Writing DSP code for a DSP chip requires attention to a number of details, such as task scheduling and interrupt handling, but at least on a DSP chip those details are under our control. On the PC, not only are those details hidden (or even locked away) from us, but we also have to contend with competition from other, non-DSP tasks, unpredictable interrupts from processes we didn't create, and a heavyweight operating system. The good news is that, with care, we can achieve satisfactory performance while utilizing up to roughly 90% of the processor's capacity.
The key to success in this situation is adequate buffering. If we can buffer incoming samples while our process is blocked, and maintain a sufficiently long output queue, the flow of data samples in and out can proceed uninterrupted even if the execution of our process is erratic.
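To make the sizing concrete, here is a minimal sketch using the 1.25 Msps figure from above. The 10 ms worst-case stall is an assumption chosen for illustration (a plausible scheduling gap on a desktop OS), and the `SampleFIFO` class is a hypothetical stand-in for whatever buffered driver the hardware actually provides:

```python
from collections import deque

# Rate from the text: the fastest A/D converter runs at 1.25 Msps.
SAMPLE_RATE = 1.25e6          # samples per second
WORST_CASE_STALL = 10e-3      # assumed worst-case scheduling gap: 10 ms

# Minimum input buffer depth needed to ride out that stall
# without dropping any samples.
min_depth = int(SAMPLE_RATE * WORST_CASE_STALL)   # 12,500 samples

class SampleFIFO:
    """Bounded FIFO decoupling the A/D driver (producer) from our DSP
    loop (consumer). In a real system the driver would fill this from
    an interrupt or DMA callback; this is only a single-threaded sketch."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def push(self, sample):
        # An overrun here means the buffer was sized too small for the
        # longest gap in the consumer's execution.
        if len(self.queue) >= self.capacity:
            raise OverflowError("overrun: consumer fell too far behind")
        self.queue.append(sample)

    def pop(self):
        # Returns None when the consumer momentarily outruns the producer.
        return self.queue.popleft() if self.queue else None
```

The same arithmetic applies on the output side: the output queue must hold enough already-computed samples to keep the D/A converter fed across the longest stall.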
Buffered I/O typically provides and accepts data in blocks, and structuring our processing around a stream of blocks rather than a stream of individual samples can provide a dramatic improvement in performance. The overhead of passing data from one processing stage to another is approximately the same regardless of the size of the block, so working with large blocks rather than individual samples amortizes that overhead and is significantly more efficient.
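The block-oriented structure can be sketched as follows. The gain stage and block size here are arbitrary illustrations; the point is that per-call overhead is paid once per block of B samples rather than once per sample, cutting the number of stage-to-stage handoffs by a factor of B:

```python
def blocks(samples, block_size):
    """Split a sample stream into fixed-size blocks
    (the final block may be shorter)."""
    for i in range(0, len(samples), block_size):
        yield samples[i:i + block_size]

def apply_gain(block, gain):
    """A trivial processing stage: one call handles an entire block,
    so call overhead is paid per block, not per sample."""
    return [gain * x for x in block]

# 10 input samples processed in blocks of 4: three stage calls
# instead of ten.
stream = list(range(10))
out = []
for blk in blocks(stream, 4):
    out.extend(apply_gain(blk, 2))
# out is [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

In a real block-structured chain, each stage (filter, mixer, decimator) would take a block in and hand a block on, and the block size would be matched to what the buffered I/O driver delivers.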