ELEC 432

Background

Sampling

In LabVIEW we have two analog I/O devices available:

  1. Sound card: 16 bits, maximum sample rate of 44.1 kHz.
  2. NI 6251 DAQ card: 16 bits, maximum sample rate of 1.25 MHz.
From this list it appears that the maximum frequency signal we can handle is 625 kHz, which doesn't cover very much of the radio spectrum. But remember that the Nyquist limit specifies the bandwidth that we can digitize, not necessarily its center frequency. For example, if we sample a bandlimited signal centered at 1.2 MHz with a sample rate of 1.0 MHz, it will be aliased to 200 kHz, but no information is lost, and if we are aware of the aliasing we can successfully process the signal. The process of deliberately aliasing a signal goes by a variety of names, including harmonic sampling, subsampling, and undersampling.
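As a quick sanity check on the example above, the alias frequency can be predicted with a few lines of Python. This is purely illustrative (it is not part of the LabVIEW setup, and alias_frequency is just a helper name): the alias lands at the signal's distance from the nearest integer multiple of the sample rate.

```python
def alias_frequency(f_signal, f_sample):
    """Return the apparent (aliased) frequency after sampling at f_sample."""
    k = round(f_signal / f_sample)      # nearest integer multiple of f_sample
    return abs(f_signal - k * f_sample)

# The example from the text: a 1.2 MHz signal sampled at 1.0 MHz
print(alias_frequency(1.2e6, 1.0e6))    # -> 200000.0, i.e. 200 kHz
```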

The fastest A/D converter we will use has a maximum sample rate of 1.25 Msps and no input filter, so our options are:

  1. Low pass filter the antenna signal and restrict ourselves to frequencies below about 500 kHz.
  2. Bandpass filter the antenna signal and use harmonic sampling.
  3. Ignore analog input filtering and see what happens.
Question 1:
In option 1 above, why would we need to limit the maximum frequency to 500 kHz rather than 625 kHz?
Question 2:
There's bound to be a limit to how high a frequency we can harmonically sample. Suggest one or two phenomena that might determine this limit. While in the lab, determine this value for our setup and give your result.

Option 1 leaves us with nothing very interesting to listen to (at least with the components we currently have) so we will investigate 2 and 3.

Buffering and Blocks

Writing DSP code for a DSP chip requires attention to a number of details, such as task scheduling and interrupt handling, but at least on a DSP chip those details are under our control. On the PC, not only are those details hidden (or even locked away) from us, but we have to contend with competition from other, non-DSP tasks, unpredictable interrupts from processes we didn't create, and a heavyweight operating system. The good news is that, with care, we can achieve satisfactory performance while utilizing up to around 90% of the processor capacity.

The key to success in this situation is adequate buffering. If we provide input buffers to store input samples while we are blocked from processing, and maintain a sufficiently long output queue, the flow of data samples in and out can proceed uninterrupted even if the execution of our process is erratic.
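The idea can be sketched in a few lines of Python (this is a conceptual illustration, not LabVIEW code; the buffer size and block count are arbitrary). A bounded FIFO queue decouples a steady producer (the A/D driver filling buffers) from a consumer (our processing) whose execution timing we don't control: as long as the buffer never overflows, no samples are lost and order is preserved.

```python
import queue
import threading

BLOCKS = 100
buf = queue.Queue(maxsize=16)   # bounded input buffer between driver and DSP code

def producer():
    """Stand-in for the A/D driver delivering numbered sample blocks."""
    for n in range(BLOCKS):
        buf.put(n)              # blocks only if the consumer has fallen 16 behind
    buf.put(None)               # sentinel: no more data

received = []

def consumer():
    """Stand-in for our (possibly erratically scheduled) processing loop."""
    while True:
        item = buf.get()
        if item is None:
            break
        received.append(item)   # "processing" happens here

t = threading.Thread(target=producer)
t.start()
consumer()
t.join()
assert received == list(range(BLOCKS))   # nothing lost, order preserved
```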

Buffered I/O typically provides and accepts data in blocks, and structuring our processing around a stream of blocks rather than a stream of individual samples can provide a dramatic improvement in performance. The overhead of passing data from one processing stage to another is approximately the same regardless of the block size, so working with large blocks rather than individual samples is significantly more efficient.
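To make the overhead argument concrete, here is an illustrative Python sketch (the names and the gain operation are invented for the example). Both versions produce identical output, but the per-block version makes one call per block instead of one per sample, amortizing the fixed per-call cost over many samples:

```python
GAIN = 0.5

def process_sample(x):
    """Apply the gain to a single sample (one call per sample)."""
    return GAIN * x

def process_block(block):
    """Apply the gain to a whole block (one call per block)."""
    return [GAIN * x for x in block]

samples = list(range(1024))

# Sample-at-a-time: 1024 calls across the stage boundary
per_sample = [process_sample(x) for x in samples]

# Block-at-a-time: 4 calls of 256 samples each
per_block = []
for i in range(0, len(samples), 256):
    per_block.extend(process_block(samples[i:i + 256]))

assert per_sample == per_block   # same result, far fewer crossings
```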