Lecture 23: Digital Signal Processing

10:10 What is digital signal processing?

Digital Signal Processing (DSP) uses digital computations (= algorithms in C) to process sampled-data signals. The sampled-data signals are obtained from the real world through A/D conversion. They are sampled at a given speed (samples/second) and a given precision (bits).

DSP is used in many cases where you would normally use analog components. The advantages of doing the computations digitally are:
1/ There is no component tuning like in analog.
2/ We can use a high precision (many bits), leading to a very high signal-to-noise ratio.
3/ In many cases, it is cheaper to implement. E.g. when run on a small microcontroller, it's just an application in C.

Just to give an idea of the signal-to-noise ratio: a digital signal of n-bit precision can approximate the analog value to within 1 LSB - there is a 1 LSB 'quantization error'. That manifests itself as noise, 'quantization noise'. The dynamic range (the distance between the noise level and the strongest signal), in dB, is

    20 log10(2^n) + 1.76  ~  6.02 . n + 1.76

(the constant 1.76 dB term holds for a full-scale sine wave). So every extra bit in a digital signal improves the SNR, the relative level of the signal to the noise, by 6 dB. For 16-bit audio (CD quality), we get 98 dB. In comparison, the dynamic range of the human ear is 0 to 130 dB (pain threshold).

DSP does not solve everything. Very-high-bandwidth signals can easily exceed the conversion speed of the A/D, or the computation speed of the DSP computing hardware. In those cases, analog components must be used.

Some applications of DSP include:
- Digital filtering for audio and for sensing
- Image processing
- Digital communications
- Compression of audio and video
- Speech processing and recognition
- Radar, sonar, seismology

10:15 Structure of a DSP system

The general structure of a DSP system is as follows:

    +-------+     +-------+     +-------+
    |       |     |  DSP  |     |       |
    |  A/D  |---->| Algo  |---->|  D/A  |
    |       |     | rithm |     |       |
    +-------+     +-------+     +-------+
        |                           |
        Fs                          Fs

The sample frequency of the input and the output for a single-rate DSP system is the same, and equal to Fs. For example, Fs = 8 kHz. This means that the A/D must deliver a new sample every 125 µs.

- The A/D conversion clock would typically be much faster. The A/D triggering must happen at _exactly_ 125 µs intervals. Hence, the A/D conversions must be precisely controlled in time. Failure to do so results in signal distortion.
- The DSP algorithm runs on a processor that is much faster than the sample rate. The processing of signal samples is a real-time requirement: every 125 µs, a new sample must be taken from the A/D. The number of clock cycles available to process one sample at a rate of Fs equals

      II = Fclk / Fs

  where II stands for Initiation Interval. For example, at Fclk = 3 MHz and Fs = 8 kHz, we have 375 processing cycles per sample. Hence, a DSP algorithm must complete on this processor in less than 375 cycles. This sets an upper bound on the complexity of the DSP algorithm that can be supported on this processor.

10:20 Discrete-time signals

Signals processed in DSP algorithms are not continuous-time but discrete-time. They can be represented as a discrete stream of samples x(n). Consider an analog signal such as

    x(t) = A cos(2.pi.F.t)

Then, after sampling with a period T = 1/Fs, we obtain a discrete sequence

    x(nT) = A cos(2.pi.F.n.T) = A cos(2.pi.(F/Fs).n)

The ratio F/Fs is called the relative or normalized frequency.

(graph of x(nT))
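As a quick numerical illustration of the normalized frequency (this sketch is not part of the lecture notes; the amplitude, signal frequency and sample rate are arbitrary example values), the sample stream x(n) can be generated directly in C:

    #include <stdio.h>
    #include <math.h>

    #define A_AMP  1.0      /* amplitude (example value)              */
    #define F_SIG  1000.0   /* signal frequency F in Hz (example)     */
    #define F_S    8000.0   /* sample rate Fs in Hz (example)         */

    int main(void)
    {
        const double PI = 3.14159265358979323846;

        /* x(n) = A * cos(2.pi.(F/Fs).n); F/Fs is the normalized frequency */
        for (int n = 0; n < 16; n++) {
            double xn = A_AMP * cos(2.0 * PI * (F_SIG / F_S) * n);
            printf("x(%2d) = %8.5f\n", n, xn);
        }
        return 0;
    }

With F = 1000 Hz and Fs = 8000 Hz, the normalized frequency is 1/8, so the printed sequence repeats every 8 samples.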
We can also draw the spectrum of x(t) and x(nT).

(graph of Freq(x(t)))   (graph of Freq(x(nT)))

Once we have converted a signal to a discrete-time signal, the question is whether we can ever reconstruct the original continuous-time signal. If we look at the spectrum of x(nT), we see that we can reconstruct x(t) by throwing out the higher-frequency components. That is, we need to remove every component beyond |Fs/2|.

(graph of reconstruction filter)

There is another way of looking at this problem, and that is to find an answer to the following question: how fast do we have to sample a signal to be sure that we can reconstruct the analog version from the discrete-time version? The answer to that question is the SAMPLING THEOREM. It expresses a relation between F and Fs that guarantees perfect reconstruction:

    Fs > 2.F

Or, in general, for an arbitrary analog signal with frequency content up to Fmax:

    Fs > 2.Fmax

We can even demonstrate how to reconstruct the original signal using a 'brick-wall' lowpass filter.

(time-domain representation of the reconstruction of x(t) from x(nT))

10:40 Finite Impulse Response Filters

A digital filter is an algorithm that computes an output sequence y(n) out of an input sequence x(n). Note that, in this notation, we have left out the period T; we are working in the sampled-data domain, where every operation is normalized by T. Here is an example of a moving-average filter:

    y(n) = 1/3 * x(n) + 1/3 * x(n-1) + 1/3 * x(n-2)    with x(-1) = x(-2) = 0

We observe:
- The filter uses a single input x(n), but also makes use of delayed versions of x(n), namely x(n-1) and x(n-2). These delayed versions are created by storing x(n) and x(n-1) for one sample period. These two signal samples are the filter state.
- The filter uses coefficients to multiply x(n), x(n-1) and x(n-2). In this case, the coefficients act as weights over the three samples.
- A non-zero input sample x(n) will affect the outputs y(n), y(n+1) and y(n+2). Hence, if x(n) is a single impulse, with all zeroes before and after, then we will see three non-zero outputs y(n), y(n+1) and y(n+2). For this reason, we call this a FINITE IMPULSE RESPONSE filter or FIR.

(graph of FIR architecture)

10:45 Infinite Impulse Response Filters

An infinite impulse response filter has a recurrence equation that includes the output y(n). For example:

    y(n) = 1/4 * y(n-1) + 1/3 * x(n) + 1/3 * x(n-1) + 1/3 * x(n-2)    with x(-1) = x(-2) = y(-1) = 0

We observe:
- The filter now has to store three samples: x(n-1), x(n-2) and y(n-1).
- A non-zero input sample x(n) will affect every future output, assuming that the filter can be computed with infinite precision. For this reason, we call this an INFINITE IMPULSE RESPONSE filter or IIR.

The design of FIR and IIR filters that match a given frequency characteristic is called DIGITAL FILTER DESIGN. It is usually done using software. The input to the software is the desired frequency/phase response of the filter, while the output is the set of filter coefficients generated by the software. These coefficients are usually floating-point numbers.
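As an illustration of such a recurrence in code (this sketch is not in the lecture notes; the function name is illustrative), the IIR example above can be written as a C function that keeps x(n-1), x(n-2) and y(n-1) as its state, in the same style as the moving-average implementation shown in the next section:

    /* Sketch of y(n) = 1/4*y(n-1) + 1/3*x(n) + 1/3*x(n-1) + 1/3*x(n-2) */
    double iirExample(double x)
    {
        /* filter state and initialization */
        static double xn1 = 0.0, xn2 = 0.0, yn1 = 0.0;
        double y;

        /* filter */
        y = yn1 / 4. + x / 3. + xn1 / 3. + xn2 / 3.;

        /* state update */
        xn2 = xn1;
        xn1 = x;
        yn1 = y;

        return y;
    }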
And this leads to the next topic.

10:50 Floating Point and Fixed Point Arithmetic

How can we implement digital filters using C? You can see that the implementation of a FIR or an IIR is a fairly straightforward C function.
- You need to create a set of state variables (such as x(n-1), x(n-2) or y(n-1)), which are mapped to static or global variables.
- You also need a mechanism to efficiently do the multiplications with the coefficients and sum up the results.

For example, our moving-average filter from the start can be implemented as follows:

    double movingAverage(double x)
    {
        double y;

        // filter state and initialization
        static double xn1 = 0.0, xn2 = 0.0;

        // filter
        y = x / 3. + xn1 / 3. + xn2 / 3.;

        // state update
        xn2 = xn1;
        xn1 = x;

        return y;
    }

In the cyclic executive, we would use such a filter as follows:

    int s;
    double y;

    while (1) {
        s = getSampleADC();
        y = movingAverage(s * 1.0);
        // further operations with y ...
    }

Now, using floating-point computations on a small microprocessor is not a good idea. Floating-point hardware is power-hungry. For example, the FPU in our Cortex-M4 CPU is turned off by default; to use hardware floating-point operations, you have to enable it first. Floating-point hardware is also expensive. Very simple processors do not include floating-point hardware at all. For such processors, the only way to work with double and float is to make use of a floating-point software library, which is very slow. For these reasons, floating-point computations are usually avoided on microcontrollers.

On small microcontrollers, it is common to compute these FIR and IIR expressions using integer arithmetic. We will discuss a specific representation for fractional data called fixed-point arithmetic.

Summary until here:
- Structure of a digital signal processing system
- Discrete-time signals
- Sampling theorem
- Finite Impulse Response Filters
- Infinite Impulse Response Filters
- Fixed-Point Arithmetic

---===--- The following may spill into Wednesday's lecture ---===---

11:00 Fixed Point Arithmetic

A fixed-point number is a bit vector of W bits, of which L are fractional bits; we write its format as <W,L>. Signed integers are a subset of fixed-point numbers: a signed integer can be thought of as a <32,0> number. Fixed-point numbers of length W smaller than or equal to 32 bits can be implemented as integer data types.

(a) Fractional constant -> fixed-point number
To map a fractional constant v into a fixed-point number implemented as an integer k, we compute k = v * 2^L, in C:

    k = (int) (v * (1 << L));

For example, in a <32,4> data type, the constant 0.25 would be represented as the integer k = 0.25 * 2^4 = 4.

(b) Value of a fixed-point number
Similarly, we can compute the value v of a fixed-point number represented in an integer k as v = k / 2^L, in C:

    v = k / (double) (1 << L);

For example, in a <32,8> data type, the integer 15 actually represents the value v = 15 / 256 = 0.05859375.

10:50 Operations using fixed-point numbers

The advantage of fixed-point numbers is that the operations on fixed-point numbers are integer operations, and hence they are efficient to implement. Fixed-point numbers do not need normalization logic or exponent-computation logic such as is found in floating-point hardware. However, fixed-point operations have to handle the binary point at position L, especially if an operation would affect it. The rules are as follows.

1. The addition of two fixed-point numbers with the same format yields a fixed-point number with that format:
       <W,L> = <W,L> + <W,L>
2. The multiplication of two fixed-point numbers creates a fixed-point number of twice the size:
       <2W,2L> = <W,L> * <W,L>
3. To convert a fixed-point number <W,L1> into <W,L2>, where L1 > L2, use a shift operation:
       <W,L2> = (<W,L1>) >> (L1 - L2)
4. To apply rule 3 to the result of a multiplication, so that we produce a <W,L> result from <W,L> operands, we compute:
       <W,L> = (<W,L> * <W,L>) >> L
   Note that this formula assumes that no overflow will occur, or in other words, that there are no significant bits in the result between position W+1 and 2W. This behavior corresponds to what we are used to for integer arithmetic.
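As a small sketch of rule 4 in C (not from the lecture notes; the helper name fixmul is illustrative), the multiply-and-shift can be packaged in a helper that uses a 64-bit intermediate, so that the full <2W,2L> product is available before the shift:

    /* Multiply two <32,L> fixed-point numbers and return a <32,L> result (rule 4).
       The (long long) cast keeps the full <64,2L> intermediate product,
       so no significant bits are lost before shifting back down by L.   */
    int fixmul(int a, int b, int L)
    {
        return (int) (((long long) a * b) >> L);
    }

On most 32-bit processors this is a single widening multiply plus a shift; the right shift of a signed value is assumed here to behave as an arithmetic shift, which is the common case.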
Example. Let's compute 0.25 * 0.35 + 1.15 using <32,8> arithmetic.

    #define FRACBITS 8

    int a = (int) (0.25 * (1 << FRACBITS));
    int b = (int) (0.35 * (1 << FRACBITS));
    int c = (int) (1.15 * (1 << FRACBITS));
    int tmp1, tmp2;

    tmp1 = (a * b) >> FRACBITS;
    tmp2 = tmp1 + c;

This code produces the following integer values:

    a      64
    b      89
    c     294
    tmp1   22
    tmp2  316

The resulting value, tmp2, corresponds to the value 316 / (1 << FRACBITS) = 1.234375. Note that the floating-point result of the expression (0.25 * 0.35 + 1.15) is 1.2375. There is some precision loss in the fixed-point computation, because we do not keep all fractional bits required to store the result of 0.25 * 0.35. This precision loss is called the quantization error.

In fixed-point computations, we try to minimize the quantization error. At the same time, we try to avoid overflow. For example, a fixed-point data type in which all bits after the sign bit are fractional, such as <W,W-1>, cannot represent values bigger than 0.99... or smaller than -1.
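To tie this back to the filter example, here is a sketch (not from the lecture notes; the function name and the <32,8> format choice are illustrative) of the moving-average filter rewritten in fixed-point arithmetic. The coefficient 1/3 becomes the integer 85, which already introduces a small quantization error on its own:

    #define FRACBITS  8                                        /* <32,8> format        */
    #define ONE_THIRD ((int) ((1.0 / 3.0) * (1 << FRACBITS)))  /* 1/3 -> 85 in <32,8>  */

    /* Moving-average filter y(n) = 1/3*x(n) + 1/3*x(n-1) + 1/3*x(n-2),
       computed entirely with <32,8> fixed-point arithmetic.            */
    int movingAverageFixed(int x)   /* x and the return value are <32,8> numbers */
    {
        /* filter state and initialization */
        static int xn1 = 0, xn2 = 0;
        int y;

        /* filter: each product is <64,16>; shifting by L brings it back to <32,8> (rule 4) */
        y = (int) (((long long) ONE_THIRD * x)   >> FRACBITS)
          + (int) (((long long) ONE_THIRD * xn1) >> FRACBITS)
          + (int) (((long long) ONE_THIRD * xn2) >> FRACBITS);

        /* state update */
        xn2 = xn1;
        xn1 = x;

        return y;
    }

Compared to the floating-point movingAverage function, the output now deviates slightly, because 85/256 ~ 0.332 rather than exactly 1/3 - the same kind of quantization error as in the worked example above.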