Lecture 24: Digital Signal Processing 2

10:10 Recap Digital Signal Processing

1. We are building a system that will process signal samples in real time.

2. The samples are taken from an A/D converter, which converts an analog input, e.g. a microphone signal.

3. The sampled representation of an analog signal is called a discrete-time signal.

4. The Sampling Theorem specifies the conditions under which the analog signal can be reconstructed from the discrete-time version of that signal. The condition is Fmax < Fs/2: the sample frequency must be greater than twice the highest frequency component of the signal before sampling.

5. Violation of the above rule results in a distortion called ALIASING. Aliasing is a non-linear distortion, and cannot be removed by filtering.

6. A Finite Impulse Response (FIR) filter is a DSP algorithm that converts an input sample stream x(n) into an output sample stream y(n). Example:

      y(n) = 1/3 * x(n) + 1/3 * x(n-1) + 1/3 * x(n-2)    with x(-1) = x(-2) = 0

The task of selecting digital filter coefficients is the result of a design effort called DIGITAL FILTER DESIGN. We will not discuss filter design in this class; coefficients will always be given to you, or there will be an explanation of how to compute them.

The filter has only a single input, x(n). The other values, x(n-1) and x(n-2), are produced by remembering old values of x(n). Hence you get the following C function:

   double movingAverage(double x) {
     double y;
     // filter state and initialization
     static double xn1 = 0.0, xn2 = 0.0;
     // filter
     y = x / 3. + xn1 / 3. + xn2 / 3.;
     // state update
     xn2 = xn1;
     xn1 = x;
     return y;
   }

10:15 Infinite Impulse Response Filters

An infinite impulse response filter has a recurrence equation that includes the output y(n).
For example:

      y(n) = 1/4 * y(n-1) + 1/3 * x(n) + 1/3 * x(n-1) + 1/3 * x(n-2)    with x(-1) = x(-2) = y(-1) = 0

We observe:
- The filter now has to store three samples: x(n-1), x(n-2) and y(n-1).
- A non-zero input sample x(n) will have an effect on every future output, assuming that the filter can be computed with infinite precision. For this reason, we call this an INFINITE IMPULSE RESPONSE filter, or IIR.

However, the same design principle as for FIR filters applies. Hence, the following function shows how to implement this IIR filter:

   double movingAverageIIR(double x) {
     double y;
     // filter state and initialization
     static double xn1 = 0.0, xn2 = 0.0, yn1 = 0.0;
     // filter
     y = yn1 / 4. + x / 3. + xn1 / 3. + xn2 / 3.;
     // state update
     xn2 = xn1;
     xn1 = x;
     yn1 = y;
     return y;
   }

Unlike FIR filters, IIR filters have feedback from the output to the input. Hence, IIR filters can be unstable (positive feedback). Instability in an FIR filter is impossible.

10:20 Floating Data Types in Microcontrollers

In the discrete-time signals we have considered so far, the signal values are floating point. That is, they can have an arbitrary precision. Computing with such data requires floating-point data types. In C, the floating-point data types are:

   float        32 bit
   double       64 bit
   long double  at least 64 bit (platform-dependent; often 80 or 128 bit)

Computing with floating-point numbers is expensive:
- Need to align operands before every computation
- Need separate operations for exponent and mantissa
- Need to renormalize after every computation

There are two ways to work with floating-point data types:
1. Use a software library that implements floating-point arithmetic. This solution is called EMULATION.
2. Use dedicated floating-point hardware, if it is available.

Floating-point hardware is power-hungry. For example, the FPU in our Cortex-M4 CPU is turned off by default. To use hardware floating-point operations, you have to enable it first. Floating-point hardware is also expensive. Very simple processors do not include floating-point hardware.
For such processors, the only way to work with double and float is to use emulation, which is very slow. For these reasons, floating-point computations are usually avoided on microcontrollers. So, in many DSP systems, discrete-time signals are not only discretized in time (samples), they are also discretized in amplitude (into integers). We will discuss one well-known strategy: fixed-point arithmetic.

10:25 Fixed Point Arithmetic

A fixed-point number <W,L> is a bit vector of W bits, of which L are fractional bits. Signed integers are a subset of fixed-point numbers: a 32-bit signed integer can be thought of as a <32,0> number.

Example: <10,4> data type

      <-------------------- W = 10 ------------------->
                                    <----- L = 4 ----->
      +----+----+----+----+----+----+----+----+----+----+
      |2^5 |2^4 |2^3 |2^2 |2^1 |2^0 |2^-1|2^-2|2^-3|2^-4|
      +----+----+----+----+----+----+----+----+----+----+
                                    ^
                              binary point

Fixed-point numbers of length W smaller than or equal to 32 bits can be implemented as integer data types.

(a) Fractional constant

To map a fractional constant v into a fixed-point number implemented as an integer k, we compute

      k = (int) (v * (1 << L));

(A fractional v cannot be shifted directly in C; multiplying by (1 << L), i.e. by 2^L, achieves the scaling.) For example, in a <32,4> data type, the constant 0.25 is represented as the integer k = 0.25 * 16 = 4.

(b) Value of a fixed-point number

Similarly, we can compute the value v of a fixed-point number represented in an integer k as follows:

      v = (double) k / (1 << L);

For example, in a <32,8> data type, the integer 15 actually represents the value v = 15 / 256 = 0.05859375.

10:35 Operations using fixed-point numbers

The advantage of fixed-point numbers is that the operations on fixed-point numbers are integer operations, and hence they are efficient to implement. Fixed-point numbers do not require the normalization logic or exponent-computation logic found in floating-point hardware. However, fixed-point operations have to handle the binary point at position L, especially when an operation affects it. The rules are as follows.
1. The addition of two fixed-point numbers yields a fixed-point number of the same format:

      <W,L> = <W,L> + <W,L>

2. The multiplication of two fixed-point numbers creates a fixed-point number of twice the size:

      <2W,2L> = <W,L> * <W,L>

3. To convert a fixed-point number <W,L1> into <W,L2>, where L1 > L2, use a shift operation:

      <W,L2> = <W,L1> >> (L1 - L2)

4. Applying rule 3 to the result of a multiplication, we can produce a <W,L> result from <W,L> operands as follows:

      <W,L> = (<W,L> * <W,L>) >> L

Note that this formula assumes that no overflow occurs, or in other words, that there are no significant bits in the result between positions W+1 and 2W. This behavior corresponds to what we are used to in integer arithmetic.

10:45 Example I.

Let's compute 0.25 * 0.35 + 1.15 using <32,8> arithmetic.

   #define FRACBITS 8
   int a = (int) (0.25 * (1 << FRACBITS));
   int b = (int) (0.35 * (1 << FRACBITS));
   int c = (int) (1.15 * (1 << FRACBITS));
   int tmp1, tmp2;
   tmp1 = (a * b) >> FRACBITS;
   tmp2 = tmp1 + c;

This code produces the following integer values:

   a      64
   b      89
   c     294
   tmp1   22
   tmp2  316

The resulting value, tmp2, corresponds to the value 316 / (1 << FRACBITS) = 1.234375. Note that the floating-point result of the expression 0.25 * 0.35 + 1.15 is 1.2375. There is some precision loss in the fixed-point computation, because we do not keep all the fractional bits required to store the result of 0.25 * 0.35. This precision loss is called the quantization error.

In fixed-point computations, we try to minimize the quantization error. At the same time, we try to avoid overflow. For example, a fixed-point data type with all value bits fractional, such as a signed <32,31>, cannot represent values bigger than 0.99... or smaller than -1.

10:50 Example II.

Let's design a fixed-point version of the moving average filter. Original design:

   double movingAverage(double x) {
     double y;
     // filter state and initialization
     static double xn1 = 0.0, xn2 = 0.0;
     // filter
     y = x / 3. + xn1 / 3. + xn2 / 3.;
     // state update
     xn2 = xn1;
     xn1 = x;
     return y;
   }

We will work with a <32,8> data type.
Hence, the constant 1/3 is represented by the integer value 256/3 = 85, and the fixed-point version of the filter is the following:

   int movingAverage(int x) {
     int y;
     // filter state and initialization
     static int xn1 = 0, xn2 = 0;
     // filter
     y = ((85 * x) >> 8) + ((85 * xn1) >> 8) + ((85 * xn2) >> 8);
     // state update
     xn2 = xn1;
     xn1 = x;
     return y;
   }

Note the parentheses around each shift: in C, + binds more tightly than >>, so writing 85 * x >> 8 + ... without parentheses would compute the wrong result.

In other words, the filter is almost exactly the same, after replacing double with int and scaling the coefficients!

10:55 Demonstration of the msp432-recorder-demo example

We will discuss the operation of this example in detail on Friday.