Multirate Digital Signal Processing

Winser Alexander , Cranos Williams , in Digital Signal Processing, 2017

7.3 Frequency Interpretation of the Downsampler

The frequency domain representation of a downsampler can be derived in a similar fashion. For a downsampling factor M, the Z Transform of the downsampled signal in (7.2) can be written as

(7.7) $Y(z) = \sum_{n=-\infty}^{\infty} y(n) z^{-n} = \sum_{n=-\infty}^{\infty} x(nM) z^{-n}.$

Unfortunately, this representation does not allow $Y(z)$ to be written directly as a function of $X(z)$. An intermediate signal can be generated to link $Y(z)$ and $X(z)$, providing the needed connection between the Z Transforms of the input and output of the downsampler. Define a new signal $x_{\text{int}}(n)$:

(7.8) $x_{\text{int}}(n) = \begin{cases} x(n), & n = 0, \pm M, \pm 2M, \ldots, \\ 0, & \text{otherwise.} \end{cases}$

Since every Mth sample of $x_{\text{int}}(n)$ is equal to every Mth sample of $x(n)$ (i.e., $x_{\text{int}}(nM) = x(nM)$), we proceed from (7.7), yielding

(7.9) $Y(z) = \sum_{n=-\infty}^{\infty} x_{\text{int}}(nM) z^{-n} = \sum_{k=-\infty}^{\infty} x_{\text{int}}(k) z^{-k/M} = X_{\text{int}}(z^{1/M}).$

The signal X int ( z 1 / M ) can be obtained by deriving X int ( z ) in terms of X ( z ) . It is clear that x int ( n ) can be written as

(7.10) $x_{\text{int}}(n) = c(n)\, x(n)$

where

(7.11) $c(n) = \begin{cases} 1, & n = 0, \pm M, \pm 2M, \ldots, \\ 0, & \text{otherwise.} \end{cases}$

The signal c ( n ) can be rewritten as

(7.12) $c(n) = \frac{1}{M} \sum_{k=0}^{M-1} e^{j 2\pi k n / M}$

where this equality can be proven using the geometric series. Thus

(7.13) $X_{\text{int}}(z) = \sum_{n=-\infty}^{\infty} c(n) x(n) z^{-n} = \frac{1}{M} \sum_{n=-\infty}^{\infty} \left( x(n) \sum_{k=0}^{M-1} e^{j 2\pi k n / M} \right) z^{-n} = \frac{1}{M} \sum_{k=0}^{M-1} \sum_{n=-\infty}^{\infty} x(n) \left( z e^{-j 2\pi k / M} \right)^{-n} = \frac{1}{M} \sum_{k=0}^{M-1} X\!\left( z e^{-j 2\pi k / M} \right).$

Substituting X int ( z ) into Eq. (7.9), we get

(7.14) $Y(z) = \frac{1}{M} \sum_{k=0}^{M-1} X\!\left( z^{1/M} e^{-j 2\pi k / M} \right)$

with a frequency response of

(7.15) $Y(e^{j\omega}) = \frac{1}{M} \sum_{k=0}^{M-1} X\!\left( e^{j\omega/M} e^{-j 2\pi k / M} \right) = \frac{1}{M} \sum_{k=0}^{M-1} X\!\left( e^{j(\omega - 2\pi k)/M} \right),$

or

(7.16) $Y(\omega) = \frac{1}{M} \sum_{k=0}^{M-1} X\!\left( \frac{\omega - 2\pi k}{M} \right).$

To get an idea of what is going on conceptually, let us look at the case when M = 2 . This gives a frequency response of

(7.17) $Y(\omega) = \frac{1}{2} \left( X\!\left( \frac{\omega}{2} \right) + X\!\left( \frac{\omega - 2\pi}{2} \right) \right).$

$X(\omega/2)$ is just an expanded version of $X(\omega)$, and $X((\omega - 2\pi)/2)$ is the same expanded version shifted by 2π. These two signals will not overlap (alias) only if $X(\omega)$ is bandlimited to $\pm \pi/2$, i.e., $X(\omega) = 0$ for $|\omega| \ge \pi/2$. In general, aliasing due to a factor of M downsampling is absent if and only if the signal $x(n)$ is bandlimited to $\pm \pi/M$. As was demonstrated conceptually above, if the original signal $x(n)$ possessed frequency content outside of $\pm \pi/2$, aliasing would occur. The process of filtering a signal to bandlimit it to the range $\pm \pi/M$ and then downsampling by a factor of M is known as decimation. In the following section, we discuss the processes of interpolation and decimation in greater detail.
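The identity in (7.16) can be checked numerically. The following Python/NumPy sketch (not from the text; the test signal and frequency are arbitrary choices) evaluates the DTFT of a finite sequence and of its factor-of-2 downsampled version and confirms the M = 2 relation of Eq. (7.17):

```python
import numpy as np

def dtft(x, w):
    """Evaluate the DTFT of a finite sequence x at radian frequency w."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * w * n))

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # arbitrary test signal, zero outside 0..63
M = 2
y = x[::M]                    # downsample: y(n) = x(2n)

w = 0.7                       # any radian frequency
lhs = dtft(y, w)
rhs = 0.5 * (dtft(x, w / 2) + dtft(x, (w - 2 * np.pi) / 2))
print(np.allclose(lhs, rhs))  # True
```

The shifted term cancels the odd-indexed samples and doubles the even-indexed ones, which is exactly why the two-term average reproduces the downsampled spectrum.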

URL: https://www.sciencedirect.com/science/article/pii/B9780128045473000073

Linear Systems Analysis in the Time Domain—Convolution

John Semmlow , in Circuits, Signals and Systems for Bioengineers (Third Edition), 2018

5.3.2 The Impulse Signal in the Frequency Domain

The impulse signal has a very special frequency-domain representation. Again, we show this by example. In the next example, we find the magnitude spectra for two of the pulse signals used in Example 5.1 and the magnitude spectrum of a true discrete impulse signal: a signal that has a value of 1.0 for the first sample and zero everywhere else.

Example 5.2

Find the magnitude spectra of two pulse signals having widths of 5 and 2   ms and a true discrete impulse signal. The true impulse signal should have a value of 1.0 for the first sample and zeros everywhere else. Assume a sample rate of 1   kHz as in the last example, but make the signal length 1000 samples. Plot the spectra superimposed, but scale the maximum value of each magnitude spectrum to 1.0 for easy comparison. Label the three spectra. As always, plot only the valid spectral points and label the plots.

Solution: We can borrow the same code used in Example 5.1. Since f s = 1 kHz, the pulse width vector has only three entries of 5, 2, and 1, representing the three pulse widths in milliseconds. Instead of using the pulse signals as inputs to an unknown system, we find their magnitude spectra using the fft routine. We normalize each magnitude spectrum to have a maximum value of 1.0, then plot the three spectra superimposed.

% Example 5.2 Find the magnitude spectra of two pulses having a width of
%   5 and 2 msec and a true discrete impulse signal.
%
PW = [5, 2, 1];            % Pulse widths in msec
fs = 1000;                 % Sample frequency (given)
N = 1000;                  % Data length (given)
N2 = 500;                  % Valid spectral points
f = (1:N)*fs/N;            % Frequency vector for plotting
%
for k = 1:3                % Do 3 spectral plots
    x = [ones(1,PW(k)) zeros(1,N-PW(k))];   % Generate pulse signal
    X = abs(fft(x));       % Find magnitude spectrum
    X = X/max(X);          % Normalize the spectrum
    plot(f(1:N2),X(1:N2),'k'); hold on;     % Plot pulse spectra
    % ......label spectral curves.......
end

Results: The three spectral curves are shown in Figure 5.4. As shown in Example 3.3, the magnitude spectrum of a pulse signal has a shape given by |sin(x)/x| (see Figure 3.12). This is seen for the 2- and 5-ms pulses, where the shorter pulse produces a spectrum that falls off less rapidly with increasing frequency, but still goes to zero at the Nyquist frequency, f s/2. The true impulse has a much different magnitude spectrum: it is a constant value across all frequencies between 0 and f s/2 Hz. Its phase spectrum is also a constant. As shown in one of the problems, the phase angle is 0 degrees over the frequency range of 0 to f s/2 Hz.

Figure 5.4. The magnitude spectra of three pulse signals. Since f s = 1 kHz, the 1.0-ms pulse width signal is a true discrete impulse signal. Although the spectra of the longer signals decrease with frequency (as |sin(x)/x|, see Example 3.3), the impulse signal's spectrum is a constant over all valid frequencies, 0 to f s/2.

The spectrum of the true impulse is quite different and rather remarkable. It contains an equal amount of energy at all the valid frequencies in the signal, a property that can be very useful in exploring the frequency characteristics of a system. Just as a signal can have a spectrum, so can a system. 2 A system's spectrum shows how that system attenuates or enhances an input signal as a function of frequency. The impulse response can be used to find a system's spectrum. Here is the rationale: if the input signal in the frequency domain is a constant across all frequencies, then the output spectrum shows how the system modifies signals as a function of frequency. In other words, if the impulse has a constant spectrum, the spectrum of the impulse response must be identical to the spectrum of the system. So to find the spectrum of a system, we only need to convert the impulse response to the frequency domain using the Fourier transform.
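This rationale can be sketched numerically. The Python/NumPy fragment below (an illustration, not the text's code) uses a hypothetical 25-point moving-average filter as a stand-in "unknown system": the impulse has a flat magnitude spectrum, and the spectrum of the system's impulse response matches the system's own frequency response:

```python
import numpy as np

fs = 1000
N = 1000
impulse = np.zeros(N)
impulse[0] = 1.0

# The true impulse has a flat magnitude spectrum at all frequencies
print(np.allclose(np.abs(np.fft.fft(impulse)), 1.0))  # True

# Hypothetical "unknown system": a 25-point moving-average (low-pass) filter
h = np.ones(25) / 25                 # its impulse response
y = np.convolve(impulse, h)[:N]      # system output for an impulse input

# The spectrum of the impulse response IS the system's spectrum
H_sys = np.abs(np.fft.fft(h, N))
Y = np.abs(np.fft.fft(y))
print(np.allclose(Y, H_sys))         # True
```

Because the impulse contributes a constant spectrum at the input, anything seen at the output spectrum is due to the system alone.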

We can use the impulse signal to find the frequency characteristics of the unknown system used in Example 5.1. When the input is effectively an impulse, the spectrum of the output is shown in Figure 5.5 (solid line). (Note the expanded frequency scale, ranging between 0 and 15 Hz.) The system's response is seen to decrease for frequencies above 2.0 Hz. This system, whatever its real purpose, acts like a low-pass filter. It can respond to signals having low-frequency energy, but for input signals much above 5 Hz there is little response. The dashed line in Figure 5.5 shows the magnitude spectrum of a 5.0-ms pulse to be almost constant over this limited frequency range. That is why it acts like an impulse in Example 5.1. For the limited range of frequencies to which the unknown system is capable of responding, the 5.0-ms pulse signal looks like an impulse.

Figure 5.5. The magnitude spectrum of the unknown system from Example 5.1. The response of this system decreases rapidly for frequencies above 2.0   Hz. In the frequency domain it looks like a low-pass filter, although its actual function is unknown. The dashed line shows the magnitude spectrum of a 5.0-ms pulse signal, which appears to be constant over this limited frequency range. This explains the finding in Example 5.1 that such a pulse can serve as an impulse signal.

This illustrates another way to determine whether a short pulse can be considered an impulse if you know the frequency characteristics of the system: just compare the system's spectrum with that of the short pulse. If the pulse spectrum is more or less flat over the range of the system's nonzero spectrum, the pulse can be considered an impulse. This idea is pursued in one of the problems. In the next two chapters, we become deeply involved in the frequency characteristics of systems, and we will return to the impulse input with its unique spectral properties.

URL: https://www.sciencedirect.com/science/article/pii/B9780128093955000059

Accurate classification of heart sounds for disease diagnosis by using spectral analysis and deep learning methods

Pratima Upretee , Mehmet Emin Yüksel , in Data Analytics in Biomedical Engineering and Healthcare, 2021

The discrete Fourier transform method

The DFT is a method to obtain a frequency domain representation of a finite sequence acquired by sampling a continuous function at regular intervals. It is widely used to investigate the spectral content of a finite length discrete-time signal sequence, which corresponds to a heart sound signal frame in this work. The mathematical expression for calculation of DFT is as follows:

$X_k = \sum_{n=0}^{N-1} x_n\, e^{-j \frac{2\pi}{N} n k}$

where n denotes the sample number of the signal samples within the heart sound signal frame, k denotes the frequency bin number of the discrete frequency-domain signal, and N denotes the total number of signal samples within the frame.
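A direct (if slow) implementation of this sum can be checked against an optimized FFT. The snippet below is an illustrative Python/NumPy sketch; the random "frame" merely stands in for a heart sound signal frame:

```python
import numpy as np

def dft(x):
    """Direct evaluation of X_k = sum_n x_n * exp(-j*2*pi*n*k/N)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)                 # one row of the sum per frequency bin
    return (x * np.exp(-2j * np.pi * n * k / N)).sum(axis=1)

# Stand-in for a heart sound signal frame (illustrative only)
rng = np.random.default_rng(1)
frame = rng.standard_normal(128)
print(np.allclose(dft(frame), np.fft.fft(frame)))  # True
```

The direct sum costs O(N²) operations versus the FFT's O(N log N), which is why the FFT is used in practice.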

URL: https://www.sciencedirect.com/science/article/pii/B9780128193143000148

System Modeling

Patricia Mellodge , in A Practical Approach to Dynamical Systems for Engineers, 2016

2.3.1 Overview

So far we have dealt exclusively with systems in the time domain. We have used differential equations and difference equations to mathematically represent how a system behaves, and we have plotted variables versus time and generated phase plots. However, there is another way to mathematically represent systems that is a bit more abstract but holds much information.

A transfer function (or system function) is a frequency domain representation of a dynamical system. Before going further, let us first state three assumptions that we will use when discussing transfer functions.

1.

Transfer functions are used for linear time-invariant systems. Nonlinear or time-varying systems need different analysis techniques.

2.

Transfer functions assume the system is initially at rest (zero initial conditions). An example of trying to use transfer functions with nonzero initial conditions (and the associated difficulties) will be given.

3.

Transfer functions describe behavior between a single input and a single output. Multi-input and multi-output systems have more than one transfer function to describe the various input–output relationships.

Simply stated, a transfer function of a continuous-time system is defined by

(2.25) $H(s) = \frac{Y(s)}{X(s)}$

where X(s) and Y(s) are the Laplace transforms of the system input x(t) and output y(t), respectively.

For a discrete-time system, the transfer function is defined by

(2.26) $H(z) = \frac{Y(z)}{X(z)}$

where X(z) and Y(z) are the z-transforms of the system input x[n] and output y[n], respectively.

We will not discuss Laplace and z-transform theory in detail, nor will we derive many of the relationships and characteristics. In the following sections, we will discuss what the transforms and transfer functions are and how they are used, and we will apply them to various examples.
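As a minimal illustration (not from the text), consider a hypothetical first-order discrete-time system y[n] = 0.5 y[n−1] + x[n]. Taking z-transforms with zero initial conditions gives H(z) = Y(z)/X(z) = 1/(1 − 0.5 z⁻¹), whose geometric-series expansion predicts the impulse response h[n] = 0.5ⁿ. The sketch below verifies this by direct simulation:

```python
import numpy as np

# Hypothetical first-order system: y[n] = 0.5*y[n-1] + x[n]
# Transfer function (zero initial conditions): H(z) = 1 / (1 - 0.5*z^-1)
def simulate(x):
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = 0.5 * (y[n - 1] if n > 0 else 0.0) + x[n]
    return y

x = np.zeros(20)
x[0] = 1.0                  # impulse input, system initially at rest
y = simulate(x)

# H(z) = sum_n (0.5*z^-1)^n predicts the impulse response h[n] = 0.5**n
print(np.allclose(y, 0.5 ** np.arange(20)))  # True
```

Note how the zero-initial-condition assumption (assumption 2 above) is built into the simulation; a nonzero starting state would break the simple ratio Y(z)/X(z).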

URL: https://www.sciencedirect.com/science/article/pii/B9780081002025000024

Signal Analysis in the Frequency Domain

John Semmlow , in Circuits, Signals and Systems for Bioengineers (Third Edition), 2018

3.3.4 Finding the Fourier Coefficients

The Fourier series equation, Equation 3.3, can give us the frequency domain representation of any signal, x(t). For each sinusoidal term (i.e., each value of harmonic number m), the coefficient C m contributes one magnitude frequency point, and the coefficient θ m provides one phase point. 8 So all we have to do is find those coefficients. In principle we could use cross-correlation, but that is difficult to do analytically (and not efficient when we use the computer). Instead we split the sinusoidal term in Equation 3.3 into sine and cosine terms using Equations 2.23 and 2.24. The Fourier series equation of Equation 3.3 then becomes:

(3.5) $x(t) = \frac{a_0}{2} + \sum_{m=1}^{\infty} a_m \cos(2\pi m f_1 t) + \sum_{m=1}^{\infty} b_m \sin(2\pi m f_1 t)$

where $a_0 \equiv C_0$ in Equation 3.3, and from Equation 2.24, noting that θ is defined as negative in those equations: $a_m = C_m \cos(-\theta) = C_m \cos(\theta)$ and $b_m = C_m \sin(-\theta) = -C_m \sin(\theta)$. In Equation 3.5 the a and b coefficients are the amplitudes of the cosine and sine components of x(t). The Fourier series equation using a and b coefficients is referred to as the "rectangular representation," whereas the equation using C and θ is called the "polar representation." Rectangular-to-polar conversion is done using Equations 2.25 and 2.26, which are repeated here slightly modified:

(3.6) $C_m = \sqrt{a_m^2 + b_m^2}$

(3.7) $\theta_m = \tan^{-1}\!\left( \frac{-b_m}{a_m} \right)$

Again, Equation 2.26 solves for negative θ, but in the Fourier series equations, Equations 3.2 and 3.3, θ is positive. By making b m negative in Equation 3.7, the resulting θ becomes positive.

Using the rectangular representation of the Fourier series (Equation 3.5), we can find the a m and b m coefficients by simple correlation. We apply Equation 2.31, the basic correlation equation in the continuous domain, where y(t) is the sine or cosine term and x(t) is the signal (or vice versa):

(3.8) $a_m = \frac{2}{T} \int_0^T x(t) \cos(2\pi m f_1 t)\, dt \qquad m = 1, 2, 3, \ldots$

(3.9) $b_m = \frac{2}{T} \int_0^T x(t) \sin(2\pi m f_1 t)\, dt \qquad m = 1, 2, 3, \ldots$

These correlations are calculated for each harmonic number, m, to obtain the Fourier series coefficients, the a m 's and b m 's, representing the cosine and sine amplitudes at the associated frequencies, 2πmf 1. (A formal derivation of Equations 3.8 and 3.9 is given in Appendix A-2.) Equations that calculate the Fourier series coefficients from x(t) are termed "analysis equations." Equation 3.5 works in the other direction, generating x(t) from the a's and b's, and is known as the "synthesis equation."

The factor of 2 is used because in the Fourier series, the coefficients, a m and b m , are defined in Equation 3.3 as amplitudes, not correlations. However, there is really no agreement on how to scale the Fourier equations, so you might find these equations with other scalings. MATLAB's approach is avoidance: its routine for determining these coefficients uses no scaling. 9 When using MATLAB to find a m and b m , it is up to you to scale the output as you wish.

Our usual strategy for transforming a continuous signal into the frequency domain is to first calculate the a and b coefficients, then convert them to C and θ coefficients using Equations 3.6 and 3.7. We use this approach because it is easier to work out the correlations using sines and cosines (i.e., Equations 3.8 and 3.9) than cross-correlating against a sinusoidal term. That said, for complicated signals, it can still be quite difficult to solve the Fourier coefficient equations analytically. Fortunately, for all real-world signals, these equations are implemented on a computer.
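The analysis equations lend themselves to direct numerical evaluation. The following Python sketch (an illustration, not the book's code) applies Equations 3.8 and 3.9 to a test signal constructed to have known coefficients a₁ = 3 and b₂ = 0.5:

```python
import numpy as np

T = 1.0
f1 = 1.0 / T
t = np.linspace(0, T, 10000, endpoint=False)
dt = t[1] - t[0]

# Test signal with known coefficients: a0 = 4, a1 = 3, b2 = 0.5
x = 2 + 3 * np.cos(2 * np.pi * f1 * t) + 0.5 * np.sin(4 * np.pi * f1 * t)

def a(m):
    """Eq. 3.8 evaluated as a Riemann sum over one period."""
    return (2 / T) * np.sum(x * np.cos(2 * np.pi * m * f1 * t)) * dt

def b(m):
    """Eq. 3.9 evaluated as a Riemann sum over one period."""
    return (2 / T) * np.sum(x * np.sin(2 * np.pi * m * f1 * t)) * dt

print(round(a(1), 6), round(b(2), 6))  # 3.0 0.5
```

The correlation with cos(2πt) picks out only the 3-amplitude cosine component, and the correlation with sin(4πt) picks out only the 0.5-amplitude sine component, exactly as the orthogonality argument behind Equations 3.8 and 3.9 promises.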

The constant term in Equation 3.5, a 0/2, is the same as C 0/2 in Equation 3.3 and is also known as the "DC term." It accounts for the offset, or bias, in the signal. If the signal has zero mean, as is often the case, then $a_0 = C_0 = 0$. Otherwise, the value of the DC term is just twice the mean:

(3.10) $a_0 = \frac{2}{T} \int_0^T x(t)\, dt$

The reason a 0 and C 0 are calculated as twice the average value is for them to be compatible with the Fourier analysis equations of Equations 3.8 and 3.9, which also involve a factor of 2. To offset this doubling, a 0 and/or C 0 are divided by 2 in the synthesis equations, Equations 3.3 and 3.5 (logical, if a bit confusing).

Sometimes, $2\pi m f_1$ is stated in terms of radians, where $\omega_m = 2\pi m f_1$. Using frequency in radians makes the equations look cleaner, but in engineering practice frequency is measured in hertz. Both are used here. Another way of writing $2\pi m f_1$ is to combine $m f_1$ into a single term, $f_m$, so the sinusoid is written as $C_m \cos(2\pi f_m t + \theta_m)$ or, in terms of radians, as $C_m \cos(\omega_m t + \theta_m)$. So an equivalent representation of Equation 3.3 is:

(3.11) $x(t) = \frac{C_0}{2} + \sum_{m=1}^{\infty} C_m \cos(\omega_m t + \theta_m)$

Equations 3.8 and 3.9 are also sometimes written in terms of a period normalized to 2π. This can be useful when working with a generalized x(t) without the requirement for defining a specific period.

(3.12) $a_m = \frac{1}{\pi} \int_{-\pi}^{\pi} x(t) \cos(m t)\, dt \qquad m = 1, 2, 3, \ldots$

(3.13) $b_m = \frac{1}{\pi} \int_{-\pi}^{\pi} x(t) \sin(m t)\, dt \qquad m = 1, 2, 3, \ldots$

Here we always work with time functions that have a specific period to make our analyses correspond more closely to real-world applications. So Equations 3.12 and 3.13 are for reference only.

To implement the integration and correlation in Equations 3.8 and 3.9, there are a few constraints on x(t). First, x(t) must be capable of being integrated over its period; specifically:

(3.14) $\int_0^T |x(t)|\, dt < \infty$

Unfortunately, this constraint rules out a large class of interesting signals: transient signals, as described in Section 1.4.2 and shown in Figure 1.20. Recall that these are signals that change, rapidly or slowly, but do not repeat and do not return to a baseline in a finite amount of time. Because the change lasts forever, the integral in Equation 3.14 must be taken from 0 to ∞ and is not finite. In some cases, you might be able to recast a transient signal as a periodic signal; this is explored in one of the problems.

The second constraint is that, although x(t) can have discontinuities, those discontinuities must be finite in number and have finite amplitudes. Finally, the number of maxima and minima must also be finite. These three criteria are sometimes referred to as the "Dirichlet conditions" and are met by many real-world signals.

A brief note about terminology: The analysis (Equations 3.8 and 3.9) and their discrete equivalents should be called "Fourier series analysis," but they are often called the "Fourier transform," especially when implemented in the discrete domain on a computer. Technically, "Fourier transform" should be reserved for the analysis of continuous aperiodic signals. Likewise the synthesis equation, Equation 3.5, and its discrete equivalent are usually called the "inverse Fourier transform." This usage is so common that it is pointless to make distinctions between the Fourier transform and Fourier series analysis.

The more sinusoids included in the summations of Equations 3.3 or 3.5, the better the representation of the signal, x(t). For an exact representation of a continuous signal, the summation should be infinite, but in practice the number of sine and cosine components that have meaningful amplitudes is limited. Often only a few sinusoids are required for a decent representation of the signal. Figure 3.5 shows the reconstruction of a square wave by a different number of sinusoids: 3, 9, 18, and 36. The square wave is one of the most difficult waveforms to represent using a sinusoidal series because of the sharp transitions. Figure 3.5 shows that the reconstruction is fairly accurate even when the summation contains only nine sinusoids.

Figure 3.5. Reconstruction of a square wave using 3, 9, 18, and 36 sinusoids. The square wave is one of the most difficult signals to represent with a sinusoidal series. The oscillations seen in the sinusoidal approximations are known as "Gibbs artifacts." They increase in frequency, but do not diminish in amplitude, as more sinusoids are added to the summation.

The square wave reconstructions shown in Figure 3.5 become sharper as more sinusoids are added, but still contain oscillations. These oscillations, termed Gibbs artifacts, occur whenever a finite sinusoidal series is used to represent a signal with a discontinuity. They increase in frequency when more sinusoidal terms are added, so the largest overshoot moves closer to the discontinuity. Gibbs artifacts occur in a number of circumstances that involve truncation. They can be found in MR images when there is a sharp transition in the image and the resonance signal is truncated during data acquisition. They are seen as subtle dark ridges adjacent to high-contrast boundaries, as shown in the MR image of Figure 3.6. We encounter Gibbs artifacts again due to truncation of digital filter coefficients in Chapter 9.
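The square-wave behavior described above is easy to reproduce numerically. The sketch below (illustrative Python, assuming a ±1 square wave built from its odd-harmonic sine series) sums partial Fourier series and reports the Gibbs overshoot, which does not diminish toward zero as more sinusoids are added:

```python
import numpy as np

t = np.linspace(0, 1, 5000, endpoint=False)

def square_partial_sum(n_terms):
    """Partial Fourier series of a +/-1 square wave (odd harmonics only)."""
    x = np.zeros_like(t)
    for m in range(1, 2 * n_terms, 2):          # m = 1, 3, 5, ...
        x += (4 / np.pi) * np.sin(2 * np.pi * m * t) / m
    return x

for n in (3, 9, 36):
    overshoot = square_partial_sum(n).max() - 1.0
    print(n, round(overshoot, 3))
```

As more terms are added, the ripples crowd toward the discontinuity but the peak overshoot settles near 0.18 (about 9% of the jump size), the classic Gibbs figure.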

Figure 3.6. MR image of the eye showing Gibbs artifacts near the left boundary of the eye. This section is enlarged on the right side.

Image courtesy of Susan and Lawrence Strenk of MRI Research, Inc.

URL: https://www.sciencedirect.com/science/article/pii/B9780128093955000035

Modulation

Rodger Ziemer , in Encyclopedia of Physical Science and Technology (Third Edition), 2002

I.A Signals and Spectra

The study of modulation methods is facilitated by using both time and frequency domain representations. A signal's frequency domain description, or spectrum, is given by

(1) $X(f) = \int_{-\infty}^{\infty} x(t) \exp(-j 2\pi f t)\, dt \triangleq \mathcal{F}[x(t)]$

The time domain representation of the signal x(t) is obtained from its spectrum by the integral

(2) $x(t) = \int_{-\infty}^{\infty} X(f) \exp(j 2\pi f t)\, df \triangleq \mathcal{F}^{-1}[X(f)]$

which is called the inverse Fourier transform of X(f). These integrals provide what is referred to as a Fourier transform pair of any signal. Not only can tables of Fourier transform pairs be derived, but theorems relating important properties of Fourier transforms can be obtained and give much insight into how various operations on signals affect their spectra. Important theorems for the consideration of modulation techniques are the frequency translation and modulation theorems, which state the following. Given a signal x(t) with Fourier transform X(f), then

(3) $\mathcal{F}[x(t) \exp(\pm j 2\pi f_0 t)] = X(f \mp f_0)$  (frequency translation theorem)

and

(4) $\mathcal{F}[x(t) \cos(2\pi f_0 t)] = \frac{1}{2}[X(f - f_0) + X(f + f_0)]$  (modulation theorem)

These theorems simply state that multiplication of a signal by a sinusoid of constant frequency f 0 shifts the spectrum of the signal from being centered around zero frequency to being centered around frequency f 0.
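These frequency-shift properties are easy to confirm with a discrete approximation. In the Python/NumPy sketch below (an illustration with arbitrarily chosen frequencies), a 5-Hz cosine multiplied by a 20-Hz carrier produces spectral lines at 15 and 25 Hz, as the modulation theorem predicts:

```python
import numpy as np

fs, N = 100, 100                     # 1 s of data at 100 Hz -> 1 Hz bin spacing
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 5 * t)        # baseband signal at 5 Hz
f0 = 20
xm = x * np.cos(2 * np.pi * f0 * t)  # multiply by the carrier

X = np.abs(np.fft.rfft(xm))
peaks = set(int(k) for k in np.argsort(X)[-2:])  # two largest spectral lines
print(peaks == {15, 25})             # True: f0 - 5 and f0 + 5 Hz
```

The spectrum of the original cosine sat at ±5 Hz; multiplication by the carrier translated those components to be centered around f0 = 20 Hz, splitting the energy equally between the sum and difference frequencies.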

Convenient Fourier transform pairs for the sequel are

(5) $\mathcal{F}[A \cos(2\pi f_0 t)] = \frac{A}{2}[\delta(f - f_0) + \delta(f + f_0)]$

and

(6) $\mathcal{F}[A] = A\, \delta(f)$

where δ(f) is the unit impulse or delta function and A is a constant. These transform pairs, as well as the theorems given previously, will prove useful in the description of modulation systems.

URL: https://www.sciencedirect.com/science/article/pii/B0122274105004567

Data stream processing: models and methods

Patrick Schneider , Fatos Xhafa , in Anomaly Detection and Complex Event Processing over IoT Data Streams, 2022

Fast Fourier Transform (FFT)

The Fast Fourier Transform (FFT) algorithm transforms a time series into a frequency domain representation. The frequency spectrum of a digital signal is represented with a frequency resolution of sampling rate/FFT points, where the number of FFT points is a chosen scalar that must be greater than or equal to the length of the time series. Because of its simplicity and effectiveness, the FFT is often used as a base method against which other spectrum analysis methods are compared.

The FFT takes an N-sample time series and produces N evenly spaced frequency samples, of which the first N/2 (spanning 0 to sample rate/2) are unique, making it a one-to-one transform that does not cause any loss of information.

The Nyquist frequency (folding frequency), equal to sample rate/2, is the maximum frequency that can be reconstructed with the FFT. The bins of the FFT magnitude spectrum track the sinusoidal amplitude of the signal at the corresponding frequency. The FFT produces complex values that can be converted into magnitude and phase. The FFT spectrum of a signal has a symmetry such that only half of the bins are unique, from zero to +sample rate/2. The bins from zero to −sample rate/2 are a mirror image of the positive bins around the origin (i.e., zero frequency). Therefore, a real N-sample signal has N/2 unambiguous frequency bins ranging from zero to the sampling rate/2. By following this principle, the FFT can be applied and interpreted without a precise understanding of the complicated mathematics associated with the concept of "negative frequencies."

Finer frequency sampling can be achieved by appending M zeros to the N-sample signal, thereby generating (M + N)/2 bins from zero to sampling rate/2. This is known as zero padding. Padding with zeros does not increase the spectral resolution, as no additional signal information is included in the calculation, but it does provide an interpolated spectrum with different bin frequencies. The squared magnitude spectrum is usually referred to as the power spectral density (PSD) or power spectrum.

The signal power is proportional to the amplitude (or magnitude) squared. A power spectrum estimate can be obtained simply by squaring the FFT amplitude. Variants of the periodogram can be used to obtain more robust FFT-based estimates of the power spectrum [12]. Fig. 2.4 illustrates the power spectral density obtained by squaring the FFT of a time series consisting of the sum of two sinusoids. Note that the sinusoid frequencies are resolved by the FFT. In the figure, the time-domain signal was sampled at a rate of 50 Hz, resulting in a Nyquist frequency of 25 Hz. In the frequency domain, it can be seen that the time-domain signal was created by a combination of a 5 Hz and a 10 Hz sinusoidal signal.
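The figure's setup can be reproduced in a few lines of Python/NumPy (an illustrative sketch; the exact signal in Fig. 2.4 is not specified, so unit-amplitude sinusoids are assumed):

```python
import numpy as np

fs, N = 50, 100                       # 50 Hz sampling, 2 s of data, 0.5 Hz bins
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 10 * t)

psd = np.abs(np.fft.rfft(x)) ** 2     # power spectrum estimate (squared FFT)
freqs = np.arange(len(psd)) * fs / N  # bin center frequencies, 0 to fs/2

top2 = [float(f) for f in sorted(freqs[np.argsort(psd)[-2:]])]
print(top2)                           # [5.0, 10.0]
```

The two sinusoid frequencies fall exactly on FFT bins here, so they appear as two clean spectral lines; frequencies between bins would instead leak across neighboring bins.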

Figure 2.4. Signal represented in Time-Domain (top) and Frequency-Domain (bottom).

URL: https://www.sciencedirect.com/science/article/pii/B9780128238189000122

Volume 3

Babak Azmoudeh , Dean Cvetkovic , in Encyclopedia of Biomedical Engineering, 2019

Time frequency trade-off

Physiological signals, due to their intrinsic characteristics, have dynamic variation in both the time and frequency domains; this means that their frequency domain representation (spectrum) changes over time. Classical signal processing tools such as the Fourier transform are not well suited to analyzing dynamic and nonstationary signals. Based on Heisenberg's uncertainty principle, there is a built-in trade-off between time and frequency resolution, which is the key to the CWT and makes it possible to analyze signals with rapidly varying high-frequency components superimposed on slowly varying low-frequency components, as described by the following equation:

(13) $\Delta\omega_{\omega_a}\, \Delta t_{\omega_a} = \Delta\omega_{\omega}\, \Delta t_{\omega} = \text{constant} \ge 0.5$

It is possible to divide any time-varying signal into time intervals short enough that the signal is stationary within each section. Time-frequency analysis is most commonly performed by segmenting a signal into such short periods and estimating the spectrum over a sliding window. The time-frequency trade-off is illustrated in Fig. 6, which shows the time-frequency boundaries of the Mexican hat wavelet for various values of a. By adjusting the spectrum function, it is possible to trade off time and frequency resolution to get the best representation of the desired signal. Longer segments provide better frequency resolution, and shorter segments provide better time resolution. When no frequency resolution or time resolution values are specified, the spectrum estimate attempts to find a good balance between time and frequency resolution based on the input signal length.
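The segment-and-estimate procedure described above can be sketched in a few lines of Python/NumPy (illustrative only; the frequencies and window length are arbitrary choices). A signal whose dominant frequency jumps from 5 Hz to 15 Hz is analyzed in two one-second windows:

```python
import numpy as np

fs = 100                              # Hz; two-second nonstationary test signal
t = np.arange(200) / fs
x = np.where(t < 1.0,
             np.sin(2 * np.pi * 5 * t),    # first second: 5 Hz
             np.sin(2 * np.pi * 15 * t))   # second second: 15 Hz

# Segment into 1 s windows, estimate a spectrum per window, take the peak
dominant = []
for start in (0, 100):
    seg = x[start:start + 100]
    mag = np.abs(np.fft.rfft(seg))
    dominant.append(int(np.argmax(mag) * fs / len(seg)))  # peak frequency, Hz
print(dominant)                       # [5, 15]
```

A single FFT over the whole record would show both peaks but lose the information about *when* each frequency occurred; the windowed analysis recovers it at the cost of coarser (1 Hz) frequency resolution per window.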

Fig. 6. Time-frequency tradeoff of wavelets.

URL: https://www.sciencedirect.com/science/article/pii/B9780128012383999720

Bits 'n' Pieces – Digital Audio

Richard Brice , in Music Engineering (Second Edition), 2001

The discrete cosine transform

The encoded data's similarity to a Fourier transform representation has already been noted. Indeed, in a process developed for a very similar application, Sony's compression scheme for MiniDisc actually uses a frequency domain representation utilising a variation of the DFT method known as the Discrete Cosine Transform. The DCT takes advantage of a distinguishing feature of the cosine function, illustrated in Figure 10.23: the cosine curve is symmetrical about the time origin. In fact, it is true to say that any waveform which is symmetrical about an arbitrary 'origin' is made up solely of cosine functions. Difficult to believe, but consider adding other cosine functions to the curve illustrated in Figure 10.23. It doesn't matter what size or what period waves you add, the curve will always be symmetrical about the origin. Now, it would obviously be a great help, when we come to perform a Fourier transform, if we knew the function to be transformed was made up only of cosines, because that would cut the maths by half (see Chapter 2). This is exactly what is done in the DCT. A sequence of samples from the incoming waveform is stored and reflected about an origin. Then one half of the Fourier transform is performed. When the waveform is inverse transformed, the front half of the waveform is simply ignored, revealing the original structure.
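The reflection trick can be demonstrated numerically. In the Python/NumPy sketch below (an illustration, not Sony's implementation), a block of samples is evenly extended about the origin; the DFT of the symmetric sequence comes out purely real, i.e., it contains cosine terms only:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(16)          # arbitrary block of incoming samples

# Reflect about the origin so that s[n] = s[(N - n) mod N] (even symmetry)
s = np.concatenate([x, x[-2:0:-1]])  # length 2*16 - 2 = 30

S = np.fft.fft(s)
print(np.allclose(S.imag, 0, atol=1e-10))  # True: cosine components only
```

Because the sine (imaginary) components vanish identically for the symmetric sequence, only the cosine half of the transform needs to be computed and stored, which is the saving the DCT exploits.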

Figure 10.23. Cosine function

URL: https://www.sciencedirect.com/science/article/pii/B9780750650403500300

Common Analog Modulation and Pulse-Shaping Methods

Tony J. Rouphael , in RF and Digital Signal Processing for Software-Defined Radio, 2009

2.3.1 The Rectangular Pulse

The rectangular pulse is defined as

(2.22) $\Pi_{LT}(t) = \begin{cases} A, & -\frac{LT}{2} \le t \le \frac{LT}{2} \\ 0, & \text{otherwise} \end{cases}$

where A is the amplitude of the pulse and L is an integer. The frequency domain representation of the rectangular pulse is

(2.23) $\mathcal{F}\{\Pi_{LT}(t)\} = A \int_{-LT/2}^{LT/2} e^{-j 2\pi f t}\, dt = ALT\, \frac{\sin(\pi L T f)}{\pi L T f}$

The Fourier transform of the rectangular pulse is real, and its spectrum, a sinc function, is unbounded. This pulse is equivalent to an upsampled pulse train with upsampling factor L. In real systems, rectangular pulses are spectrally bounded via filtering before transmission, which results in pulses with finite rise and decay times.
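Equation (2.23) can be verified numerically. The Python/NumPy sketch below (illustrative; the amplitude, width, and evaluation frequency are arbitrary choices) integrates the pulse's Fourier kernel with the midpoint rule and compares the result to the closed-form sinc expression:

```python
import numpy as np

A, L, T = 2.0, 4, 0.001              # hypothetical amplitude; pulse width LT
f = 123.0                            # evaluate the transform at one frequency

# Midpoint-rule integration of A * exp(-j*2*pi*f*t) over [-LT/2, LT/2]
n = 200000
dt = L * T / n
t = -L * T / 2 + dt * (np.arange(n) + 0.5)
num = np.sum(A * np.exp(-2j * np.pi * f * t)) * dt

# Closed form from Eq. (2.23); note np.sinc(x) = sin(pi*x)/(pi*x)
closed = A * L * T * np.sinc(L * T * f)
print(np.allclose(num, closed))      # True
```

The imaginary part of the numerical integral cancels over the symmetric interval, consistent with the transform being real.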

URL: https://www.sciencedirect.com/science/article/pii/B9780750682107000023