
“Mean Value Theorem of the Integral Calculus in Fast Fourier Transform and Its Applications”


Abstract Category: Other Categories
Course / Degree: M.Phil Mathematics
Institution / University: Sri Venkateshwara University, India
Published in: 2013


Dissertation Abstract / Summary:

Digital Signal Processing (DSP) is one of the most powerful technologies that will shape science and engineering in the twenty-first century. Revolutionary changes have already been made in a broad range of fields: communications, medical imaging, radar and sonar, high-fidelity music reproduction, and oil prospecting, to name just a few. Each of these areas has developed a deep DSP technology, with its own algorithms, mathematics, and specialized techniques. This combination of breadth and depth makes it impossible for any one individual to master all of the DSP technology that has been developed. DSP education involves two tasks: learning general concepts that apply to the field as a whole, and learning specialized techniques for your particular area of interest. This chapter starts our journey into the world of Digital Signal Processing by describing the dramatic effect that DSP has made in several diverse fields. The revolution has begun.

Digital Signal Processing is distinguished from other areas in computer science by the unique type of data it uses: signals. In most cases, these signals originate as sensory data from the real world: seismic vibrations, visual images, sound waves, etc. DSP is the mathematics, the algorithms, and the techniques used to manipulate these signals after they have been converted into a digital form. This includes a wide variety of goals, such as enhancement of visual images, recognition and generation of speech, and compression of data for storage and transmission. Suppose we attach an analog-to-digital converter to a computer and use it to acquire a chunk of real-world data.
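The acquisition step described above can be sketched in a few lines. This is a minimal illustration, not any particular ADC's interface: it samples a continuous sinusoid at discrete instants and quantizes each sample to an 8-bit code, which is what an analog-to-digital converter hands to the computer.

```python
import math

def sample_signal(freq_hz, sample_rate_hz, duration_s):
    """Simulate acquiring a sinusoidal 'real-world' signal through an ADC:
    sample the continuous tone at discrete instants and quantize each
    sample to an 8-bit integer code in the range 0..255."""
    n_samples = int(sample_rate_hz * duration_s)
    codes = []
    for n in range(n_samples):
        t = n / sample_rate_hz                    # time of the n-th sample
        x = math.sin(2 * math.pi * freq_hz * t)   # continuous signal in [-1, 1]
        codes.append(round((x + 1) / 2 * 255))    # map to an 8-bit ADC code
    return codes

# Acquire 1 ms of a 1 kHz tone at 8000 samples/sec -> 8 samples.
adc_codes = sample_signal(1000, 8000, 0.001)
print(len(adc_codes))
```

Everything DSP does downstream (filtering, compression, spectral analysis) operates on a list of numbers like `adc_codes`.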

As an analogy, DSP can be compared to a previous technological revolution: electronics. While still the realm of electrical engineering, nearly every scientist and engineer has some background in basic circuit design. Without it, they would be lost in the technological world. DSP has the same future.

This recent history is more than a curiosity; it has a tremendous impact on your ability to learn and use DSP. Suppose you encounter a DSP problem, and turn to textbooks or other publications to find a solution. What you will typically find is page after page of equations, obscure mathematical symbols, and unfamiliar terminology. It's a nightmare! Much of the DSP literature is baffling even to those experienced in the field. It's not that there is anything wrong with this material, it is just intended for a very specialized audience. State-of-the-art researchers need this kind of detailed mathematics to understand the theoretical implications of the work.

A basic premise of this book is that most practical DSP techniques can be learned and used without the traditional barriers of detailed mathematics and theory. The Scientist and Engineer's Guide to Digital Signal Processing is written for those who want to use DSP as a tool, not a new career.

Telecommunications is about transferring information from one location to another. This includes many forms of information: telephone conversations, television signals, computer files, and other types of data. To transfer the information, you need a channel between the two locations. This may be a wire pair, radio signal, optical fiber, etc. Telecommunications companies receive payment for transferring their customers' information, while they must pay to establish and maintain the channel. The financial bottom line is simple: the more information they can pass through a single channel, the more money they make. DSP has revolutionized the telecommunications industry in many areas: signaling tone generation and detection, frequency band shifting, filtering to remove power line hum, etc. Three specific examples from the telephone network will be discussed here: multiplexing, compression, and echo control.

When a voice signal is digitized at 8000 samples/sec, most of the digital information is redundant. That is, the information carried by any one sample is largely duplicated by the neighboring samples. Dozens of DSP algorithms have been developed to convert digitized voice signals into data streams that require fewer bits/sec. These are called data compression algorithms. Matching decompression algorithms are used to restore the signal to its original form. These algorithms vary in the amount of compression achieved and the resulting sound quality. In general, reducing the data rate from 64 kilobits/sec to 32 kilobits/sec results in no loss of sound quality. When compressed to a data rate of 8 kilobits/sec, the sound is noticeably affected, but still usable for long-distance telephone networks. The highest achievable compression is about 2 kilobits/sec, resulting in sound that is highly distorted, but usable for some applications such as military and undersea communications.
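The data rates quoted above follow from simple arithmetic: 8000 samples/sec at 8 bits per sample gives the 64 kilobits/sec raw rate, and each compressed rate implies a compression ratio.

```python
# A telephone voice channel: 8000 samples/sec, 8 bits per sample.
sample_rate = 8000                             # samples per second
bits_per_sample = 8
raw_rate = sample_rate * bits_per_sample       # 64000 bits/sec = 64 kbits/sec

# Compression ratios implied by the data rates quoted above.
for compressed_rate in (32000, 8000, 2000):
    ratio = raw_rate / compressed_rate
    print(f"{compressed_rate // 1000} kbits/sec -> {ratio:.0f}:1 compression")
```

So the "highest achievable" 2 kilobits/sec figure corresponds to a 32:1 reduction of the raw telephone stream.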

This brief report is included to give some idea of the decisions now being reached about standards for video compression. Starting from an image in which each colour at each small square (pixel) is assigned a numerical shading between 0 and 255, the goal is to compress all that data to reduce the transmission cost. Since 256 = 2⁸, we have 8 bits for each of red-green-blue. The bit-rate of transmission is set by the channel capacity, the compression rule is decided by the filters and quantizers, and the picture quality is subjective. Standard images are so familiar that experts know what to look for, like tasting wine or tea.
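The scale of the compression problem is easy to quantify. A sketch, assuming a standard-definition frame size and frame rate (the specific numbers are illustrative, not from the text): with 8 bits for each of red, green, and blue per pixel, the uncompressed bit-rate is enormous compared with any affordable channel.

```python
# Raw bit-rate of uncompressed video: 8 bits for each of red, green, blue
# per pixel (256 = 2**8 shading levels per colour).
width, height = 720, 480        # an assumed standard-definition frame
frames_per_sec = 30             # an assumed frame rate
bits_per_pixel = 3 * 8          # 8 bits each for R, G, B

raw_bits_per_sec = width * height * bits_per_pixel * frames_per_sec
print(raw_bits_per_sec / 1e6, "Mbits/sec uncompressed")
```

Hundreds of megabits per second of raw data must be squeezed into a channel of a few megabits per second, which is why the filters and quantizers matter so much.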

Think of the problem mathematically. We are given f(x,y,t), with x-y axes on the TV screen and the image f changing with time t. For digital signals all variables are discrete, but it is convenient to think of f as continuous, or piecewise continuous when the image has edges; probably f changes gradually as the camera moves. We could treat f as a sequence of still images to compress independently, but that seems inefficient. On the other hand, the direction of movement is unpredictable, and too much effort spent on extrapolation is also inefficient.

A compromise is to encode every fifth or tenth image fully, and between those to encode the time differences ∆f, which carry less information and can be compressed further. Fourier methods generally use real transforms. The picture is broken into blocks, often 8 by 8. This improvement in the scale length is more important than the control of log n in the FFT cost. After twenty years of refinement, the algorithms are still being fought over and improved. Wavelets are a recent entry, not yet among the heavyweights. The accuracy test A_p is often set aside in the goal of constructing “brick wall filters”, whose symbols P(ω) are close to characteristic functions. An exact zero-one function as in Figure 3 is of course impossible; the designers are frustrated by a small theorem in mathematics. In any case the Fourier transform of a step function has oscillations that can murder a pleasing signal, so a compromise is reached.
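The keyframe-plus-differences compromise can be sketched as a toy codec (an illustration of the idea, not any broadcast standard): transmit every fifth image in full, and between keyframes transmit only the pixel-by-pixel differences ∆f, which are small and highly compressible when the scene changes gradually.

```python
def encode(frames, keyframe_interval=5):
    """Return a stream of ('key', frame) or ('diff', differences) entries."""
    stream, prev = [], None
    for i, frame in enumerate(frames):
        if i % keyframe_interval == 0:
            stream.append(('key', frame))            # full image
        else:
            diff = [p - q for p, q in zip(frame, prev)]
            stream.append(('diff', diff))            # time differences Δf
        prev = frame
    return stream

def decode(stream):
    """Rebuild the frames by adding each Δf back onto the previous frame."""
    frames, prev = [], None
    for kind, data in stream:
        frame = data if kind == 'key' else [q + d for q, d in zip(prev, data)]
        frames.append(frame)
        prev = frame
    return frames

# A slowly brightening 4-pixel "image" sequence round-trips losslessly.
frames = [[10, 20, 30, 40], [11, 21, 31, 41], [12, 22, 32, 42]]
assert decode(encode(frames)) == frames
```

In a real standard the differences would then be transformed (8-by-8 blocks) and quantized; here the point is only that ∆f entries like `[1, 1, 1, 1]` carry far less information than the full frames.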

A wave is usually defined as an oscillating function of time or space, such as a sinusoid, which has proven to be extremely valuable in mathematics, science and engineering, especially for periodic, time-invariant or stationary phenomena. A wavelet is a “small wave”, which has its energy concentrated in time, giving a tool for the analysis of transient, non-stationary or time-varying phenomena. It still has the oscillating wave-like characteristic, but also the ability to allow simultaneous time and frequency analysis with a flexible mathematical foundation. This is illustrated by the wave (sinusoid) oscillating with equal amplitude over −∞ < t < ∞, and therefore having infinite energy, and by the wavelet having its finite energy concentrated around a point.

Virtually all wavelet systems have these very general characteristics. Where the Fourier series maps a one-dimensional function of a continuous variable into a one-dimensional sequence of coefficients, the wavelet expansion maps it into a two-dimensional array of coefficients. It is this two-dimensional representation that allows localizing the signal in both time and frequency.

A Fourier series expansion localizes in frequency: if a Fourier series expansion of a signal has only one large coefficient, then the signal is essentially a single sinusoid at the frequency determined by the index of that coefficient. The simple time-domain representation of the signal itself gives the localization in time: if the signal is a simple pulse, the location of that pulse is the localization in time. A wavelet representation gives the location in both time and frequency simultaneously.

If that one defect is accepted, the construction is simple and the computations are fast. By trying to remove the defect, we are led to dilation equations and recursively defined functions and a small world of fascinating new problems, many still unsolved. A sensible person would stop with the first wavelet, but fortunately mathematics goes on. The basic example is easier to draw than to describe.
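The dilation equations mentioned above can be written down for the basic example, the Haar system (a standard illustration, not necessarily the construction the author has in mind): the scaling function reproduces itself from two half-width copies, and the wavelet is their difference.

```latex
\varphi(t) = \varphi(2t) + \varphi(2t - 1), \qquad
\psi(t)    = \varphi(2t) - \varphi(2t - 1),
```

where φ is the box function equal to 1 on [0, 1) and 0 elsewhere, and ψ is the Haar wavelet. For smoother wavelets the right-hand side has more terms with non-integer coefficients, and φ is defined only recursively; this is the "small world of fascinating new problems."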


Submission Details: Dissertation Abstract submitted by Purushothaman Subramani from India on 01-Nov-2013 05:54.
Abstract has been viewed 2156 times (since 7 Mar 2010).

Purushothaman Subramani Contact Details: Email: purushi.ind@gmail.com Phone: 8685000207



© Copyright 2003 - 2024 of ThesisAbstracts.com and respective owners.

