19 Most Cited Papers in Signal Processing


Signal processing is a sub-field of electrical engineering. It is growing rapidly thanks to advances in the processing power of computers and integrated circuits (ICs), as well as in the mathematical theory behind the field. Signal processing is concerned with manipulating and analyzing signals, which can be either analog or digital (sampled and quantized). It has many applications, for example filtering electrical signals to remove unwanted noise or separating mixed signals from one another; a practical example is the noise-cancelling headphones that many of us know and use. Today signal processing is used in communications, control, biomedical engineering, image and video processing, economic forecasting, radar, sonar, geophysical exploration, and consumer electronics.

To come up with this list of the most cited papers in signal processing, we used Google Scholar. After an extensive search there, we identified the field's most cited papers; generally, the more a paper is cited, the more impact and importance it has. This blog post presents the 19 most cited papers in signal processing. For each paper we list its authors, the number of citations, the publication date, and the journal or conference in which it was published, along with a short summary.

1. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information

Authors: Emmanuel J Candès, Justin Romberg, Terence Tao
Published in: IEEE Transactions on Information Theory, 2006/1/23
Number of citations: 17,941
Summary: This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows: suppose that f is a superposition of |T| spikes obeying |T| ≤ C_M · (log N)^(−1) · |Ω| for some constant C_M > 0; then with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem min_g Σ_t |g(t)| subject to ĝ(ω) = f̂(ω) for all ω ∈ Ω.
In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success; except for the logarithmic factor, the condition on the size of the support is sharp. The methodology extends to a variety of other setups and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples—provided that the number of jumps (discontinuities) obeys the condition above—by minimizing other convex functionals such as the total variation of f.
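
To make the recovery problem concrete, here is a minimal sketch (assuming numpy and cvxpy are available; all sizes are illustrative) that recovers a sparse spike train from a few randomly chosen Fourier coefficients by ℓ1 minimization, the convex program studied in the paper.

```python
# Minimal sketch: recover a sparse spike train from a few random Fourier
# coefficients by l1 minimization (basis pursuit). Assumes numpy and cvxpy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, S, M = 128, 5, 40                          # signal length, spikes, observed frequencies

x_true = np.zeros(N)                          # sparse signal: S random spikes
x_true[rng.choice(N, S, replace=False)] = rng.standard_normal(S)

F = np.fft.fft(np.eye(N)) / np.sqrt(N)        # unitary DFT matrix
omega = rng.choice(N, M, replace=False)       # random frequency set Omega
A = F[omega, :]
y = A @ x_true                                # observed Fourier coefficients

x = cp.Variable(N)                            # min ||x||_1  s.t.  A x = y
constraints = [A.real @ x == y.real, A.imag @ x == y.imag]
cp.Problem(cp.Minimize(cp.norm1(x)), constraints).solve()

print("max reconstruction error:", np.max(np.abs(x.value - x_true)))
```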

[Figure: citation trend]

2. Independent component analysis, a new concept?

Authors: Pierre Comon
Published in: Signal Processing, 1994/4/1
Number of citations: 11,383
Summary: The independent component analysis (ICA) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within a polynomial time. The concept of ICA may actually be seen as an extension of the principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution.
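
As a toy illustration of the concept (not Comon's cumulant-based algorithm), the sketch below uses scikit-learn's FastICA, a later ICA algorithm, to unmix two artificial sources; numpy and scikit-learn are assumed to be installed.

```python
# Toy illustration of source separation by ICA. Note: FastICA is a later
# algorithm than the cumulant-based method of this paper; it is used here
# only to show the concept.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 2000)
S = np.c_[np.sin(2 * np.pi * 5 * t),             # source 1: sine
          np.sign(np.sin(2 * np.pi * 11 * t))]   # source 2: square wave
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                       # unknown mixing matrix
X = S @ A.T                                      # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                     # recovered sources (up to order/scale/sign)
print("estimated mixing matrix:\n", ica.mixing_)
```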

[Figure: citation trend]

3. An introduction to compressive sampling

Authors: Emmanuel J Candès, Michael B Wakin
Published in: IEEE Signal Processing Magazine, 2008/3
Number of citations: 11,103
Summary: Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.

[Figure: citation trend]

4. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation

Authors: Michal Aharon, Michael Elad, Alfred Bruckstein
Published in: IEEE Transactions on Signal Processing, 2006/11
Number of citations: 10,590
Summary: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method—the K-SVD algorithm—generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
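
The following is a compact, unoptimized sketch of the K-SVD alternation described above (OMP for the sparse coding stage, a rank-1 SVD update per atom), assuming numpy and scikit-learn; the toy data and all dimensions are made up for illustration.

```python
# Compact K-SVD sketch: OMP sparse coding, then a rank-1 SVD update per atom.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms, sparsity, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    dim, n_signals = Y.shape
    D = rng.standard_normal((dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)                             # unit-norm atoms
    for _ in range(n_iter):
        Gamma = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)  # sparse coding stage
        for j in range(n_atoms):                               # dictionary update stage
            users = np.nonzero(Gamma[j, :])[0]                 # signals that use atom j
            if users.size == 0:
                continue
            E = Y[:, users] - D @ Gamma[:, users] + np.outer(D[:, j], Gamma[j, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]                                  # new atom
            Gamma[j, users] = s[0] * Vt[0, :]                  # updated coefficients
    return D, Gamma

rng = np.random.default_rng(1)
D_true = rng.standard_normal((20, 30))
D_true /= np.linalg.norm(D_true, axis=0)
codes = np.zeros((30, 500))
for i in range(500):
    codes[rng.choice(30, 3, replace=False), i] = rng.standard_normal(3)
Y = D_true @ codes                                             # training signals

D_learned, Gamma = ksvd(Y, n_atoms=30, sparsity=3)
print("relative residual:", np.linalg.norm(Y - D_learned @ Gamma) / np.linalg.norm(Y))
```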

[Figure: citation trend]

5. Beamforming: A versatile approach to spatial filtering

Authors: Barry D Van Veen, Kevin M Buckley
Published in: IEEE ASSP Magazine, 1988/4
Number of citations: 5,225
Summary: A beamformer is a processor used in conjunction with an array of sensors to provide a versatile form of spatial filtering. The sensor array collects spatial samples of propagating wave fields, which are processed by the beamformer. The objective is to estimate the signal arriving from a desired direction in the presence of noise and interfering signals. A beamformer performs spatial filtering to separate signals that have overlapping frequency content but originate from different spatial locations. In this paper an overview of beamforming from a signal-processing perspective is provided, with an emphasis on recent research. Data-independent, statistically optimum, adaptive, and partially adaptive beamforming are discussed. Basic notation, terminology, and concepts are included. Several beamformer implementations are briefly described.
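
As a small illustration of spatial filtering, the sketch below implements an MVDR (Capon) beamformer, a classic statistically optimum design of the kind the paper surveys, on simulated uniform-linear-array data; numpy is assumed and the scenario parameters are made up.

```python
# Narrowband MVDR beamformer on a uniform linear array (half-wavelength spacing).
import numpy as np

def steering(theta_deg, n_sensors):
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * 0.5 * k * np.sin(np.deg2rad(theta_deg)))

rng = np.random.default_rng(0)
M, T = 8, 2000                                      # sensors, snapshots
a_sig = steering(10.0, M)                           # desired source at +10 degrees
a_int = steering(-40.0, M)                          # interferer at -40 degrees
sig = rng.standard_normal(T) + 1j * rng.standard_normal(T)
intf = 3 * (rng.standard_normal(T) + 1j * rng.standard_normal(T))
noise = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = np.outer(a_sig, sig) + np.outer(a_int, intf) + noise

R = X @ X.conj().T / T                              # sample covariance
w = np.linalg.solve(R, a_sig)
w /= a_sig.conj() @ w                               # MVDR: w = R^-1 a / (a^H R^-1 a)
y = w.conj() @ X                                    # beamformer output

print("response toward interferer (dB, desired direction = 0 dB):",
      20 * np.log10(np.abs(w.conj() @ a_int)))
```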

[Figure: citation trend]

6. Two decades of array signal processing research: the parametric approach

Authors: Hamid Krim, Mats Viberg
Published in: IEEE Signal Processing Magazine, 1996/7
Number of citations: 5,189
Summary: The quintessential goal of sensor array signal processing is the estimation of parameters by fusing temporal and spatial information, captured via sampling a wavefield with a set of judiciously placed antenna sensors. The wavefield is assumed to be generated by a finite number of emitters, and contains information about signal parameters characterizing the emitters. A review of the area of array processing is given. The focus is on parameter estimation methods, and many relevant problems are only briefly mentioned. We emphasize the relatively more recent subspace-based methods in relation to beamforming. The article consists of background material and of the basic problem formulation. Then we introduce spectral-based algorithmic solutions to the signal parameter estimation problem. We contrast these suboptimal solutions to parametric methods. Techniques derived from maximum likelihood principles as …
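
Below is a brief sketch of the MUSIC pseudospectrum, a representative subspace-based method of the kind this survey contrasts with beamforming; numpy and scipy are assumed, and the two-source scenario is made up.

```python
# MUSIC pseudospectrum for direction-of-arrival estimation on a uniform linear array.
import numpy as np
from scipy.signal import find_peaks

M, T, d = 8, 1000, 2                               # sensors, snapshots, sources
rng = np.random.default_rng(0)
true_doas = np.array([-20.0, 25.0])

def steering(theta_deg):
    k = np.arange(M)
    return np.exp(-2j * np.pi * 0.5 * k * np.sin(np.deg2rad(theta_deg)))

A = np.column_stack([steering(a) for a in true_doas])
S = rng.standard_normal((d, T)) + 1j * rng.standard_normal((d, T))
N = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = A @ S + N

R = X @ X.conj().T / T
_, vecs = np.linalg.eigh(R)                        # eigenvalues in ascending order
En = vecs[:, : M - d]                              # noise subspace

grid = np.linspace(-90, 90, 721)
P = np.array([1.0 / np.real(steering(g).conj() @ En @ En.conj().T @ steering(g)) for g in grid])
peaks, _ = find_peaks(P)
est = np.sort(grid[peaks[np.argsort(P[peaks])[-d:]]])
print("true DOAs:", true_doas, " estimated:", est)
```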

[Figure: citation trend]

7. Enhancing sparsity by reweighted ℓ1 minimization

Authors: Emmanuel J Candes, Michael B Wakin, Stephen P Boyd
Published in: Journal of Fourier analysis and applications, 2008/12
Number of citations: 4,938
Summary: It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained ℓ1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted ℓ1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the ℓ1 norm of the coefficient sequence as is common, but by reweighting the ℓ1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
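
A minimal sketch of the reweighted ℓ1 loop (assuming numpy and cvxpy): each pass solves a weighted ℓ1 problem whose weights are built from the previous solution, with a small stabilizing constant; the problem sizes and data are illustrative.

```python
# Iteratively reweighted l1 minimization on a synthetic sparse recovery problem.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 100, 40, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

eps = 0.1                                     # stabilizing constant
w = np.ones(n)                                # first pass is plain l1
for _ in range(4):
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == y]).solve()
    w = 1.0 / (np.abs(x.value) + eps)         # large weights where |x| is small
print("recovery error:", np.linalg.norm(x.value - x_true))
```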

[Figure: citation trend]

8. The unscented Kalman filter for nonlinear estimation

Authors: Eric A Wan, Rudolph Van Der Merwe
Published in: Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No. 00EX373), 2000/10/4
Number of citations: 4,929
Summary: This paper points out the flaws in using the extended Kalman filter (EKF) and introduces an improvement, the unscented Kalman filter (UKF), proposed by Julier and Uhlmann (1997). A central and vital operation performed in the Kalman filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF the state distribution is approximated by a GRV, which is then propagated analytically through the first-order linearization of the nonlinear system. This can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter. The UKF addresses this problem by using a deterministic sampling approach. The state distribution is again approximated by a GRV, but is now represented using a minimal set of carefully chosen sample points. These sample points completely capture the …
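
The sketch below shows the unscented transform at the core of the UKF: a small, deterministically chosen set of sigma points is pushed through a nonlinearity and the mean and covariance are re-estimated from the transformed points. It assumes numpy; the scaling parameters and the polar-to-Cartesian example are illustrative choices, not the paper's.

```python
# Unscented transform: propagate sigma points through a nonlinearity.
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    Y = np.array([f(s) for s in sigma])            # push every sigma point through f
    y_mean = wm @ Y
    diff = Y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])    # polar -> Cartesian
mean = np.array([1.0, np.pi / 4])
cov = np.diag([0.02, 0.05])
m, P = unscented_transform(mean, cov, f)
print("transformed mean:", m)
print("transformed covariance:\n", P)
```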

[Figure: citation trend]

9. Vector quantization

Authors: Robert Gray
Published in: IEEE ASSP Magazine, 1984/4
Number of citations: 4,423
Summary: A vector quantizer is a system for mapping a sequence of continuous or discrete vectors into a digital sequence suitable for communication over or storage in a digital channel. The goal of such a system is data compression: to reduce the bit rate so as to minimize communication channel capacity or digital storage memory requirements while maintaining the necessary fidelity of the data. The mapping for each vector may or may not have memory in the sense of depending on past actions of the coder, just as in well established scalar techniques such as PCM, which has no memory, and predictive quantization, which does. Even though information theory implies that one can always obtain better performance by coding vectors instead of scalars, scalar quantizers have remained by far the most common data compression system because of their simplicity and good performance when the communication rate is …
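
As a minimal illustration, the sketch below trains a small k-means codebook and uses it to encode and decode 2-D vectors, assuming numpy and scipy; the codebook size and data are arbitrary.

```python
# Minimal vector quantizer: train a k-means codebook, then encode/decode new data.
import numpy as np
from scipy.cluster.vq import kmeans, vq

rng = np.random.default_rng(0)
training = rng.standard_normal((5000, 2))            # training vectors
codebook, _ = kmeans(training, 16)                   # 16-entry codebook (4 bits per vector)

test = rng.standard_normal((8, 2))
indices, _ = vq(test, codebook)                      # encode: index of nearest codeword
reconstructed = codebook[indices]                    # decode: codebook lookup
print("codes:", indices)
print("mean squared error:", np.mean((test - reconstructed) ** 2))
```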

[Figure: citation trend]

10. Compressive sampling

Authors: Emmanuel J Candès
Published in: Proceedings of the international congress of mathematicians, 2006/8/22
Number of citations: 4,354
Summary: Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, i.e. the number of pixels in the image. This paper surveys an emerging theory which goes by the name of “compressive sampling” or “compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and sometimes even exactly from a number of samples which is far smaller than the desired resolution of the image/signal, e.g. the number of pixels in the image. It is believed that compressive sampling has far-reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than what was considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously. In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.

[Figure: citation trend]

11. Wavelets and signal processing

Authors: Olivier Rioul, Martin Vetterli
Published in: IEEE Signal Processing Magazine, 1991/10
Number of citations: 3,992
Summary: Wavelet theory provides a unified framework for a number of techniques which had been developed independently for various signal processing applications. For example, multiresolution signal processing, used in computer vision; subband coding, developed for speech and image compression; and wavelet series expansions, developed in applied mathematics, have been recently recognized as different views of a single theory. In this paper a simple, nonrigorous, synthetic view of wavelet theory is presented for both review and tutorial purposes. The discussion includes nonstationary signal analysis, scale versus frequency, wavelet analysis and synthesis, scalograms, wavelet frames and orthonormal bases, the discrete-time case, and applications of wavelets in signal processing. The main definitions and properties of wavelet transforms are covered, and connections among the various fields where results have been developed are shown.
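
A short sketch of a multi-level discrete wavelet decomposition and its perfect reconstruction, assuming numpy and the PyWavelets package (pywt); the test signal and wavelet choice are arbitrary.

```python
# Multi-level discrete wavelet decomposition and reconstruction with PyWavelets.
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t) * (t > 0.5)

coeffs = pywt.wavedec(x, 'db4', level=5)          # [approx_5, detail_5, ..., detail_1]
print("coefficients per level:", [len(c) for c in coeffs])

x_rec = pywt.waverec(coeffs, 'db4')
print("max reconstruction error:", np.max(np.abs(x - x_rec[:len(x)])))
```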

[Figure: citation trend]

12. Adaptive wavelet thresholding for image denoising and compression

Authors: S Grace Chang, Bin Yu, Martin Vetterli
Published in: IEEE Transactions on Image Processing, 2000/9
Number of citations: 3,935
Summary: The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image assumed known. It also outperforms SureShrink (Donoho and Johnstone 1994, 1995; Donoho 1995) most of the time. The second part of the paper attempts to further validate claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of …
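
The sketch below applies BayesShrink-style soft thresholding (threshold σ²/σ_X per detail subband, with the noise σ estimated from the finest level) to a 1-D signal for brevity; the paper works on 2-D image subbands, so treat this as a simplified illustration. numpy and PyWavelets are assumed.

```python
# BayesShrink-style wavelet denoising of a 1-D signal.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 4 * t) + np.sign(np.sin(2 * np.pi * 1.5 * t))
noisy = clean + 0.3 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, 'db8', level=5)
sigma_n = np.median(np.abs(coeffs[-1])) / 0.6745             # robust noise estimate

out = [coeffs[0]]                                            # keep the approximation band
for d in coeffs[1:]:
    sigma_x = np.sqrt(max(np.mean(d ** 2) - sigma_n ** 2, 1e-12))
    T = sigma_n ** 2 / sigma_x                               # BayesShrink threshold
    out.append(pywt.threshold(d, T, mode='soft'))

denoised = pywt.waverec(out, 'db8')[:noisy.size]
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((denoised - clean) ** 2))
```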

[Figure: citation trend]

13. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems

Authors: Mário AT Figueiredo, Robert D Nowak, Stephen J Wright
Published in: IEEE Journal of Selected Topics in Signal Processing, 2007/12
Number of citations: 3,925
Summary: Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a sparseness-inducing (usually ℓ1) regularization term. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for the bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range of applications …
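
As a rough sketch of the idea (not the paper's tuned algorithm, which uses Barzilai-Borwein and backtracking line searches), the code below runs plain gradient projection with a fixed step on the bound-constrained split x = u − v of the ℓ1-regularized least-squares problem; numpy is assumed and the test problem is synthetic.

```python
# Basic gradient projection for l1-regularized least squares via the split x = u - v.
import numpy as np

def gp_l1_ls(A, y, tau, n_iter=500):
    """Projected gradient on the split x = u - v with u, v >= 0 (fixed step size)."""
    n = A.shape[1]
    L = 2 * np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the split quadratic
    step = 1.0 / L
    u = np.zeros(n)
    v = np.zeros(n)
    for _ in range(n_iter):
        grad = -A.T @ (y - A @ (u - v))        # gradient of 0.5*||y - Ax||^2 at x = u - v
        u = np.maximum(u - step * (grad + tau), 0.0)
        v = np.maximum(v - step * (-grad + tau), 0.0)
    return u - v

rng = np.random.default_rng(0)
n, m, k = 200, 80, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = gp_l1_ls(A, y, tau=0.02)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```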

[Figure: citation trend]

14. Blind beamforming for non-Gaussian signals

Authors: Jean-François Cardoso, Antoine Souloumiac
Published in: IEE Proceedings F (Radar and Signal Processing), 1993/12/1
Number of citations: 3,792
Summary: The paper considers an application of blind identification to beamforming. The key point is to use estimates of directional vectors rather than resort to their hypothesised value. By using estimates of the directional vectors obtained via blind identification, i.e. without knowing the array manifold, beamforming is made robust with respect to array deformations, distortion of the wave front, pointing errors etc., so that neither array calibration nor physical modelling is necessary. Rather surprisingly, ‘blind beamformers’ may outperform ‘informed beamformers’ in a plausible range of parameters, even when the array is perfectly known to the informed beamformer. The key assumption on which blind identification relies is the statistical independence of the sources, which is exploited using fourth-order cumulants. A computationally efficient technique is presented for the blind estimation of directional vectors, based on joint …

[Figure: citation trend]

15. Zero-forcing methods for downlink spatial multiplexing in multiuser MIMO channels

Authors: Quentin H Spencer, A Lee Swindlehurst, Martin Haardt
Published in: IEEE Transactions on Signal Processing, 2004/1/21
Number of citations: 3,725
Summary: The use of space-division multiple access (SDMA) in the downlink of a multiuser multiple-input, multiple-output (MIMO) wireless communications network can provide a substantial gain in system throughput. The challenge in such multiuser systems is designing transmit vectors while considering the co-channel interference of other users. Typical optimization problems of interest include the capacity problem - maximizing the sum information rate subject to a power constraint-or the power control problem-minimizing transmitted power such that a certain quality-of-service metric for each user is met. Neither of these problems possess closed-form solutions for the general multiuser MIMO channel, but the imposition of certain constraints can lead to closed-form solutions. This paper presents two such constrained solutions. The first, referred to as "block-diagonalization," is a generalization of channel inversion when there …
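
A toy sketch of block diagonalization: each user's precoder is drawn from the null space of the other users' stacked channels, so (with enough transmit antennas) inter-user interference vanishes. numpy is assumed and the antenna counts are illustrative.

```python
# Block-diagonalization precoding: precoders from the null space of other users' channels.
import numpy as np

rng = np.random.default_rng(0)
Nt, K, Nr = 6, 3, 2                       # transmit antennas, users, receive antennas per user
H = [rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt)) for _ in range(K)]

precoders = []
for k in range(K):
    H_others = np.vstack([H[j] for j in range(K) if j != k])
    _, s, Vh = np.linalg.svd(H_others)
    null_basis = Vh.conj().T[:, np.sum(s > 1e-10):]   # right singular vectors spanning the null space
    precoders.append(null_basis[:, :Nr])              # Nr streams for user k

# Effective channels H_j @ W_k: off-diagonal (j != k) blocks are ~0.
G = np.array([[np.linalg.norm(H[j] @ precoders[k]) for k in range(K)] for j in range(K)])
print("||H_j W_k|| for rows j, columns k:\n", np.round(G, 10))
```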

[Figure: citation trend]

16. Single-pixel imaging via compressive sampling

Authors: Marco F Duarte, Mark A Davenport, Dharmpal Takhar, Jason N Laska, Ting Sun, Kevin F Kelly, Richard G Baraniuk
Published in: IEEE Signal Processing Magazine, 2008/3/21
Number of citations: 3,591
Summary: The authors present a new approach to building simpler, smaller, and cheaper digital cameras that can operate efficiently across a broader spectral range than conventional silicon-based cameras. The approach fuses a new camera architecture based on a digital micromirror device with the new mathematical theory and algorithms of compressive sampling (CS). CS combines sampling and compression into a single nonadaptive linear measurement process. Rather than measuring pixel samples of the scene under view, we measure inner products between the scene and a set of test functions. Interestingly, random test functions play a key role, making each measurement a random sum of pixel values taken across the entire image. When the scene under view is compressible by an algorithm like JPEG or JPEG2000, the CS theory enables us to stably reconstruct an image of the scene from fewer measurements than the number of reconstructed pixels. In this manner we achieve sub-Nyquist image acquisition.
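
The measurement model is easy to sketch: each reading is an inner product of the scene with a random ±1 mirror pattern. Reconstruction would then use a CS solver such as the ℓ1 program sketched under paper 1; numpy is assumed and the tiny synthetic scene below is made up.

```python
# Single-pixel measurement model: inner products with random +/-1 mirror patterns.
import numpy as np

rng = np.random.default_rng(0)
scene = np.zeros((16, 16))
scene[4:8, 6:12] = 1.0                              # simple sparse scene
x = scene.ravel()                                   # N = 256 "pixels"

M = 100                                             # measurements, far fewer than N
Phi = rng.choice([-1.0, 1.0], size=(M, x.size))     # random mirror patterns
y = Phi @ x                                         # simulated photodiode readings
print("pixels:", x.size, " measurements:", y.size)
```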

[Figure: citation trend]

17. A survey of dynamic spectrum access

Authors: Qing Zhao, Brian M Sadler
Published in: IEEE Signal Processing Magazine, 2007/5/21
Number of citations: 3,529
Summary: Compounding the confusion is the use of the broad term cognitive radio as a synonym for dynamic spectrum access. As an initial attempt at unifying the terminology, a taxonomy of dynamic spectrum access is provided. This article gives an overview of challenges and recent developments in both the technological and regulatory aspects of opportunistic spectrum access (OSA). The three basic components of OSA are discussed. Spectrum opportunity identification is crucial to OSA in order to achieve nonintrusive communication, and the basic functions of the opportunity identification module are identified.

[Figure: citation trend]

18. A blind source separation technique using second-order statistics

Authors: Adel Belouchrani, Karim Abed-Meraim, J-F Cardoso, Eric Moulines
Published in: IEEE Transactions on Signal Processing, 1997/2
Number of citations: 3,374
Summary: Separation of sources consists of recovering a set of signals of which only instantaneous linear mixtures are observed. In many situations, no a priori information on the mixing matrix is available: The linear mixture should be "blindly" processed. This typically occurs in narrowband array processing applications when the array manifold is unknown or distorted. This paper introduces a new source separation technique exploiting the time coherence of the source signals. In contrast with other previously reported techniques, the proposed approach relies only on stationary second-order statistics that are based on a joint diagonalization of a set of covariance matrices. Asymptotic performance analysis of this method is carried out; some numerical simulations are provided to illustrate the effectiveness of the proposed method.
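
The sketch below is a simplified, single-lag relative of the method (full SOBI jointly diagonalizes covariance matrices at several lags): whiten the mixtures, then take the eigenvectors of one symmetrized lagged covariance as the remaining rotation. numpy is assumed and the sources, mixing matrix, and lag are illustrative.

```python
# Simplified single-lag relative of SOBI: whitening plus one lagged-covariance eigendecomposition.
import numpy as np

T = 5000
t = np.arange(T)
S = np.vstack([np.sin(2 * np.pi * 0.011 * t),
               np.sign(np.sin(2 * np.pi * 0.003 * t))])     # two temporally coherent sources
A = np.array([[1.0, 0.7],
              [0.5, 1.0]])
X = A @ S                                                   # instantaneous mixtures
X = X - X.mean(axis=1, keepdims=True)

R0 = X @ X.T / T                                            # zero-lag covariance -> whitening
d, E = np.linalg.eigh(R0)
W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
Z = W @ X

tau = 20
R_tau = Z[:, :-tau] @ Z[:, tau:].T / (T - tau)              # lagged covariance of whitened data
R_tau = (R_tau + R_tau.T) / 2
_, U = np.linalg.eigh(R_tau)
S_est = U.T @ Z                                             # sources, up to order/sign/scale

C = np.corrcoef(np.vstack([S_est, S]))[:2, 2:]
print("correlation of recovered vs true sources (one +/-1 entry per row expected):\n",
      np.round(C, 3))
```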

[Figure: citation trend]

19. Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture

Authors: Christian Jutten, Jeanny Herault
Published in: Signal Processing, 1991/7/1
Number of citations: 3,363
Summary: The separation of independent sources from an array of sensors is a classical but difficult problem in signal processing. Based on some biological observations, an adaptive algorithm is proposed to separate simultaneously all the unknown independent sources. The adaptive rule, which constitutes an independence test using non-linear functions, is the main original point of this blind identification procedure. Moreover, a new concept, that of INdependent Components Analysis (INCA), more powerful than the classical Principal Components Analysis (in decision tasks) emerges from this work.
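
A rough sketch of a Hérault-Jutten-style network (assuming numpy): the outputs are fed back through adaptive cross-coupling weights, and the off-diagonal weights are updated with a product of nonlinear functions of the outputs. The cube/arctan pair used here is one commonly cited choice and may differ from the paper's exact rule; the step size and sources are illustrative.

```python
# Herault-Jutten style recurrent network with an adaptive off-diagonal update rule.
import numpy as np

T = 20000
t = np.arange(T)
S = np.vstack([np.sin(2 * np.pi * 0.013 * t),
               np.sign(np.sin(2 * np.pi * 0.0041 * t))])
A = np.array([[1.0, 0.6],
              [0.7, 1.0]])
X = A @ S                                          # observed mixtures

C = np.zeros((2, 2))                               # adaptive cross-coupling weights
mu = 1e-4
for n in range(T):
    y = np.linalg.solve(np.eye(2) + C, X[:, n])    # network output y = (I + C)^-1 x
    dC = mu * np.outer(y ** 3, np.arctan(y))       # independence-seeking update
    np.fill_diagonal(dC, 0.0)                      # only off-diagonal weights adapt
    C += dC

Y = np.linalg.solve(np.eye(2) + C, X)              # apply the learned network to the whole recording
print("output correlation matrix (off-diagonal terms ideally shrink toward 0):\n",
      np.round(np.corrcoef(Y), 3))
```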

[Figure: citation trend]