IOSR Journal of Electronics and Communication Engineering (IOSR-JECE)
e-ISSN: 2278-2834, p-ISSN: 2278-8735. Volume 10, Issue 6, Ver. II (Nov.-Dec. 2015), PP 88-94
www.iosrjournals.org
DOI: 10.9790/2834-10628894 www.iosrjournals.org 88 | Page
Performance of Matching Algorithms for Signal Approximation
U. L. Jyothirmayee
M.Tech Student, RISE Krishna Sai Group of Institutions
Abstract: One of the important classes of sparse signals is the class of non-negative signals. Many algorithms have already been proposed to recover such non-negative representations; greedy and convex-relaxation algorithms are among the most popular methods. The greedy techniques have been modified to incorporate the non-negativity of the representations. One such modification has been proposed for Extraneous Equivalent Detection (EED), which first chooses positive coefficients and uses a non-negative optimization technique as a replacement for the orthogonal projection onto the selected support. The algorithm gradually builds the sparse representation of a signal by iteratively adding the most correlated element of the dictionary, called an atom, to the set of selected elements. A disadvantage of ED is that the representation found by the algorithm is not the best representation using the selected atoms. It may also reselect already selected atoms in later iterations, which slows down the convergence of the algorithm. As a result, we present a novel fast implementation of the non-negative EED, based on the QR decomposition and an iterative coefficient update. We show empirically that this modification can accelerate the implementation by a factor of ten on problems of reasonable size. We explain how the non-negativity constraint on the coefficients prevents us from using the canonical EED, and how the algorithm can be modified to have not only a more intuitive atom selection step but also a lower computational complexity.
Keywords: Detection, decomposition, iteration, non-negative
I. Introduction
The phrase compressed sensing refers to the problem of recovering a sparse input x using few linear measurements that possess some incoherence properties. The field originated recently from dissatisfaction with the prevailing signal compression methodology. The conventional scheme in signal processing, acquiring the entire signal and then compressing it, was questioned by Donoho [2]. Indeed, this technique uses tremendous resources to acquire often very large signals, just to throw away information during compression. The natural question then is whether we can combine these two processes and directly sense the signal or its essential parts using few linear measurements. Recent work in compressed sensing has answered this question in the positive, and the field continues to rapidly produce encouraging results.
The key objective in compressed sensing (also referred to as sparse signal recovery or compressive sampling) is to reconstruct a signal accurately and efficiently from a set of few non-adaptive linear measurements. Signals in this context are vectors, many of which in the applications will represent images. Of course, linear algebra easily shows that in general it is not possible to reconstruct an arbitrary signal from an incomplete set of linear measurements. Thus one must restrict the domain in which the signals belong. To this end, we consider sparse signals, those with few non-zero coordinates. It is now known that many signals such as real-world images or audio signals are sparse either in this sense, or with respect to a different basis. Since sparse signals lie in a lower dimensional space, one would think intuitively that they may be represented by few linear measurements.
This is indeed correct, but the difficulty is determining in which lower dimensional subspace such a signal lies.
Multicarrier modulation has regained interest over the last decade. Several all-digital variants have been proposed: discrete multitone (DMT) is adopted as the transmission format for asymmetric digital subscriber line (ADSL) and presented as a candidate for very high bit rate digital subscriber line (VDSL); orthogonal frequency division multiplexing (OFDM) is proposed for wireless local area applications, e.g. HiperLAN. DMT schemes divide the bandwidth into parallel subbands or tones. The incoming bitstream is split into parallel streams that are used to QAM-modulate the different tones. The modulation is done by means of an inverse fast Fourier transform (IFFT). Before transmission of a DMT symbol, a cyclic prefix of samples is added. If the channel impulse response order is less than or equal to the cyclic prefix length v, demodulation can be implemented by means of an FFT, followed by a (complex) 1-tap frequency domain equalizer (FEQ) per tone to compensate for channel amplitude and phase effects.
Building a classifier that can distinguish between high-dimensional members of various classes based on their shape differences involves devising a reliable dissimilarity measure that can perform shape-based comparisons of very high-dimensional signals.
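The DMT chain described above (IFFT modulation, cyclic prefix, FFT demodulation, one complex FEQ tap per tone) can be sketched in a few lines of NumPy; the tone count, prefix length, and channel taps below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                           # number of tones (FFT size)
v = 8                            # cyclic prefix length
h = np.array([1.0, 0.5, 0.25])   # toy channel, order 2 <= v

# 4-QAM symbols on each tone
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

# Modulate: IFFT, then prepend the cyclic prefix
x = np.fft.ifft(X)
tx = np.concatenate([x[-v:], x])

# Channel: linear convolution, keep one symbol worth of samples
rx = np.convolve(tx, h)[: N + v]

# Demodulate: drop the prefix, FFT, then one complex FEQ tap per tone
Y = np.fft.fft(rx[v:])
H = np.fft.fft(h, N)             # channel frequency response
X_hat = Y / H                    # per-tone frequency-domain equalizer
```

Because the prefix length exceeds the channel order, the linear convolution looks circular over the retained samples, so the single division per tone undoes the channel exactly.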
It also requires automating the feature selection process to minimize human intervention and reliance on domain knowledge, and, finally, a robust prototype-based classifier that can detect outliers in test data. In order to achieve these objectives, a matching pursuits dissimilarity measure is presented. The EDDM extends the well-known signal approximation technique Equivalent Detection (ED) for signal comparison purposes [1]. MP is a greedy algorithm that approximates a signal x as a linear combination of signals from a pre-defined dictionary. MP is commonly used for signal representation and compression, particularly image and video compression [5, 6]. The dictionary and coefficient information produced by the ED algorithm has been previously used in some classification applications. However, most of these applications rely on underlying assumptions about the data and the MP dictionary (section 2.3). The EDDM is the first MP-based comparison measure that does not require any assumptions about the problem domain. It is versatile enough to perform shape-based comparisons of very high-dimensional signals, and it can also be adapted to perform magnitude-based comparisons, similar to the Euclidean distance. Since the EDDM is a differentiable measure, it can be seamlessly used with existing clustering or discrimination algorithms.
Therefore, the EDDM may find application in a variety of classification and approximation problems involving very high-dimensional signals, including image and video signals. The experimental results show that the EDDM is more useful than the Euclidean distance for shape-based comparison between signals in high dimensions.
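The greedy MP loop that the EDDM builds on can be sketched as follows; the dictionary D is assumed to have unit-norm columns (atoms), and this is plain MP rather than its orthogonal variant:

```python
import numpy as np

def matching_pursuit(x, D, n_iter=10, tol=1e-6):
    """Greedy MP: approximate x as a linear combination of columns of D.

    D is assumed to have unit-norm columns (atoms).
    Returns a coefficient vector c with x ≈ D @ c.
    """
    r = x.astype(float).copy()         # residual starts as the signal
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r                 # correlation of residual with atoms
        k = np.argmax(np.abs(corr))    # most correlated atom
        c[k] += corr[k]                # plain MP coefficient update
        r -= corr[k] * D[:, k]         # deflate the residual
        if np.linalg.norm(r) < tol:
            break
    return c
```

With an orthonormal dictionary the loop recovers the exact expansion; with a redundant dictionary it converges more slowly, which is what motivates the orthogonal projection step of OMP discussed later.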
The potential usefulness of the EDDM for a variety of problems is demonstrated by devising two important EDDM-based algorithms. The first algorithm, called CAMP, deals with the prototype-based classification of high-dimensional signals. The second algorithm, called the EK-SVD algorithm, automates the dictionary learning process for the MP approximation of signals. In the CAMP algorithm, the EDDM is used with the Competitive Agglomeration (CA) clustering algorithm by Frigui and Krishnapuram to propose a probabilistic classification model [2]. The CA algorithm is a fuzzy clustering algorithm that learns the optimal number of clusters during training; it therefore eliminates the need to manually specify the number of clusters beforehand. The algorithm is named CAMP as an abbreviation of the CA and ED algorithms.
II. Non-Negative Least Squares Algorithm
Let A be an m × n matrix and b be a vector of dimension m. Consider the following feasibility problem:
Ax = b (1)
x ≥ 0 (2)
A straightforward way of solving the above problem through linear programming is by solving the following LP problem:
(LP): min Σⱼ₌₁ⁿ |sⱼ|
s.t. Ax + s = b, x ≥ 0
Observe that this norm-1 minimization can be carried out very efficiently by the simplex method in most cases. However, there are some constraint matrices for which the simplex method performs a large number of degenerate pivots, not improving the solution for many iterations, leading to poor performance.
Our approach to solving the feasibility problem posed by relations (1) and (2) will also be the minimization of a p-norm, but instead of the norm 1 considered in (LP), we will consider the norm 2, i.e., we will solve the following problem:
(PLS): min ‖s‖₂²
s.t. Ax + s = b, x ≥ 0
At first glance, problem (PLS) seems much harder than problem (LP). However, in cases where (LP) is highly degenerate (as pointed out before), it is usually simpler to solve (PLS). E. Barnes et al. [6] showed that the normalized direction obtained by (PLS) is the direction of steepest ascent at π0 on the dual polyhedron (D). This suggests that the dual direction obtained by (PLS) may be much better in practice than the one obtained by the linear update in (LP); E. Barnes et al. [6] showed empirically that this is indeed true for some classes of problems. Since (PLS) is a convex program, the KKT conditions are necessary and sufficient for optimality. Thus, the vector (x, s) is a solution if and only if there exists π such that Aᵗπ ≤ 0, x ≥ 0, and xᵗAᵗπ = 0, with π = s = b − Ax.
The NNLS algorithm starts with a primal feasible solution, i.e., one that is feasible for (PLS), and tries to find a solution for the problem (DLS). The NNLS algorithm is similar to the simplex method in the sense that we have a subset of the columns of A that is a primal feasible basis, and we then move from one primal feasible basis to another. Unlike the simplex method, our 'basis' is not required to be square. The only requirement is that it is composed of linearly independent columns. Let B be a basis, i.e., a linearly independent subset of the columns of A. Then one crucial step of the nonnegative least squares algorithm is to solve the following problem:
min ‖Bx − b‖₂
Since the columns of B are linearly independent, the solution will be:
x = B⁺b, where B⁺ = (BᵗB)⁻¹Bᵗ
The matrix B⁺ is called the generalized inverse or pseudoinverse. If B is a basis, we say that it is feasible for (PLS) if we have:
x = B⁺b > 0
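The pseudoinverse formula can be checked directly against NumPy's built-in routines; a quick sketch with an illustrative random tall matrix of full column rank:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 3))        # tall "basis": independent columns
b = rng.standard_normal(6)

# B⁺ = (BᵗB)⁻¹Bᵗ, valid when BᵗB is invertible (full column rank)
B_plus = np.linalg.inv(B.T @ B) @ B.T
x = B_plus @ b                         # least-squares solution of min ‖Bx − b‖₂
```

For a full-column-rank B this agrees with np.linalg.pinv(B) and with the least-squares solution returned by np.linalg.lstsq.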
Theorem 1. Algorithm 1 terminates with a solution of problem (PLS).
Algorithm 1 (NNLS):
1. Let B be a feasible basis for problem (PLS)
2. Let I_B be the index set of the columns in B
3. x̄ = B⁺b
4. s̄ = b − Bx̄
5. S = {j ∉ I_B : Aⱼᵗs̄ > 0}
6. if S = ∅ then
7.   stop: optimal solution found
8. end if
9. Let k ∈ S
10. d̄ = B⁺A_k
11. θ₁ = min {x̄ⱼ/d̄ⱼ : d̄ⱼ > 0}
12. P = I − BB⁺
13. θ₂ = A_kᵗs̄ / ‖PA_k‖²
14. θ = min(θ₁, θ₂)
15. if θ = θ₂ then
16.   I_B ← I_B ∪ {k}
17.   if θ = θ₁ then
18.     x̄ ← x̄ − θd̄
19.     I_B ← {j ∈ I_B : x̄ⱼ > 0}
20.   end if
21.   B ← [Aⱼ], j ∈ I_B
22.   return to step 3
23. else
24.   I_B ← I_B \ {j : x̄ⱼ/d̄ⱼ = θ}
25.   B ← [Aⱼ], j ∈ I_B
26.   x̄ = B⁺b
27.   return to step 10
28. end if
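Algorithm 1 is a close relative of the classical Lawson–Hanson active-set method for NNLS; a compact NumPy sketch of that method (the variable names and tolerances below are ours, not the paper's):

```python
import numpy as np

def nnls(A, b, max_iter=100):
    """Lawson–Hanson active-set NNLS: min ‖Ax − b‖₂ subject to x ≥ 0.

    Grow a passive set P of columns whose gradient is positive, solve an
    unconstrained least squares on P, and step back whenever a
    coefficient would go negative.
    """
    m, n = A.shape
    x = np.zeros(n)
    P = np.zeros(n, dtype=bool)            # passive set (the "basis")
    for _ in range(max_iter):
        w = A.T @ (b - A @ x)              # negative gradient
        if P.all() or w[~P].max(initial=-np.inf) <= 1e-12:
            break                          # KKT conditions hold: optimal
        P[np.argmax(np.where(P, -np.inf, w))] = True   # entering column
        while True:
            z = np.zeros(n)
            z[P] = np.linalg.lstsq(A[:, P], b, rcond=None)[0]
            if z[P].min() > 0:
                break                      # feasible: accept the step
            # step only as far as feasibility allows, then prune P
            mask = P & (z <= 0)
            alpha = np.min(x[mask] / (x[mask] - z[mask]))
            x = x + alpha * (z - x)
            P = P & (x > 1e-12)
        x = z
    return x
```

The outer loop mirrors steps 3–9 of Algorithm 1 (optimality test and column entry), and the inner loop mirrors the ratio-test steps that keep the iterate non-negative.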
Proof. We will show that the algorithm terminates by showing that no basis can be repeated. Since the number of bases is finite, the result follows. In order to prove that no basis can be repeated, we will show that, if a basis B is updated to B̂, then we must have:
min_{x ≥ 0} ‖B̂x − b‖² < min_{x ≥ 0} ‖Bx − b‖²
Let us suppose first that θ = θ₂ ≤ θ₁. Let B be the current basis and Aⱼ be the entering column. If x is the current primal solution, then
x = (BᵗB)⁻¹Bᵗb
min_{x ≥ 0} ‖B̂x − b‖² = min_{x ≥ 0, t} ‖Bx + tAⱼ − b‖² ≤ min_{x ≥ 0} ‖Bx + θAⱼ − b‖²
Let B̂ be the new basis. Then the solution of the last minimization problem is:
x̂ = (BᵗB)⁻¹Bᵗ(b − θAⱼ) = x − θd̄
III. Extraneous Equivalent Detection
One such greedy algorithm is Orthogonal Matching Pursuit (OMP), put forth by Mallat and his collaborators (see e.g. [47]) and analyzed by Gilbert and Tropp [62]. OMP uses subgaussian measurement matrices to reconstruct sparse signals. If Φ is such a measurement matrix, then Φ*Φ is in a loose sense close to the identity. Therefore one would expect the largest coordinate of the observation vector y = Φ*Φx to correspond to a non-zero entry of x. Thus one coordinate of the support of the signal x is estimated. Subtracting off that contribution from the observation vector y and repeating eventually yields the entire support of the signal x. OMP is quite fast, both in theory and in practice, but its guarantees are not as strong as those of Basis Pursuit. The OMP algorithm can thus be described as follows:
Orthogonal Matching Pursuit (OMP)
Input: Measurement matrix Φ, measurement vector u = Φx, sparsity level s
Output: Index set I ⊂ {1, …, d}
Procedure:
Initialize: Let the index set I = ∅ and the residual r = u.
Repeat the following s times:
Identify: Select the largest coordinate λ of y = Φ*r in absolute value. Break ties lexicographically.
Update: Add the coordinate λ to the index set, I ← I ∪ {λ}, and update the residual:
x̂ = argmin_z ‖u − Φ|_I z‖₂;  r = u − Φx̂.
Once the support I of the signal x is found, the estimate can be reconstructed as x̂ = Φ†_I u, where we recall the pseudoinverse is defined by Φ†_I := (Φ*_I Φ_I)⁻¹Φ*_I. The algorithm's simplicity enables a fast runtime. The algorithm iterates s times, and each iteration does a selection through d elements, multiplies by Φ*, and solves a least squares problem. The selection can easily be done in O(d) time, and the multiplication by Φ* in the general case takes O(md). When Φ is an unstructured matrix, the cost of solving the least squares problem is O(s²d). However, maintaining a QR factorization of Φ|_I and using the modified Gram-Schmidt algorithm reduces this time to O(|I|d) at each iteration. Using this method, the overall cost of OMP becomes O(smd). In the case where the measurement matrix Φ is structured with a fast multiply, this can clearly be improved.
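The OMP loop above can be sketched directly in NumPy; for clarity this version re-solves the least squares problem from scratch each iteration rather than maintaining the QR factorization discussed in the complexity analysis:

```python
import numpy as np

def omp(Phi, u, s):
    """Orthogonal Matching Pursuit: recover an s-sparse x from u = Phi @ x."""
    m, d = Phi.shape
    I = []                                        # support estimate
    r = u.copy()
    for _ in range(s):
        lam = int(np.argmax(np.abs(Phi.T @ r)))   # identify: most correlated atom
        if lam not in I:
            I.append(lam)
        # update: orthogonal projection of u onto the selected columns
        z, *_ = np.linalg.lstsq(Phi[:, I], u, rcond=None)
        r = u - Phi[:, I] @ z                     # residual is orthogonal to span
    x_hat = np.zeros(d)
    x_hat[I] = z
    return x_hat
```

Because the residual is orthogonal to every selected column, OMP never reselects an atom in practice, in contrast to plain MP.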
IV. Stagewise Extraneous Equivalent Detection
An alternative greedy approach, Stagewise Orthogonal Matching Pursuit (StOMP), developed and analyzed by Donoho and his collaborators [23], uses ideas inspired by wireless communications. As in OMP, StOMP utilizes the observation vector y = Φ*u, where u = Φx is the measurement vector. However, instead of simply selecting the largest component of the vector y, it selects all of the coordinates whose values are above a specified threshold. It then solves a least squares problem to update the residual. The algorithm iterates through only a fixed number of stages and then terminates, whereas OMP requires s iterations, where s is the sparsity level.
The pseudo-code for StOMP can thus be described by the following.
Input: Measurement matrix Φ, measurement vector u = Φx
Output: Estimate x̂ of the signal x
Procedure:
Initialize: Let the index set I = ∅, the estimate x̂ = 0, and the residual r = u.
Repeat the following until the stopping condition holds:
Identify: Using the observation vector y = Φ*r, set J = {j : |yⱼ| > tₖσₖ}, where σₖ is a formal noise level and tₖ is a threshold parameter for iteration k.
Update: Add the set J to the index set, I ← I ∪ J, and update the residual and estimate:
x̂|_I = (Φ*_I Φ_I)⁻¹Φ*_I u,  r = u − Φx̂.
The thresholding strategy is designed so that many terms enter at each stage, and so that the algorithm halts after a fixed number of iterations. The formal noise level σₖ is proportional to the Euclidean norm of the residual at that iteration.
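A minimal sketch of the StOMP stages follows, with an assumed noise-level convention σₖ = ‖r‖₂/√m and a fixed threshold parameter t (the paper's tₖ may vary per iteration):

```python
import numpy as np

def stomp(Phi, u, t=2.0, n_stages=10):
    """Stagewise OMP sketch: threshold-based selection, then least squares.

    Assumes sigma_k = ||r||_2 / sqrt(m) and a fixed threshold parameter t.
    """
    m, d = Phi.shape
    I = np.zeros(d, dtype=bool)                # index set as a mask
    x_hat = np.zeros(d)
    r = u.copy()
    for _ in range(n_stages):
        sigma = np.linalg.norm(r) / np.sqrt(m)
        J = np.abs(Phi.T @ r) > t * sigma      # identify all above threshold
        if not J.any():
            break
        I |= J                                 # merge into the index set
        z, *_ = np.linalg.lstsq(Phi[:, I], u, rcond=None)
        x_hat[:] = 0.0
        x_hat[I] = z                           # update estimate on I
        r = u - Phi @ x_hat                    # update residual
        if np.linalg.norm(r) < 1e-10:
            break                              # stopping condition
    return x_hat
```

Since many coordinates can enter at once, a handful of stages typically suffices, which is the point of the stagewise design.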
Performance of Matching Algorithmsfor Signal Approximation
DOI: 10.9790/2834-10628894 www.iosrjournals.org 92 | Page
These sublinear algorithms have roots in group testing, originally devised to detect the syphilis antigen in a blood sample. Since this test was expensive, the method was to sample a group of men together and test the entire pool of blood samples. If the pool did not contain the antigen, then one test replaced many. If it was found, then the process could either be repeated with that group, or each individual in the group could then be tested. The sublinear algorithms in compressed sensing use this same idea to test for elements of the support of the signal x. Chaining pursuit, for example, uses a measurement matrix consisting of a row tensor product of a bit test matrix and an isolation matrix, both of which are 0-1 matrices. Chaining pursuit first uses bit tests to locate the positions of the large components of the signal x and estimate those values. Then the algorithm retains a portion of the coordinates that are largest in magnitude and repeats. In the end, those coordinates which appeared throughout a large portion of the iterations are kept, and the signal is estimated using these. Pseudo-code is available in [3], where the following result is proved.
Theorem (Chaining Pursuit [31]). With probability at least 1 − O(d⁻³), the O(s log² d) × d random measurement operator Φ has the following property. For x ∈ ℝᵈ and its measurements u = Φx, the Chaining Pursuit algorithm produces a signal x̂ with at most s nonzero entries. The output x̂ satisfies
‖x − x̂‖₁ ≤ C(1 + log s)‖x − xₛ‖₁.
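The bit-test idea behind these combinatorial schemes can be seen on a toy 1-sparse, non-negative signal; the mask construction below is a simplified illustration, not the actual Chaining Pursuit measurement ensemble:

```python
import numpy as np

d = 16                                   # signal length (a power of two)
x = np.zeros(d)
x[11] = 5.0                              # 1-sparse non-negative signal

# log2(d) bit-test masks: row i selects the indices whose i-th bit is 1
bits = int(np.log2(d))
masks = np.array([[(j >> i) & 1 for j in range(d)] for i in range(bits)], float)
u = masks @ x                            # bit-test measurements
total = np.ones(d) @ x                   # one extra measurement for the value

# each positive bit test reveals one bit of the support index
k_hat = sum(1 << i for i in range(bits) if u[i] > 0)
x_hat = np.zeros(d)
x_hat[k_hat] = total
```

Only log₂(d) + 1 measurements locate and estimate the single spike, which is the sublinear flavor these algorithms scale up to s-sparse signals.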
The time cost of the algorithm is O(s log² s log² d). HHS Pursuit, a similar algorithm but with improved guarantees, uses a measurement matrix that again consists of two parts. The first part is an identification matrix, and the second is an estimation matrix. As the names suggest, the identification matrix is used to identify the location of the large components of the signal, whereas the estimation matrix is used to estimate the values at those locations. Each of these matrices consists of smaller parts, some deterministic and some random. Using this measurement matrix to locate large components and estimate their values, HHS Pursuit then adds the new estimate to the previous one, and prunes it relative to the sparsity level.
This estimate is itself then sampled, and the residual of the signal is updated. Let x ∈ ℝᵈ and let u = Φx be the measurement vector. The HHS Pursuit algorithm produces a signal approximation x̂ with O(s/ε²) nonzero entries satisfying
‖x − x̂‖₂ ≤ (ε/√s)‖x − xₛ‖₁,
where again xₛ denotes the vector consisting of the s largest entries in magnitude of x. The number of measurements m is proportional to (s/ε²) polylog(d/ε), and HHS Pursuit runs in time (s²/ε⁴) polylog(d/ε). The algorithm uses working space (s/ε²) polylog(d/ε), including storage of the matrix Φ.
There are other algorithms, such as the Sudocodes algorithm, that as of now only work in the noiseless, strictly sparse case. However, these are still interesting because of their simplicity. The Sudocodes algorithm is a simple two-phase algorithm. In the first phase, an easily implemented avalanche bit testing scheme is applied iteratively to recover most of the coordinates of the signal x. At this point, it remains to reconstruct an extremely low dimensional signal (one whose coordinates are only those that remain). In the second phase, this part of the signal is reconstructed, which completes the reconstruction. Since the recovery is two-phase, the measurement matrix is as well. For the first phase, it must contain a sparse submatrix, one consisting of many zeros and few ones in each row. For the second phase, it also contains a matrix whose small submatrices are invertible. Combinatorial algorithms such as HHS Pursuit provide sublinear-time recovery with optimal error bounds and an optimal number of measurements. Some of these are straightforward and easy to implement, and others require complicated structures. The major disadvantage, however, is the structural requirement on the measurement matrices. Not only do these methods work with only one particular kind of measurement matrix, but that matrix is highly structured, which limits its use in practice. There are no known sublinear methods in compressed sensing that allow for unstructured or generic measurement matrices.
V. Outputs
Fig. 1: Computation time for fixed N = 256 and K = 24 & 32
Fig. 2: Average exact recovery and computation time (sec)
Fig. 3: Computation time for fixed M = 128 and K = 64 & 96
Fig. 4: Percentage of recovered signals
Fig. 5: Sparse AOMP data
VI. Conclusion
A matching pursuits dissimilarity measure has been presented, which is capable of performing accurate shape-based comparisons between high-dimensional data. It extends the matching pursuits signal approximation technique and uses its dictionary and coefficient information to compare two signals. AOMP is capable of performing shape-based comparisons of very high dimensional data, and it can also be adapted to perform magnitude-based comparisons, similar to the Euclidean distance. Since AOMP is a differentiable measure, it can be seamlessly integrated with existing clustering or discrimination algorithms. Therefore, AOMP may find application in a variety of classification and approximation problems involving very high dimensional data. The AOMP is used to develop an automated dictionary learning algorithm for MP approximation of signals, called Enhanced K-SVD. The EK-SVD algorithm uses the AOMP and the CA clustering algorithm to learn the required number of dictionary elements during training. Under-utilized and replicated dictionary elements are gradually pruned to produce a compact dictionary without compromising its approximation capabilities. The experimental results show that the size of the dictionary learned by our method is 60% smaller, but with the same approximation capabilities as the existing dictionary learning algorithms. The AOMP is also used with the competitive agglomeration fuzzy clustering algorithm to build a prototype-based classifier called CAMP. The CAMP algorithm builds robust shape-based prototypes for each class and assigns a confidence to a test pattern based on its dissimilarity to the prototypes of all classes. If a test pattern is different from all the prototypes, it will be assigned a low confidence value. Our experimental results show that the CAMP algorithm is able to identify outliers in the given test data better than discrimination-based classifiers, such as multilayer perceptrons and support vector machines. We also presented a new greedy technique based on OMP, suitable for non-negative sparse representation, which is much faster than the state-of-the-art algorithm. The new algorithm has a slightly different atom selection procedure, which guarantees the non-negativity of the signal approximations. Although the selection step is more involved, the overall algorithm has a much faster implementation. The reason is that with the new selection procedure, we can use a fast QR implementation of the OMP. The computational complexities of the two NNOMP variants were derived and the differences demonstrated.
References
[1] S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397–3415, 1993.
[2] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[3] H. Abut, R. Gray, and G. Rebolledo, "Vector quantization of speech and speech-like waveforms," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 30, no. 3, pp. 423–435, Jun. 1982.
[4] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, March 2004.
[5] K. Wang, C.-H. Lee, and B.-H. Juang, "Maximum likelihood learning of auditory feature maps for stationary vowels," in Proceedings of the Fourth International Conference on Spoken Language Processing (ICSLP 96), vol. 2, pp. 1265–1268, Oct. 1996.
[6] G. Z. Karabulut, L. Moura, D. Panario, and A. Yongacoglu, "Integrating flexible tree searches to orthogonal matching pursuit algorithm," IEE Proceedings - Vision, Image and Signal Processing, vol. 153, no. 5, pp. 538–548, Oct. 2006.
[7] F. Bergeaud and S. Mallat, "Matching pursuit of images," in ICIP, 1995, pp. 53–56.
[8] P. K. Bharadwaj, P. R. Runkle, and L. Carin, "Target identification with wave-based matched pursuits and hidden Markov models," IEEE Transactions on Antennas and Propagation, vol. 47, no. 10, pp. 1543–1554, Oct. 1999.
iosrjce
 
Childhood Factors that influence success in later life
iosrjce
 
Emotional Intelligence and Work Performance Relationship: A Study on Sales Pe...
iosrjce
 
Customer’s Acceptance of Internet Banking in Dubai
iosrjce
 
A Study of Employee Satisfaction relating to Job Security & Working Hours amo...
iosrjce
 
Consumer Perspectives on Brand Preference: A Choice Based Model Approach
iosrjce
 
Student`S Approach towards Social Network Sites
iosrjce
 
Broadcast Management in Nigeria: The systems approach as an imperative
iosrjce
 
A Study on Retailer’s Perception on Soya Products with Special Reference to T...
iosrjce
 
A Study Factors Influence on Organisation Citizenship Behaviour in Corporate ...
iosrjce
 
Consumers’ Behaviour on Sony Xperia: A Case Study on Bangladesh
iosrjce
 
Design of a Balanced Scorecard on Nonprofit Organizations (Study on Yayasan P...
iosrjce
 
Public Sector Reforms and Outsourcing Services in Nigeria: An Empirical Evalu...
iosrjce
 
Media Innovations and its Impact on Brand awareness & Consideration
iosrjce
 
Customer experience in supermarkets and hypermarkets – A comparative study
iosrjce
 
Social Media and Small Businesses: A Combinational Strategic Approach under t...
iosrjce
 
Secretarial Performance and the Gender Question (A Study of Selected Tertiary...
iosrjce
 
Implementation of Quality Management principles at Zimbabwe Open University (...
iosrjce
 
Organizational Conflicts Management In Selected Organizaions In Lagos State, ...
iosrjce
 

Recently uploaded (20)

PDF
Plant Control_EST_85520-01_en_AllChanges_20220127.pdf
DarshanaChathuranga4
 
PDF
Module - 5 Machine Learning-22ISE62.pdf
Dr. Shivashankar
 
PDF
CLIP_Internals_and_Architecture.pdf sdvsdv sdv
JoseLuisCahuanaRamos3
 
PPTX
Electrical_Safety_EMI_EMC_Presentation.pptx
drmaneharshalid
 
PPTX
Alan Turing - life and importance for all of us now
Pedro Concejero
 
PDF
PROGRAMMING REQUESTS/RESPONSES WITH GREATFREE IN THE CLOUD ENVIRONMENT
samueljackson3773
 
PDF
Decision support system in machine learning models for a face recognition-bas...
TELKOMNIKA JOURNAL
 
PPTX
FSE_LLM4SE1_A Tool for In-depth Analysis of Code Execution Reasoning of Large...
cl144
 
PPTX
Computer network Computer network Computer network Computer network
Shrikant317689
 
PDF
Authentication Devices in Fog-mobile Edge Computing Environments through a Wi...
ijujournal
 
PPTX
Explore USA’s Best Structural And Non Structural Steel Detailing
Silicon Engineering Consultants LLC
 
PDF
LLC CM NCP1399 SIMPLIS MODEL MANUAL.PDF
ssuser1be9ce
 
PDF
A Brief Introduction About Robert Paul Hardee
Robert Paul Hardee
 
PDF
June 2025 - Top 10 Read Articles in Network Security and Its Applications
IJNSA Journal
 
PPTX
Comparison of Flexible and Rigid Pavements in Bangladesh
Arifur Rahman
 
PDF
Bayesian Learning - Naive Bayes Algorithm
Sharmila Chidaravalli
 
PPTX
CM Function of the heart pp.pptxafsasdfddsf
drmaneharshalid
 
PPT
FINAL plumbing code for board exam passer
MattKristopherDiaz
 
PPTX
darshai cross section and river section analysis
muk7971
 
PDF
lesson4-occupationalsafetyandhealthohsstandards-240812020130-1a7246d0.pdf
arvingallosa3
 
Plant Control_EST_85520-01_en_AllChanges_20220127.pdf
DarshanaChathuranga4
 
Module - 5 Machine Learning-22ISE62.pdf
Dr. Shivashankar
 
CLIP_Internals_and_Architecture.pdf sdvsdv sdv
JoseLuisCahuanaRamos3
 
Electrical_Safety_EMI_EMC_Presentation.pptx
drmaneharshalid
 
Alan Turing - life and importance for all of us now
Pedro Concejero
 
PROGRAMMING REQUESTS/RESPONSES WITH GREATFREE IN THE CLOUD ENVIRONMENT
samueljackson3773
 
Decision support system in machine learning models for a face recognition-bas...
TELKOMNIKA JOURNAL
 
FSE_LLM4SE1_A Tool for In-depth Analysis of Code Execution Reasoning of Large...
cl144
 
Computer network Computer network Computer network Computer network
Shrikant317689
 
Authentication Devices in Fog-mobile Edge Computing Environments through a Wi...
ijujournal
 
Explore USA’s Best Structural And Non Structural Steel Detailing
Silicon Engineering Consultants LLC
 
LLC CM NCP1399 SIMPLIS MODEL MANUAL.PDF
ssuser1be9ce
 
A Brief Introduction About Robert Paul Hardee
Robert Paul Hardee
 
June 2025 - Top 10 Read Articles in Network Security and Its Applications
IJNSA Journal
 
Comparison of Flexible and Rigid Pavements in Bangladesh
Arifur Rahman
 
Bayesian Learning - Naive Bayes Algorithm
Sharmila Chidaravalli
 
CM Function of the heart pp.pptxafsasdfddsf
drmaneharshalid
 
FINAL plumbing code for board exam passer
MattKristopherDiaz
 
darshai cross section and river section analysis
muk7971
 
lesson4-occupationalsafetyandhealthohsstandards-240812020130-1a7246d0.pdf
arvingallosa3
 

Performance of Matching Algorithms for Signal Approximation

I. Introduction
The phrase compressed sensing refers to the problem of recovering a sparse input x from a few linear measurements that possess certain incoherence properties. The field originated recently from an unfavorable opinion about the prevailing signal compression methodology. The conventional scheme in signal processing, acquiring the entire signal and then compressing it, was questioned by Donoho [2]. Indeed, this technique uses tremendous resources to acquire often very large signals, just to throw away information during compression. The natural question is whether these two processes can be combined: can we directly sense the signal, or its essential parts, using few linear measurements? Recent work in compressed sensing has answered this question in the positive, and the field continues to rapidly produce encouraging results.

The key objective in compressed sensing (also referred to as sparse signal recovery or compressive sampling) is to reconstruct a signal accurately and efficiently from a set of few non-adaptive linear measurements. Signals in this context are vectors, many of which in applications represent images. Of course, linear algebra easily shows that in general it is not possible to reconstruct an arbitrary signal from an incomplete set of linear measurements. Thus one must restrict the domain to which the signals belong. To this end, we consider sparse signals: those with few non-zero coordinates. It is now known that many signals, such as real-world images or audio signals, are sparse either in this sense or with respect to a different basis. Since sparse signals lie in a lower-dimensional space, one would intuitively expect that they can be represented by few linear measurements. This is indeed correct, but the difficulty is determining in which lower-dimensional subspace such a signal lies.

Multicarrier modulation has regained interest over the last decade.
Several all-digital variants have been proposed: discrete multitone (DMT) is adopted as the transmission format for asymmetric digital subscriber line (ADSL) and presented as a candidate for very-high-bit-rate digital subscriber line (VDSL); orthogonal frequency division multiplexing (OFDM) is proposed for wireless local area applications, e.g. HiperLAN. DMT schemes divide the bandwidth into parallel subbands or tones. The incoming bitstream is split into parallel streams that are used to QAM-modulate the different tones. The modulation is done by means of an inverse fast Fourier transform (IFFT). Before transmission of a DMT symbol, a cyclic prefix of v samples is added. If the channel impulse response order is less than or equal to the cyclic prefix length v, demodulation can be implemented by means of an FFT, followed by a (complex) 1-tap frequency-domain equalizer (FEQ) per tone to compensate for channel amplitude and phase effects.

Building a classifier that can distinguish between high-dimensional members of various classes based on their shape differences involves devising a reliable dissimilarity measure that can perform shape-based comparisons of very high-dimensional signals.
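Returning to the DMT scheme above, the role of the cyclic prefix can be illustrated with a small NumPy sketch (the FFT size, prefix length, and channel taps below are arbitrary illustrative values): with a prefix at least as long as the channel order, the dispersive channel acts as a circular convolution on each symbol, so a single complex tap per tone inverts it.

```python
import numpy as np

N, v = 8, 3                          # tones (FFT size), cyclic prefix length
h = np.array([1.0, 0.5, 0.25])       # channel impulse response, order 2 <= v

rng = np.random.default_rng(0)
X = np.exp(2j * np.pi * rng.integers(0, 4, N) / 4)   # one QPSK symbol per tone

x = np.fft.ifft(X)                   # IFFT modulation
tx = np.concatenate([x[-v:], x])     # prepend cyclic prefix (last v samples)

rx = np.convolve(tx, h)              # linear (dispersive) channel
y = rx[v:v + N]                      # receiver drops the prefix

Y = np.fft.fft(y)                    # FFT demodulation
H = np.fft.fft(h, N)                 # channel frequency response
X_hat = Y / H                        # 1-tap FEQ per tone

assert np.allclose(X_hat, X)         # tones recovered exactly
```

Because the prefixed block makes the linear convolution circular over the last N samples, the FFT diagonalizes the channel, which is exactly why the per-tone FEQ suffices.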
Further challenges include the automation of the feature selection process, to minimize human intervention and reliance on domain knowledge, and finally a robust prototype-based classifier that can detect outliers in test data. In order to achieve the above objectives, a matching-pursuits dissimilarity measure is presented. The EDDM extends the well-known signal approximation technique Equivalent Detection (ED) for signal comparison purposes [1]. ED is a greedy algorithm that approximates a signal x as a linear combination of signals from a pre-defined dictionary. ED is commonly used for signal representation and compression, particularly image and video compression [5, 6]. The dictionary and coefficient information produced by the ED algorithm has been previously used in some classification applications; however, most of these applications rest on underlying assumptions about the data and the ED dictionary (Section 2.3). The EDDM is the first ED-based comparison measure that does not require any assumptions about the problem domain. It is versatile enough to perform shape-based comparisons of very high-dimensional signals, and it can also be adapted to perform magnitude-based comparisons, similar to the Euclidean distance. Since the EDDM is a differentiable measure, it can be seamlessly used with existing clustering or discrimination algorithms. Therefore, the EDDM may find application in a variety of classification and approximation problems involving very high-dimensional signals, including image and video signals. The experimental results show that the EDDM is more useful than the Euclidean distance for shape-based comparison between signals in high dimensions. The potential usefulness of the EDDM for a variety of problems is demonstrated by devising two important EDDM-based algorithms.
The first algorithm, called CAMP, deals with the prototype-based classification of high-dimensional signals. The second algorithm, called the EK-SVD algorithm, automates the dictionary learning process for the ED approximation of signals. In the CAMP algorithm, the EDDM is used with the Competitive Agglomeration (CA) clustering algorithm of Frigui and Krishnapuram to propose a probabilistic classification model [2]. The CA algorithm is a fuzzy clustering algorithm that learns the optimal number of clusters during training; it therefore eliminates the need to specify the number of clusters manually beforehand. The algorithm is named CAMP as an abbreviation of the CA and ED algorithms.

II. Non-Negative Least Squares Algorithm
Let A be an m × n matrix and b a vector of dimension m. Consider the following feasibility problem:

Ax = b (1)
x ≥ 0 (2)

A straightforward way of solving the above problem through linear programming is by solving the following LP problem:

(LP): min Σ_{j=1}^{n} |s_j| subject to Ax + s = b, x ≥ 0.

Observe that this 1-norm minimization can be carried out very efficiently by the simplex method in most cases. However, there are some constraint matrices for which the simplex method performs a large number of degenerate pivots, not improving the solution for many iterations, leading to poor performance. Our approach to solving the feasibility problem posed by relations (1) and (2) will also be the minimization of a p-norm, but instead of the 1-norm considered in (LP) we will consider the 2-norm, i.e., we will solve the following problem:

(PLS): min ‖s‖₂ subject to Ax + s = b, x ≥ 0.

At first glance, problem (PLS) seems much harder than problem (LP). However, in cases where (LP) is highly degenerate (as pointed out before), it is usually simpler to solve (PLS). E. Barnes et al. [6] showed that the normalized direction obtained by (PLS) is the direction of steepest ascent at π₀ on the dual polyhedron (D).
This suggests that the dual direction obtained by (PLS) may be much better in practice than the one obtained by the linear update in (LP); E. Barnes et al. [6] showed empirically that this is indeed true for some classes of problems. Since (PLS) is a convex program, the KKT conditions are necessary and sufficient for optimality: the vector (x, s) is a solution of (PLS) if and only if there exists a multiplier vector π satisfying them.

The NNLS algorithm starts with a primal feasible solution, i.e., one that is feasible for (PLS), and tries to find a solution for the dual problem (DLS). The NNLS algorithm is similar to the simplex method in the sense that we keep a subset of the columns of A that forms a primal feasible basis, and we move from one primal feasible basis to another. Unlike in the simplex method, our 'basis' is not required to be square; the only requirement is that it is composed of linearly independent columns. Let B be a basis, i.e., a linearly independent subset of the columns of A. One crucial step of the non-negative least squares algorithm is to solve the following problem:

min ‖Bx − b‖₂

Since the columns of B are linearly independent, the solution is

x = B⁺b, where B⁺ = (BᵗB)⁻¹Bᵗ.

The matrix B⁺ is called the generalized inverse or pseudoinverse. If B is a basis, we say that it is feasible for (PLS) if x = B⁺b > 0.
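The pseudoinverse step and the feasibility test above drive a classical active-set method. The following is a minimal NumPy sketch in the spirit of the Lawson–Hanson NNLS procedure, not the paper's exact Algorithm 1; the function name and loop structure are my illustrative rendering:

```python
import numpy as np

def nnls(A, b, tol=1e-10):
    """Active-set non-negative least squares: min ||Ax - b||_2 s.t. x >= 0."""
    m, n = A.shape
    passive = np.zeros(n, dtype=bool)      # current 'basis' (support) indices
    x = np.zeros(n)
    w = A.T @ (b - A @ x)                  # dual / gradient vector
    while (~passive).any() and w[~passive].max() > tol:
        # Bring in the inactive column most positively correlated with the residual.
        j = int(np.argmax(np.where(~passive, w, -np.inf)))
        passive[j] = True
        while True:
            s = np.zeros(n)
            s[passive] = np.linalg.lstsq(A[:, passive], b, rcond=None)[0]
            if s[passive].min() > 0:       # unconstrained LS on the basis stays feasible
                break
            # Step from x toward s until the first coefficient hits zero.
            blocking = passive & (s <= 0)
            alpha = np.min(x[blocking] / (x[blocking] - s[blocking]))
            x = x + alpha * (s - x)
            passive &= x > tol             # drop columns that reached zero
        x = s
        w = A.T @ (b - A @ x)
    return x
```

For example, `nnls(np.eye(3), np.array([1., -2., 3.]))` clips the infeasible coordinate and returns `[1, 0, 3]`; as in the text, the inner least-squares solve is exactly x = B⁺b on the current basis B.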
Theorem 1. Algorithm 1 terminates with a solution of problem (PLS).

Algorithm 1:
1. Let B be a feasible basis for problem (PLS), and let I_B be the index set of the columns in B.
2. x ← B⁺b; r ← b − Bx.
3. S ← { j ∉ I_B : Aⱼᵗ r > 0 }.
4. If S = ∅, stop: the optimal solution has been found.
5. Choose an entering index j ∈ S and compute the direction d ← B⁺Aⱼ.
6. θ₁ ← min { xᵢ/dᵢ : dᵢ > 0 }.
7. P ← I − BB⁺; λ ← Aⱼᵗr / ‖PAⱼ‖²; θ ← min(θ₁, λ).
8. If θ = λ: set x ← x − θd, drop from I_B every index i with xᵢ = 0, set B ← [B, Aⱼ] and I_B ← I_B ∪ {j}, and return to step 2.
9. Otherwise: remove from I_B the indices attaining the minimum θ₁ = xᵢ/dᵢ, rebuild B from I_B, recompute x ← B⁺b, and return to step 5.

Proof. We show that the algorithm terminates by showing that no basis can be repeated; since the number of bases is finite, the result follows. To prove that no basis can be repeated, we show that if a basis B is updated to B̂, then we must have

min_x ‖B̂x − b‖² < min_x ‖Bx − b‖².

Suppose first that θ = λ ≤ θ₁. Let B be the current basis and Aⱼ the entering column. If x is the current primal solution, then x = (BᵗB)⁻¹Bᵗb
and

min_x ‖B̂x − b‖² = min_{x,t} ‖Bx + tAⱼ − b‖² < min_x ‖Bx − b‖².

Let B̂ be the new basis. Then the solution of the last minimization problem is

x̂ = (BᵗB)⁻¹Bᵗ(b − θAⱼ) = x − θd.

III. Extranious Equivalent Detection
One such greedy algorithm is Orthogonal Matching Pursuit (OMP), put forth by Mallat and his collaborators (see e.g. [47]) and analyzed by Gilbert and Tropp [62]. OMP uses subgaussian measurement matrices to reconstruct sparse signals. If Φ is such a measurement matrix, then Φ*Φ is, in a loose sense, close to the identity. Therefore one would expect the largest coordinate of the observation vector y = Φ*Φx to correspond to a non-zero entry of x. Thus one coordinate of the support of the signal x is estimated. Subtracting that contribution from the observation vector and repeating eventually yields the entire support of the signal x. OMP is quite fast, both in theory and in practice, but its guarantees are not as strong as those of Basis Pursuit. The OMP algorithm can be described as follows:

Orthogonal Matching Pursuit (OMP)
Input: measurement matrix Φ, measurement vector u = Φx, sparsity level s.
Output: index set I ⊂ {1, …, d}.
Procedure:
Initialize: let the index set I = ∅ and the residual r = u. Repeat the following s times:
Identify: select the largest coordinate λ of y = Φ*r in absolute value; break ties lexicographically.
Update: add the coordinate λ to the index set, I ← I ∪ {λ}, and update the estimate and residual: x̂ = argmin_z ‖u − Φ|_I z‖₂; r = u − Φx̂.

Once the support I of the signal x is found, the estimate can be reconstructed as x̂ = Φ⁺_I u, where we define the pseudoinverse by Φ⁺_I := (Φ*_I Φ_I)⁻¹ Φ*_I. The algorithm's simplicity enables a fast runtime. The algorithm iterates s times, and each iteration performs a selection over d elements, a multiplication by Φ*, and the solution of a least squares problem.
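The OMP loop just described can be sketched directly in NumPy. This is an unoptimized rendering (no QR caching); the orthonormal test matrix in the usage note is an illustrative choice under which recovery in s steps is exact:

```python
import numpy as np

def omp(Phi, u, s):
    """Orthogonal Matching Pursuit: estimate an s-sparse x from u = Phi @ x."""
    d = Phi.shape[1]
    I, r = [], u.copy()
    z = np.zeros(0)
    for _ in range(s):
        lam = int(np.argmax(np.abs(Phi.T @ r)))            # Identify step
        if lam not in I:
            I.append(lam)                                  # grow the index set
        z, *_ = np.linalg.lstsq(Phi[:, I], u, rcond=None)  # project u onto span(Phi_I)
        r = u - Phi[:, I] @ z                              # orthogonal residual
    x_hat = np.zeros(d)
    x_hat[I] = z                                           # x_hat = pinv(Phi_I) @ u on I
    return x_hat
```

For instance, with Phi = np.eye(8) and a 2-sparse x, `omp(Phi, Phi @ x, 2)` returns x exactly; for general subgaussian Φ, exact recovery holds with high probability for suitable m.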
The selection can easily be done in O(d) time, and the multiplication by Φ* in the general case takes O(md). When Φ is an unstructured matrix, the cost of solving the least squares problem is O(s²d). However, maintaining a QR factorization of Φ|_I and using the modified Gram–Schmidt algorithm reduces this to O(|I|d) per iteration. Using this method, the overall cost of OMP becomes O(smd). When the measurement matrix Φ is structured with a fast multiply, this can clearly be improved.

IV. Stagewise Extranious Equivalent Detection
An alternative greedy approach, Stagewise Orthogonal Matching Pursuit (StOMP), developed and analyzed by Donoho and his collaborators [23], uses ideas inspired by wireless communications. As in OMP, StOMP utilizes the observation vector y = Φ*u, where u = Φx is the measurement vector. However, instead of simply selecting the largest component of the vector y, it selects all of the coordinates whose values are above a specified threshold. It then solves a least-squares problem to update the residual. The algorithm iterates through only a fixed number of stages and then terminates, whereas OMP requires s iterations, where s is the sparsity level. The pseudo-code for StOMP can be described as follows:

Input: measurement matrix Φ, measurement vector u = Φx.
Output: estimate x̂ of the signal x.
Procedure:
Initialize: let the index set I = ∅, the estimate x̂ = 0, and the residual r = u. Repeat the following until the stopping condition holds:
Identify: using the observation vector y = Φ*r, set J = { j : |y_j| > t_k σ_k }, where σ_k is a formal noise level and t_k is a threshold parameter for iteration k.
Update: add the set J to the index set, I ← I ∪ J, and update the estimate and residual: x̂|_I = (Φ*_I Φ_I)⁻¹ Φ*_I u, r = u − Φx̂.

The thresholding strategy is designed so that many terms enter at each stage, and so that the algorithm halts after a fixed number of iterations.
The formal noise level σ_k is proportional to the Euclidean norm of the residual at that iteration.
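The identification stage, with the formal noise level taken proportional to ‖r‖₂ (here ‖r‖₂/√m, a common choice), can be written as a short function; the function name and the test values in the usage note are illustrative:

```python
import numpy as np

def stomp_identify(Phi, r, t_k):
    """One StOMP identification stage: J = { j : |y_j| > t_k * sigma_k }."""
    y = Phi.T @ r                                         # matched-filter output
    sigma_k = np.linalg.norm(r) / np.sqrt(Phi.shape[0])   # formal noise level
    return np.flatnonzero(np.abs(y) > t_k * sigma_k)
```

For example, with Phi = np.eye(4), r = [3, 0.1, 0, 2], and t_k = 1, the threshold is about 1.8, so coordinates 0 and 3 enter together in a single stage, in contrast to OMP's one-at-a-time selection.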
These ideas trace back to group testing, e.g. testing for the syphilis antigen in a blood sample. Since this test was expensive, the method was to pool the blood samples of a group of men and test the entire pool. If the pool did not contain the antigen, then one test replaced many; if it did, the process could either be repeated with subgroups, or each individual in the group could be tested. The sublinear algorithms in compressed sensing use this same idea to test for elements of the support of the signal x. Chaining Pursuit, for example, uses a measurement matrix consisting of a row tensor product of a bit-test matrix and an isolation matrix, both of which are 0-1 matrices. Chaining Pursuit first uses bit tests to locate the positions of the large components of the signal x and estimate those values. Then the algorithm retains a portion of the coordinates that are largest in magnitude and repeats. In the end, those coordinates which appeared throughout a large portion of the iterations are kept, and the signal is estimated using these. Pseudo-code is available in [3], where the following result is proved.

Theorem (Chaining Pursuit [31]). With probability at least 1 − O(d⁻³), the O(s log² d) × d random measurement operator Φ has the following property. For x ∈ Rᵈ and its measurements u = Φx, the Chaining Pursuit algorithm produces a signal x̂ with at most s nonzero entries satisfying

‖x − x̂‖₁ ≤ C(1 + log s)‖x − x_s‖₁.

The time cost of the algorithm is O(s log² s log² d).

HHS Pursuit, a similar algorithm with improved guarantees, uses a measurement matrix that again consists of two parts. The first part is an identification matrix, and the second is an estimation matrix. As the names suggest, the identification matrix is used to identify the locations of the large components of the signal, whereas the estimation matrix is used to estimate the values at those locations.
Each of these matrices consists of smaller parts, some deterministic and some random. Using this measurement matrix to locate large components and estimate their values, HHS Pursuit then adds the new estimate to the previous one and prunes it relative to the sparsity level. This estimate is itself then sampled, and the residual of the signal is updated. Let x ∈ Rᵈ and let u = Φx be the measurement vector. The HHS Pursuit algorithm produces a signal approximation x̂ with O(s/ε²) nonzero entries satisfying

‖x − x̂‖₂ ≤ (ε/√s) ‖x − x_s‖₁,

where again x_s denotes the vector consisting of the s largest entries in magnitude of x. The number of measurements m is proportional to (s/ε²) polylog(d/ε), and HHS Pursuit runs in time (s²/ε⁴) polylog(d/ε). The algorithm uses working space (s/ε²) polylog(d/ε), including storage of the matrix Φ.

There are other algorithms, such as the Sudocodes algorithm, that as of now only work in the noiseless, strictly sparse case. However, these are still interesting because of their simplicity. The Sudocodes algorithm is a simple two-phase algorithm. In the first phase, an easily implemented avalanche bit-testing scheme is applied iteratively to recover most of the coordinates of the signal x. At this point, it remains to reconstruct an extremely low-dimensional signal (one whose coordinates are only those that remain). In the second phase, this part of the signal is reconstructed, which completes the reconstruction. Since the recovery is two-phase, the measurement matrix is as well: for the first phase it must contain a sparse submatrix, one consisting of many zeros and few ones in each row, and for the second phase it also contains a matrix whose small submatrices are invertible. A corresponding recovery result holds for strictly sparse signals. Combinatorial algorithms such as HHS Pursuit provide sublinear-time recovery with optimal error bounds and an optimal number of measurements.
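The pooling/bit-test idea shared by these combinatorial algorithms can be illustrated for a 1-sparse signal (the dimensions and values below are made up): each pooled test reads off one bit of the binary expansion of the support location, so log₂(d) pooled tests replace d individual ones.

```python
import numpy as np

d, bits = 16, 4
# Row i of the bit-test matrix pools every coordinate whose index has bit i set.
B = np.array([[(j >> i) & 1 for j in range(d)] for i in range(bits)])

x = np.zeros(d)
x[13] = 2.5                     # 1-sparse signal with unknown support location
tests = B @ x                   # 4 pooled measurements instead of 16 individual ones

# The pattern of non-zero tests spells out the support index in binary.
idx = sum(1 << i for i in range(bits) if tests[i] != 0)
value = x.sum()                 # one extra all-ones pooled test recovers the value
```

Here the non-zero tests are rows 0, 2, and 3, giving idx = 1 + 4 + 8 = 13; full schemes such as Chaining Pursuit extend this to s-sparse signals by randomly isolating the large components before bit testing.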
Some of these are straightforward and easy to implement, while others require complicated structures. The major disadvantage, however, is the structural requirement on the measurement matrices. Not only do these methods work with only one particular kind of measurement matrix, but that matrix is highly structured, which limits its use in practice. There are no known sublinear methods in compressed sensing that allow for unstructured or generic measurement matrices.

V. Outputs
Fig. 1: Computation time for fixed N = 256 and K = 24 & 32
Fig. 2: Average exact recovery and computation time (sec)
Fig. 3: Computation time for fixed M = 128 and K = 64 & 96
Fig. 4: Percentage of recovered signals
Fig. 5: Sparse AOMP data

VI. Conclusion
A matching-pursuits dissimilarity measure has been presented which is capable of performing accurate shape-based comparisons between high-dimensional data. It extends the matching-pursuits signal approximation technique and uses its dictionary and coefficient information to compare two signals. AOMP is capable of performing shape-based comparisons of very high-dimensional data, and it can also be adapted to perform magnitude-based comparisons, similar to the Euclidean distance. Since AOMP is a differentiable measure, it can be seamlessly integrated with existing clustering or discrimination algorithms. Therefore, AOMP may find application in a variety of classification and approximation problems involving very high-dimensional data. The AOMP is used to develop an automated dictionary learning algorithm for MP approximation of signals, called Enhanced K-SVD. The EK-SVD algorithm uses the AOMP and the CA clustering algorithm to learn the required number of dictionary elements during training. Under-utilized and replicated dictionary elements are gradually pruned to produce a compact dictionary without compromising its approximation capabilities. The experimental results show that the size of the dictionary learned by our method is 60% smaller, with the same approximation capabilities as existing dictionary learning algorithms. The AOMP is also used with the competitive agglomeration fuzzy clustering algorithm to build a prototype-based classifier called CAMP. The CAMP algorithm builds robust shape-based prototypes for each class and assigns a confidence to a test pattern based on its dissimilarity to the prototypes of all classes. If a test pattern is different from all the prototypes, it is assigned a low confidence value.
Our experimental results show that the CAMP algorithm is able to identify outliers in the given test data better than discrimination-based classifiers, such as multilayer perceptrons and support vector machines. We also presented a new greedy technique based on OMP, suitable for non-negative sparse representation, which is much faster than the state-of-the-art algorithm. The new algorithm has a slightly different atom selection procedure, which guarantees the non-negativity of the signal approximations. Although the selection step is more involved, the overall algorithm has a much faster implementation, because with the new selection procedure we can use a fast QR implementation of the OMP. The computational complexities of the two NNOMP variants were derived and their differences demonstrated.

References
[1] S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397–3415, 1993.
[2] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[3] H. Abut, R. Gray, and G. Rebolledo, "Vector quantization of speech and speech-like waveforms," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 30, no. 3, pp. 423–435, Jun. 1982.
[4] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, March 2004.
[5] K. Wang, C.-H. Lee, and B.-H. Juang, "Maximum likelihood learning of auditory feature maps for stationary vowels," in Proc. Fourth International Conference on Spoken Language Processing (ICSLP 96), vol. 2, pp. 1265–1268, Oct. 1996.
[6] G. Z. Karabulut, L. Moura, D. Panario, and A. Yongacoglu, "Integrating flexible tree searches to orthogonal matching pursuit algorithm," IEE Proceedings - Vision, Image and Signal Processing, vol. 153, no. 5, pp. 538–548, Oct. 2006.
[7] F. Bergeaud and S. Mallat, "Matching pursuit of images," in Proc. ICIP, 1995, pp. 53–56.
[8] P. K. Bharadwaj, P. R. Runkle, and L. Carin, "Target identification with wave-based matched pursuits and hidden Markov models," IEEE Transactions on Antennas and Propagation, vol. 47, no. 10, pp. 1543–1554, Oct. 1999.