Ali Pezeshki - Walter Scott, Jr. College of Engineering

Ali Pezeshki
Electrical and Computer Engineering
Colorado State University, Fort Collins, CO 80523
Phone: (970) 491-3242, Fax: (970) 491-2249

Email: [email protected]

EDUCATION:

Ph.D. in Electrical Engineering, Colorado State University, Ft. Collins, CO, Dec. 2004
Advisors: Mahmood Azimi and Louis Scharf

M.Sc. in Electrical Engineering, University of Tehran, Tehran, Iran Aug. 2001

B.Sc. in Electrical Engineering, University of Tehran, Tehran, Iran May 1999

POSITIONS:

Associate Professor, Colorado State University, Electrical and Computer Engineering, Jul. 2014–present

Associate Professor (Courtesy Appointment), Colorado State University, Mathematics, Jul. 2014–present

Assistant Professor, Colorado State University, Electrical and Computer Engineering, Aug. 2008–Jun. 2014

Assistant Professor (Courtesy Appointment), Colorado State University, Mathematics, Nov. 2012–Jun. 2014

Postdoctoral Research Associate, Princeton University, Applied and Computational Mathematics, Jan. 2006–Aug. 2008
Advisor: Robert Calderbank

Postdoctoral Researcher, Colorado State University, Electrical and Computer Engineering, Feb. 2005–Jan. 2006
Advisor: Louis Scharf

Graduate Research Assistant, Colorado State University, Electrical and Computer Engineering, Aug. 2001–Dec. 2004
Advisors: Mahmood Azimi and Louis Scharf

RESEARCH INTERESTS:
Statistical signal processing, machine learning, optimization theory, coding theory, geometry, and harmonic analysis (see research highlights at the end of CV).

HONORS AND AWARDS:

ECE Excellence in Teaching Award, Colorado State University, Apr. 2019.

Member of the team that received the Provost’s N. Preston Davis Instructional Innovation Award, Colorado State University, Mar. 2018.

George T. Abell Award for Outstanding Teaching and Service, Colorado State University, Nov. 2017.

Co-author of the 2013 IEEE Signal Processing Society Young Author Best Paper Award [for the paper “Sensitivity to Basis Mismatch in Compressed Sensing” (co-authored with Y. Chi, L. L. Scharf, and R. Calderbank), published in the May 2011 issue of IEEE Transactions on Signal Processing].

PUBLICATIONS:

Book chapters: 3; Journal papers: 30; Conference papers/presentations: 78
Citations: 1918; h-index: 19; i10-index: 46 (Source: Google Scholar; retrieved Aug. 22, 2019)

Book Chapters:

1. A. Pezeshki, Y. Chi, L. L. Scharf, and E. K. P. Chong, “Compressed sensing, sparse inversion, and model mismatch,” in Compressed Sensing and Its Applications, H. Boche, R. Calderbank, G. Kutyniok, and J. Vybiral, Eds., Applied and Numerical Harmonic Analysis, New York, NY: Springer, ISBN: 978-3-319-16041-2 (Print), 978-3-319-16042-9 (Online), 2015, ch. 3, pp. 75-95.

2. W. U. Bajwa and A. Pezeshki, “Finite frames for sparse signal processing,” in Finite Frames: Theory and Applications, P. Casazza and G. Kutyniok, Eds., Birkhauser, 2012, ch. 9, pp. 303-336.

3. Y. Chi, A. Pezeshki, and A. R. Calderbank, “Complementary waveforms for sidelobe suppression and radar polarimetry,” in Principles of Waveform Diversity and Design, M. Wicks, E. Mokole, S. Blunt, R. Schneible, and V. Amuso, Eds., SciTech Publishing, Inc., 2011, ch. 53, pp. 828-843.

Journal Papers:

1. W. Dang, A. Pezeshki, S. D. Howard, W. Moran, and R. Calderbank, “Coordinating complementary waveforms for suppressing range sidelobes in a Doppler band,” IEEE Transactions on Signal Processing, submitted Aug. 2019.

2. Y. Liu, E. K. P. Chong, and A. Pezeshki, “Submodular optimization problems and greedy strategies: A survey,” Discrete Event Dynamic Systems, submitted May 2019.

3. Y. Liu, E. K. P. Chong, and A. Pezeshki, “Improved bounds for greedy strategy in optimization problems with curvature,” J. Combinatorial Optimization, vol. 37, no. 4, pp. 1126-1149, May 2019.

4. Y. Liu, E. K. P. Chong, and A. Pezeshki, “Performance bounds for Nash equilibria in submodular utility systems with user groups,” J. Control and Decision, vol. 5, no. 1, pp. 1-18, 2018.

5. Y. Liu, Z. Zhang, E. K. P. Chong, and A. Pezeshki, “Performance bounds with curvature for batched greedy optimization,” J. Optimization Theory and Applications, vol. 177, no. 2, pp. 535-562, May 2018.

6. A. A. Maciejewski, T. W. Chen, Z. S. Byrne, M. A. de Miranda, L. B. Sample-McMeeking, B. M. Notaros, A. Pezeshki, S. Roy, A. M. Leland, M. D. Reese, A. H. Rosales, T. J. Siller, R. F. Toftness, and O. Notaros, “A holistic approach to transforming undergraduate electrical engineering education,” IEEE Access, vol. 5, pp. 8148-8161, Mar. 2017.

7. Z. Zhang, Y. Wang, E. K. P. Chong, A. Pezeshki, and L. L. Scharf, “Subspace selection for projection maximization with matroid constraints,” IEEE Trans. Signal Processing, vol. 65, no. 5, pp. 1339-1351, Mar. 2017.

8. Z. Zhang, E. K. P. Chong, A. Pezeshki, and W. Moran, “Near-optimal distributed detection in balanced binary relay trees,” IEEE Trans. Control of Network Systems, vol. 4, no. 4, pp. 826-837, Dec. 2017.

9. P. Pakrooh, L. L. Scharf, and A. Pezeshki, “Threshold effects in parameter estimation from compressed data,” IEEE Trans. Signal Processing, vol. 64, no. 9, pp. 2345-2354, May 2016.

10. P. Pakrooh, L. L. Scharf, and A. Pezeshki, “Modal analysis using co-prime arrays,” IEEE Trans. Signal Processing, vol. 64, no. 9, pp. 2429-2442, May 2016.

11. Z. Zhang, E. K. P. Chong, A. Pezeshki, and W. Moran, “String submodular functions with curvature constraints,” IEEE Trans. Automatic Control, vol. 61, no. 3, pp. 601-616, Mar. 2016.

12. P. Pakrooh, A. Pezeshki, L. L. Scharf, D. Cochran, and S. D. Howard, “Analysis of Fisher Information and the Cramer-Rao bound for nonlinear parameter estimation after random compression,” IEEE Trans. Signal Processing, vol. 63, no. 23, pp. 6423-6428, Dec. 2015.

13. V. Bandara, A. Pezeshki, and A. Jayasumana, “A spatiotemporal model for internet traffic anomalies,” IET Networks, special issue on Teletraffic Engineering in Communications Systems, vol. 3, no. 1, pp. 41-53, Mar. 2014.

14. Z. Zhang, E. K. P. Chong, A. Pezeshki, and W. Moran, “Hypothesis testing in feedforward networks with broadcast failures,” IEEE J. Selected Topics in Signal Processing, special issue on Learning-Based Decision Making in Dynamic Systems under Uncertainty, vol. 7, no. 5, pp. 797-810, Oct. 2013.

15. R. Zahedi, L. Krakow, A. Pezeshki, and E. K. P. Chong, “Adaptive estimation of time-varying sparse signals,” IEEE Access, vol. 1, pp. 449-464, Jul. 2013.

16. Z. Zhang, E. K. P. Chong, A. Pezeshki, W. Moran, and S. D. Howard, “Detection performance for balanced binary relay trees with node and link failures,” IEEE Trans. Signal Processing, vol. 61, no. 9, pp. 2165-2177, May 2013.

17. Z. Zhang, E. K. P. Chong, A. Pezeshki, W. Moran, and S. D. Howard, “Learning in hierarchical social networks,” IEEE J. Selected Topics in Signal Processing, special issue on Adaptation and Learning over Complex Networks, vol. 7, no. 2, pp. 305-317, April 2013.

18. Z. Zhang, A. Pezeshki, W. Moran, S. D. Howard, and E. K. P. Chong, “Error probability bounds for balanced binary relay trees,” IEEE Trans. Information Theory, vol. 58, no. 6, pp. 3548-3563, Jun. 2012.

19. R. Zahedi, A. Pezeshki, and E. K. P. Chong, “Measurement design for detecting sparse signals,” Physical Communication, special issue on Compressive Sensing in Communications, vol. 5, no. 2, pp. 64-75, Jun. 2012.

20. T. R. Qureshi, M. D. Zoltowski, R. Calderbank, and A. Pezeshki, “Unitary design of radar waveform diversity sets,” Digital Signal Processing, vol. 21, no. 5, pp. 552-567, Sep. 2011.

21. A. R. Calderbank, P. G. Casazza, G. Kutyniok, A. Heinecke, and A. Pezeshki, “Sparse fusion frames: Existence and construction,” Advances in Computational Mathematics, vol. 35, no. 1, pp. 1-31, Jul. 2011.

22. Y. Chi, L. L. Scharf, A. Pezeshki, and R. Calderbank, “Sensitivity to basis mismatch in compressed sensing,” IEEE Trans. Signal Processing, vol. 59, no. 5, pp. 2182-2195, May 2011 (2013 IEEE SPS Young Author Best Paper Award).

23. V. Bandara, A. P. Jayasumana, A. Pezeshki, T. H. Illangasekare, and K. Barnhart, “Subsurface plume tracking using sparse wireless sensor networks,” Electronic Journal of Structural Engineering (EJSE), special issue on Wireless Sensor Networks and Practical Applications, pp. 1-11, 2010.

24. A. Pezeshki, L. L. Scharf, and E. K. P. Chong, “The geometry of linearly and quadratically constrained optimization problems for signal processing and communications,” Journal of The Franklin Institute, special issue on Modeling and Simulation in Advanced Communications, vol. 347, no. 5, pp. 818-835, Jun. 2010.

25. G. Kutyniok, A. Pezeshki, R. Calderbank, and T. Liu, “Robust dimension reduction, fusion frames and Grassmannian packings,” Applied and Computational Harmonic Analysis, vol. 26, pp. 64-76, 2009.

26. M. R. Azimi-Sadjadi, N. Roseveare, and A. Pezeshki, “Wideband DOA estimation algorithms for multiple moving sources using unattended acoustic sensors,” IEEE Trans. Aerospace Electron. Syst., vol. 44, no. 4, pp. 1585-1599, Oct. 2008.

27. A. Pezeshki, A. R. Calderbank, W. Moran, and S. D. Howard, “Doppler resilient Golay complementary waveforms,” IEEE Trans. Information Theory, vol. 54, no. 9, pp. 4254-4266, Sep. 2008.

28. A. Pezeshki, B. D. Van Veen, L. L. Scharf, H. Cox, and M. Lundberg, “Eigenvalue beamforming using a multi-rank MVDR beamformer and subspace selection,” IEEE Trans. Signal Processing, vol. 56, no. 5, pp. 1954-1967, May 2008.

29. A. Pezeshki, M. R. Azimi-Sadjadi, and L. L. Scharf, “Undersea target classification using canonical correlation analysis,” IEEE J. Oceanic Engineering, vol. 32, no. 4, pp. 948-955, Oct. 2007.

30. A. Pezeshki, L. L. Scharf, J. K. Thomas, and B. D. Van Veen, “Canonical coordinates are the right coordinates for low-rank Gauss-Gauss detection and estimation,” IEEE Trans. Signal Processing, vol. 54, no. 12, pp. 4817-4820, Dec. 2006.

31. A. Pezeshki, L. L. Scharf, M. R. Azimi-Sadjadi, and Y. Hua, “Two-channel constrained least squares problems: Solutions using power methods and connections with canonical coordinates,” IEEE Trans. Signal Processing, vol. 53, no. 1, pp. 121-135, Jan. 2005.

32. A. Pezeshki, M. R. Azimi-Sadjadi, and L. L. Scharf, “A network for recursive extraction of canonical coordinates,” Neural Networks, vol. 16, no. 5-6, pp. 801-808, Jul. 2003.

Conference Papers/Presentations:

1. A. Pezeshki, M. R. Azimi-Sadjadi, and C. Robbiano, “A multiple kernel machine with in-situ learning using sparse representation,” in Proc. International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, Jul. 14-19, 2019.

2. R. Stokoe, P. Stockton, A. Pezeshki, and R. Bartels, “General theoretical analysis of noise in single-pixel imaging,” Frontiers in Optics: Computational/Transformation Optics and Optics in Computing, Washington DC, Sept. 16-20, 2018.

3. Y. Liu, E. K. P. Chong, and A. Pezeshki, “Extending polymatroid set functions with curvature and bounding the greedy strategy,” in Proc. IEEE Statistical Signal Processing Workshop, Freiburg, Germany, Jun. 10-13, 2018 (invited paper).

4. S. Roy, A. Pezeshki, B. M. Notaros, T. Chen, T. J. Siller, and A. A. Maciejewski, “Active learning model as a way to prepare students for knowledge integration,” in Proc. 125th American Society for Engineering Education (ASEE) Annual Conference and Exposition, June 2018.

5. A. A. Maciejewski, T. W. Chen, Z. S. Byrne, M. D. Reese, B. M. Notaros, A. Pezeshki, S. Roy, A. M. Leland, L. B. Sample McMeeking, and T. J. Siller, “Throwing Away the Course-centric Teaching Model to Enable Change,” in Proc. 125th American Society for Engineering Education (ASEE) Annual Conference and Exposition, June 2018.

6. Y. Liu, A. Pezeshki, S. Roy, B. M. Notaros, T. Chen, and A. A. Maciejewski, “Why math matters: Demonstrating the relevance of mathematics in ECE education,” in Proc. 2017 ASEE Annual Conference & Exposition, Columbus, OH, Jun. 25-28, 2017.

7. T. Chen, B. M. Notaros, A. Pezeshki, S. Roy, and A. A. Maciejewski, “WIP: Knowledge integration to understand why,” in Proc. 2017 ASEE Annual Conference & Exposition, Columbus, OH, Jun. 25-28, 2017.

8. D. P. Guralnik, B. Moran, A. Pezeshki, and O. Arslan, “Detecting poisoning attacks on hierarchical malware classification systems,” in Proc. SPIE 10185, Cyber Sensing 2017, 101850E, Anaheim, CA, Apr. 9-13, 2017.

9. Y. Li, A. Pezeshki, L. L. Scharf, and Y. Chi, “Performance bounds for modal analysis using sparse linear arrays,” in Proc. SPIE: Compressive Sensing VI: From Diverse Modalities to Big Data Analytics, Anaheim, CA, Apr. 9-13, 2017.

10. Y. Liu, E. K. P. Chong, and A. Pezeshki, “Performance bounds for the k-batch greedy strategy in optimization problems with curvatures,” in Proc. American Control Conference, Boston, MA, Jul. 6-8, 2016, pp. 7177-7182.

11. T. Chen, A. A. Maciejewski, B. M. Notaros, A. Pezeshki, and M. D. Reese, “Mastering the core competencies of electrical engineering through knowledge integration,” in Proc. 2016 ASEE Annual Conference & Exposition, New Orleans, LA, Jun. 26-29, 2016. DOI: 10.18260/p.25683.

12. Y. Liu, E. K. P. Chong, and A. Pezeshki, “Bounding the greedy strategy in finite-horizon string optimization,” in Proc. 54th IEEE Conference on Decision and Control, Osaka, Japan, Dec. 15-18, 2015, pp. 3900-3905.

13. P. Pakrooh, A. Pezeshki, L. L. Scharf, D. Cochran, and S. D. Howard, “Distribution of the Fisher information loss due to random compressed sensing,” in Conf. Rec. 49th Annual Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 8-11, 2015.

14. P. Pakrooh, L. L. Scharf, and A. Pezeshki, “Performance breakdown in parameter estimation using co-prime arrays,” in Conf. Rec. 49th Annual Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 8-11, 2015.

15. C. Kleinkort, G.-J. Huang, E. Chobanyan, A. Manic, M. Ilic, A. Pezeshki, V. N. Bringi, and B. Notaros, “Visual hull method based shape reconstruction of snowflakes from MASC photographs,” in Proc. IEEE International Symposium on Antennas and Propagation and North American Radio Science Meeting, Vancouver, British Columbia, Canada, July 19-25, 2015.

16. Y. Liu, E. K. P. Chong, A. Pezeshki, and W. Moran, “Bounds for approximate dynamic programming based on string submodularity and curvature,” in Proc. 53rd IEEE Conference on Decision and Control (CDC), Los Angeles, CA, Dec. 15-17, 2014, pp. 6653-6658.

17. W. Dang, A. Pezeshki, and R. Bartels, “Analysis of misfocus effects in compressive optical imaging,” in Conf. Rec. 48th Annual Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 2-5, 2014 (invited paper).

18. P. Pakrooh, A. Pezeshki, and L. L. Scharf, “Characterization of orthogonal subspaces for alias-free reconstruction of damped complex exponential modes in sparse arrays,” in Conf. Rec. 48th Annual Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 2-5, 2014 (invited paper).

19. P. Pakrooh, A. Pezeshki, and L. L. Scharf, “Threshold effects in parameter estimation from compressed data,” in Proc. 1st IEEE Global Conference on Signal and Information Processing, Austin, TX, Dec. 3-5, 2013 (invited paper).

20. L. Krakow, R. Zahedi, E. K. P. Chong, and A. Pezeshki, “Adaptive compressive sensing in the presence of noise and erasure,” in Proc. 1st IEEE Global Conference on Signal and Information Processing, Austin, TX, Dec. 3-5, 2013 (invited paper).

21. Z. Zhang, E. K. P. Chong, A. Pezeshki, and W. Moran, “Near optimality of greedy strategies for string submodular functions with forward and backward curvature constraints,” in Proc. 52nd IEEE Conference on Decision and Control (CDC13), Firenze, Italy, Dec. 10-13, 2013.

22. Z. Zhang, E. K. P. Chong, and A. Pezeshki, “Learning rates in social networks,” presented at the Joint CSS Kyoto University Workshop on Systems and Control 2013, Clock Tower Centennial Hall, Kyoto University, Kyoto, Japan, May 9, 2013 (invited paper).

23. Z. Zhang, E. K. P. Chong, A. Pezeshki, and W. Moran, “Asymptotic learning in feedforward networks with binary symmetric channels,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Vancouver, BC, May 26-31, 2013.

24. P. Pakrooh, L. L. Scharf, A. Pezeshki, and Y. Chi, “Analysis of Fisher Information and the Cramer-Rao bound for nonlinear parameter estimation after compressed sensing,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Vancouver, BC, May 26-31, 2013.

25. D. C. Dhanapala, V. W. Bandara, A. Pezeshki, and A. P. Jayasumana, “Phenomena discovery in WSNs: A compressive sensing based approach,” in Proc. IEEE Int. Conf. Comm., Budapest, Hungary, Jun. 9-13, 2013.

26. R. Zahedi, L. W. Krakow, E. K. P. Chong, and A. Pezeshki, “Adaptive compressive measurement design using approximate dynamic programming,” in Proc. American Control Conference, Washington DC, Jun. 17-19, 2013, pp. 2448-2453.

27. Z. Zhang, E. K. P. Chong, A. Pezeshki, W. Moran, and S. D. Howard, “Submodularity and optimality of fusion rules in balanced binary relay trees,” in Proc. 51st IEEE Conf. on Decision and Control, Maui, Hawaii, Dec. 10-13, 2012, pp. 3802-3807.

28. Z. Zhang, E. K. P. Chong, A. Pezeshki, W. Moran, and S. D. Howard, “Rate of learning in hierarchical social networks,” in Proc. 50th Annual Allerton Conference on Communication, Control and Computing, Urbana-Champaign, IL, Oct. 1-5, 2012.

29. W. Dang, A. Pezeshki, S. D. Howard, and W. Moran, “Coordinating complementary waveforms across time and frequency,” in Proc. IEEE Statistical Signal Processing Workshop, Ann Arbor, MI, Aug. 2012.

30. Z. Zhang, E. K. P. Chong, A. Pezeshki, S. D. Howard, and W. Moran, “Detection performance of M-ary relay trees with non-binary message alphabets,” in Proc. IEEE Statistical Signal Processing Workshop, Ann Arbor, MI, Aug. 2012, pp. 796-799.

31. Z. Zhang, E. K. P. Chong, and A. Pezeshki, “Rate of Bayesian learning in hierarchical social networks,” presented at the 2012 North American School of Information Theory, Cornell University, Ithaca, NY, June 19-22, 2012.

32. Z. Zhang, E. K. P. Chong, and A. Pezeshki, “Convergence rate of Bayesian learning in social networks,” presented at the Symposium on Network Science in Biological, Social, and Geographic Systems, University of Wyoming, Laramie, Wyoming, April 21, 2012.

33. R. Zahedi, L. W. Krakow, E. K. P. Chong, and A. Pezeshki, “Adaptive compressive sampling using partially observable Markov decision processes,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Kyoto, Japan, Mar. 25-30, 2012, pp. 5265-5272 (invited paper).

34. W. Dang, D. G. Winters, D. Higley, A. Pezeshki, and R. A. Bartels, “High-speed single-pixel line-scan imaging with a time sequence of intensity masks reconstructed through compressed sensing,” presented at SPIE Photonics West, San Francisco, CA, Jan. 21-26, 2012.

35. Z. Zhang, A. Pezeshki, W. Moran, S. D. Howard, and E. K. P. Chong, “Error probability bounds for balanced binary relay trees,” in Proc. 50th IEEE Conf. Decision and Control (CDC), Orlando, FL, Dec. 12-15, 2011.

36. W. Dang, A. Pezeshki, S. D. Howard, W. Moran, and R. Calderbank, “Coordinating complementary waveforms for sidelobe suppression,” in Conf. Rec. Forty-fifth Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 6-9, 2011.

37. Z. Zhang, A. Pezeshki, W. Moran, S. D. Howard, and E. K. P. Chong, “Error probability bounds for binary relay trees with unreliable communications,” in Conf. Rec. Forty-fifth Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 6-9, 2011.

38. L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. Luo, “Sensitivity considerations in compressed sensing,” in Conf. Rec. Forty-fifth Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 6-9, 2011.

39. R. A. Bartels, D. Winters, D. Kupka, W. Dang, and A. Pezeshki, “Extracting information from optical fields through spatial and temporal modulation,” Frontiers in Optics, San Jose, California, Oct. 16-20, 2011.

40. Z. Zhang, A. Pezeshki, W. Moran, S. D. Howard, and E. K. P. Chong, “Error probability bounds for binary relay trees with crummy sensors,” in Proc. 2011 Workshop on Defense Applications of Signal Processing (DASP11), The Hyatt Coolum Resort, Coolum, Queensland, Australia, July 10-14, 2011 (invited paper).

41. L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. Luo, “Compressive sensing and sparse inversion in signal processing: Cautionary notes,” in Proc. 2011 Workshop on Defense Applications of Signal Processing (DASP11), The Hyatt Coolum Resort, Coolum, Queensland, Australia, July 10-14, 2011 (invited paper).

42. A. Pezeshki, W. Dang, and R. Bartels, “Mask design for high-resolution compressive optical imaging,” presented at SPIE Wavelets and Sparsity XIV, San Diego, CA, Aug. 21-25, 2011.

43. Z. Zhang, A. Pezeshki, W. Moran, S. Howard, and E. K. P. Chong, “Performance analysis of fusion trees,” presented at the 1st Southwest Workshop on Theory and Applications of Cyber-Physical Systems, Tucson, Arizona, March 10-11, 2011.

44. V. Bandara, A. Pezeshki, and A. Jayasumana, “Modeling spatial and temporal behavior of Internet traffic anomalies,” in Proc. 35th Annual IEEE Conference on Local Computer Networks and Workshops (LCN), Denver, CO, Oct. 11-14, 2010.

45. R. Zahedi, A. Pezeshki, and E. K. P. Chong, “Robust measurement design for detecting sparse signals: Equiangular uniform tight frames and Grassmannian packings,” in Proc. American Control Conference (ACC), Baltimore, MD, Jun. 30-Jul. 2, 2010.

46. Y. Chi, A. Pezeshki, L. L. Scharf, and R. Calderbank, “Sensitivity to basis mismatch of compressed sensing,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Dallas, TX, Mar. 14-19, 2010.

47. Y. Chi, R. Calderbank, and A. Pezeshki, “Golay complementary waveforms for sparse Delay-Doppler radar imaging,” in Proc. Third International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Aruba, Dutch Antilles, Dec. 13-16, 2009.

48. Y. I. Abramovich, B. A. Johnson, L. L. Scharf, A. Pezeshki, and N. K. Spencer, “Expected likelihood-based detection-estimation of multirank signals,” in Conf. Rec. Forty-Third Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 7-11, 2009.

49. Y. Chi, L. L. Scharf, A. Pezeshki, and R. Calderbank, “The sensitivity to basis mismatch of compressed sensing for spectrum analysis and beamforming,” in Proc. 6th Workshop on Defence Applications of Signal Processing (DASP), Lihue, HI, Sep. 26-30, 2009.

50. A. R. Calderbank, P. G. Casazza, G. Kutyniok, A. Heinecke, and A. Pezeshki, “Constructing fusion frames with desired parameters,” in SPIE Wavelets XIII, San Diego, CA, Aug. 2-6, 2009.

51. B. G. Bodmann, G. Kutyniok, and A. Pezeshki, “Erasure-proof coding with fusion frames,” in Proc. 8th International Conference on Sampling Theory and Applications (SampTA), Marseille, France, May 18-22, 2009.

52. A. Pezeshki, R. Calderbank, and L. L. Scharf, “Sidelobe suppression in a desired range/Doppler interval,” in Proc. IEEE Radar Conference, Pasadena, CA, May 4-8, 2009.

53. Y. Chi, A. Pezeshki, and A. R. Calderbank, “Range sidelobe suppression in a desired Doppler interval,” in Proc. IEEE Waveform Diversity and Design Conference, Orlando, FL, Feb. 8-13, 2009.

54. S. Jafarpour, A. Pezeshki, and R. Calderbank, “Experiments with compressively sampled images and a new deblurring-denoising algorithm,” in Proc. Tenth IEEE International Symposium on Multimedia (ISM), Berkeley, CA, Dec. 15-17, 2008.

55. A. Pezeshki, G. Kutyniok, and R. Calderbank, “Fusion frames and robust dimension reduction,” in Proc. 42nd Annual Conference on Information Sciences and Systems (CISS), Princeton University, Princeton, NJ, Mar. 19-21, 2008.

56. S. Suvorova, S. D. Howard, W. Moran, R. Calderbank, and A. Pezeshki, “Doppler resilience, Reed-Muller codes, and complementary waveforms,” in Conf. Rec. Forty-first Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 4-7, 2007.

57. L. L. Scharf and A. Pezeshki, “Eigenanalysis of subspace decompositions for robust adaptive beamforming,” in Conf. Rec. Forty-first Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 4-7, 2007.

58. A. Pezeshki, A. R. Calderbank, S. D. Howard, and W. Moran, “Doppler resilient Golay complementary pairs for radar,” in Proc. IEEE Workshop on Statistical Signal Processing, Madison, WI, Aug. 26-29, 2007.

59. L. L. Scharf and A. Pezeshki, “Eigenanalysis of subspace beamformers,” presented at Fifteenth Annu. Workshop Adaptive Sensor Array Process., Lexington, MA: Lincoln Lab, Mass. Inst. Tech., June 5-6, 2007.

60. I.-Y. Son, T. Varslot, C. E. Yarman, A. Pezeshki, B. Yazici, and M. Cheney, “Radar detection using sparsely distributed apertures in urban environment,” in Proc. SPIE Defense and Security Symposium, Orlando, FL, May 7, 2007.

61. L. L. Scharf, A. Pezeshki, B. D. Van Veen, H. Cox, and O. Besson, “Eigenvalue beamforming using a multi-rank MVDR beamformer,” in Proc. 5th Workshop on Defence Applications of Signal Processing, Queensland, Australia, Dec. 10-14, 2006.

62. A. R. Calderbank, S. D. Howard, W. Moran, A. Pezeshki, and M. Zoltowski, “Instantaneous radar polarimetry with multiple dually-polarized antennas,” in Conf. Rec. Fortieth Asilomar Conf. Signals, Syst., Comput. (Special Session on Active Sensing and Waveform Diversity), Pacific Grove, CA, Oct. 29-Nov. 1, 2006.

63. L. L. Scharf and A. Pezeshki, “Virtual array processing for active radar and sonar sensing,” in Conf. Rec. Fortieth Asilomar Conf. Signals, Syst., Comput. (Special Session on Active Sensing and Waveform Diversity), Pacific Grove, CA, Oct. 29-Nov. 1, 2006.

64. H. Cox, A. Pezeshki, L. L. Scharf, M. Lundberg, and H. Lai, “Multi-rank adaptive beamforming with linear and quadratic constraints,” in Conf. Rec. Thirty-ninth Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Oct. 30-Nov. 2, 2005.

65. A. Pezeshki, L. L. Scharf, M. Lundberg, and E. K. P. Chong, “Constrained quadratic minimizations for signal processing and communications,” in Proc. Forty-fourth IEEE Conf. Decision Contr., Seville, Spain, Dec. 12-15, 2005.

66. L. L. Scharf, A. Pezeshki, and M. Lundberg, “Multi-rank adaptive beamforming,” in Proc. IEEE Workshop Stat. Signal Processing, Bordeaux, France, July 17-20, 2005.

67. M. Yamada, A. Pezeshki, and M. R. Azimi-Sadjadi, “Relation between kernel CCA and kernel FDA,” in Proc. IEEE Int. Joint Conf. Neural Networks, Montreal, Canada, July 31-Aug. 4, 2005.

68. L. L. Scharf, A. Pezeshki, and M. Lundberg, “A generalized sidelobe canceller formulation for multi-rank Capon beamformer,” presented at Thirteenth Annu. Workshop Adaptive Sensor Array Process., Lexington, MA: Lincoln Lab, Mass. Inst. Technol., June 7-8, 2005.

69. M. R. Azimi-Sadjadi and A. Pezeshki, “Unattended sparse acoustic array configurations and beamforming algorithms,” in Proc. SPIE Defense and Security Symposium, Orlando, FL, March 28-April 1, 2005.

70. A. Pezeshki, L. L. Scharf, M. R. Azimi-Sadjadi, and M. Lundberg, “Empirical canonical correlation analysis in subspaces,” in Conf. Rec. Thirty-eighth Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 7-10, 2004.

71. M. Lundberg, L. L. Scharf, and A. Pezeshki, “Multi-rank Capon beamforming,” in Conf. Rec. Thirty-eighth Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 7-10, 2004.

72. A. Pezeshki, M. R. Azimi-Sadjadi, and L. L. Scharf, “Kernel-based canonical coordinate decomposition of two-channel nonlinear maps,” in Proc. IEEE Int. Joint Conf. Neural Networks, Budapest, Hungary, July 25-29, 2004.

73. A. Pezeshki, M. R. Azimi-Sadjadi, and L. L. Scharf, “Classification of underwater mine-like and non-mine-like objects using canonical correlations,” in Proc. SPIE Defense and Security Symposium, Orlando, FL, April 12-16, 2004.

74. A. Pezeshki, M. R. Azimi-Sadjadi, and R. Wade, “Coherence analysis using canonical coordinate decomposition with applications to sparse processing and optimal array deployment,” in Proc. SPIE Defense and Security Symposium, Orlando, FL, April 12-16, 2004.

75. M. R. Azimi-Sadjadi, L. L. Scharf, A. Pezeshki, and M. Hohil, “Wideband DOA estimation algorithms for multiple target detection and tracking using unattended acoustic sensors,” in Proc. SPIE Defense and Security Symposium, Orlando, FL, April 12-16, 2004.

76. A. Pezeshki, M. R. Azimi-Sadjadi, and L. L. Scharf, “A canonical coordinate decomposition network,” in Proc. IEEE Int. Joint Conf. Neural Networks, Portland, OR, July 20-24, 2003.

77. A. Pezeshki, M. R. Azimi-Sadjadi, L. L. Scharf, and M. Robinson, “Underwater target classification using canonical correlations,” in Proc. Oceans’03 MTS/IEEE, San Diego, CA, Sept. 22-26, 2003.

78. A. Pezeshki, M. R. Azimi-Sadjadi, L. L. Scharf, and M. Robinson, “A canonical correlation-based feature extraction method for underwater target classification,” in Proc. Oceans’02 MTS/IEEE, Biloxi, MS, Oct. 29-31, 2002.

Dissertation:
Ali Pezeshki, Two-Channel Signal Processing in Canonical Coordinates, Ph.D. Dissertation, Colorado State University, Ft. Collins, CO, Oct. 2004.

CONTRACTS AND GRANTS:

Total Funding: More than $2.2M received from NSF, ONR, AFOSR, and DARPA/DSO

1. Co-Principal Investigator, “Sonar Echoic and Information Flow Field Processing and Learning for Interactive Sensing and Inference,” Office of Naval Research (ONR) Award N00014-18-1-2805, PI: Mahmood Azimi, Co-PI: Louis Scharf, July 16, 2018 to July 15, 2020, Amount $260,000.


2. Principal Investigator, “Workshop: Geometry for Signal Processing and Machine Learning,” National Science Foundation (NSF) Award CCF-1654873, August 15, 2016 to July 31, 2018, Amount $79,824.

3. Principal Investigator, “CCF:Small: String Submodularity and Near-Optimal Adaptive Control and Sensing,” National Science Foundation (NSF) Award CCF-1422658, Co-PI: Edwin K. P. Chong, July 1, 2014 to Dec. 31, 2018, Amount $499,962.

4. Principal Investigator, “Forty-Six Years (and Counting) of Statistical Signal Processing,” National Science Foundation (NSF), Nov. 1, 2015 to June 30, 2016, Amount $6,800.

5. Principal Investigator, “Underwater Target Detection and Classification with In-Situ Learning,” Information Systems Technologies, Inc. (Co-PI: Mahmood Azimi), Subcontract for ONR-funded project (Algorithms for In-Situ Learning for Automatic Target Recognition (ATR) Sonar Imagery), Nov. 1, 2012 to Dec. 31, 2014, Amount $99,975.

6. Co-Principal Investigator, “CCF:Small:Collaborative Research: Compressed Sensing for High-Resolution Image Inversion,” National Science Foundation (NSF) Award CCF-1018472, PI: Louis L. Scharf, Sep. 1, 2010 to Aug. 31, 2014, Collaborative Institute: Princeton University (PI: Robert Calderbank), Total Amount $499,843, CSU Amount $333,176. CSU is the lead institution.

7. Principal Investigator, “CCF:Small:Collaborative Research: Signal Design for Low-Complexity Active Sensing,” National Science Foundation (NSF) Award CCF-0916314, July 1, 2009 to June 30, 2013, Collaborative Institutes: Princeton University (PI: Robert Calderbank) and Purdue University (PI: Michael Zoltowski), Total Amount $481,545, CSU Amount $161,545. CSU is the lead institution.

8. Co-Principal Investigator, “Mathematical Infrastructure for Knowledge Enhanced Compressive Measurement,” Johns Hopkins University Applied Physics Lab Contract N66001-11-C-4023 (PI: Edwin K. P. Chong, other Co-PIs: Louis L. Scharf and J. Rockey Luo), Subcontract for jointly funded DARPA/DSO project (Knowledge Enhanced Compressive Measurement, KECoM, BAA-10-38), January 2011 to January 2012, Amount $397,000.

9. Co-Principal Investigator, “Information Fusion and Control in Hierarchical Systems,” Air Force Office of Scientific Research (AFOSR) Contract FA9550-09-1-0518, PI: Edwin K. P. Chong, July 1, 2009 to November 30, 2011, Amount $397,877.

INVITED TALKS/SEMINARS:

Invited Talks: 31

1. “Submodular Optimizations, Greedy Policies, and Approximate Dynamic Programming,” Erik Jonsson School of Engineering and Computer Science, The University of Texas-Dallas, and the IEEE Dallas Chapter of the Signal Processing Society, Dallas, TX, Aug. 15, 2019.

2. “Submodular Optimizations, Greedy Policies, and Approximate Dynamic Programming,” Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, VIC, Australia, Jun. 21, 2019.


3. “Compressive Sampling and Sparse Recovery for High-Resolution Image Inversion: Cautionary Notes,” Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, VIC, Australia, Jun. 7, 2019.

4. “Compressive Sampling and Sparse Recovery for High-Resolution Image Inversion: Cautionary Notes,” Erik Jonsson School of Engineering and Computer Science, The University of Texas-Dallas, and the IEEE Dallas Chapter of the Signal Processing Society, Dallas, TX, Mar. 1, 2019.

5. “Interactive Sensing and Navigation for Underwater Target Detection and Classification,” ONR Unmanned Maritime Systems Technology (UMST) Program Review, Panama City, FL, Jan. 28-31, 2019.

6. “Compressed Sensing, Sparse Recovery, and High-Resolution Image Inversion: Cautionary Notes,” Department of Applied Mathematics, University of Colorado, Boulder, CO, Apr. 4, 2017.

7. “Distribution of Fisher Information after Random Compression,” NSF Workshop on Geometry for Signal Processing and Machine Learning, Estes Park, CO, Oct. 13, 2016.

8. “Compressed Sensing, Sparse Recovery, and High-Resolution Image Inversion: Cautionary Notes,” Draper Laboratory, Cambridge, MA, Nov. 2, 2015.

9. “Compressed Sensing, Sparse Recovery, and High-Resolution Image Inversion: Cautionary Notes,” School of Engineering and Applied Science, Harvard University, Cambridge, MA, Oct. 30, 2015.

10. “Compressed Sensing and High-Resolution Image Inversion: Cautionary Notes,” School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, AZ, Oct. 9, 2015.

11. “Compressed Sensing and High-Resolution Image Inversion: Cautionary Notes,” Department of Electrical and Computer Engineering, Worcester Polytechnic Institute, Worcester, MA, Aug. 28, 2015.

12. “Compressed Sampling and Fisher Information,” MATHEON Workshop on Compressive Sensing and Its Applications, Technische Universitaet Berlin, Berlin, Germany, Dec. 9-13, 2013 (Plenary talk).

13. “Coordinating Complementary Waveforms for Sidelobe Suppression,” SIAM Annual Meeting, Minisymposium on Radar Imaging, San Diego, CA, Jul. 8-12, 2013.

14. “Compressed Sensing and High-Resolution Image Inversion,” Department of Mathematics, Colorado State University, Fort Collins, CO, Nov. 15, 2012.

15. “Sense and Sensitivity: Compressed Sensing and High-Resolution Image Inversion,” IEEE Denver Signal Processing Society, University of Colorado-Boulder, CO, June 28, 2011.

16. “Compressed Sensing and Parameter Estimation,” Sparse and Low Rank Approximation Workshop, Banff International Research Station, Banff, AB, Canada, Mar. 6, 2011.


17. “Compressed Sensing for High-Resolution Image Inversion,” Dagstuhl Seminar: Sparse Representations and Efficient Sensing of Data, Schloss Dagstuhl–Leibniz Center for Informatics, Wadern, Germany, Jan. 31, 2011.

18. “Sense and Sensitivity: Model Mismatch in Compressed Sensing,” Winter Meeting of the Canadian Mathematical Society, Vancouver, BC, Dec. 4, 2010.

19. “Low-Complexity Active Sensing: Asking the Right Questions,” Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI, Feb. 16, 2009.

20. “Signal Design for Low-Complexity Active Sensing,” Spectral Analysis Laboratory, Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, Feb. 11, 2009.

21. “Waveform Diversity in Active Sensing,” Center for Advanced Communications, Villanova University, Villanova, PA, Dec. 3, 2008.

22. “Waveform Diversity in Active Sensing,” Department of Mechanical and Nuclear Engineering, Penn State University, State College, PA, Dec. 1, 2008.

23. “New Directions in Active Sensing,” Department of Electrical and Computer Engineering, Colorado State University, Fort Collins, CO, April 3, 2008.

24. “A Diversity/Multiplexing Trade-Off for Distributed Sensing,” Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, Feb. 15, 2008.

25. “A Diversity/Multiplexing Trade-Off for Distributed Sensing,” Department of Electrical Engineering, Stanford University, Palo Alto, CA, Nov. 1, 2007.

26. “A Diversity/Multiplexing Trade-Off for Distributed Sensing,” Department of Electrical and Computer Engineering, Colorado State University, Ft. Collins, CO, Oct. 24, 2007.

27. “Multi-Rank MVDR Beamforming,” Information Sciences and Systems (ISS) Seminar, Department of Electrical Engineering, Princeton University, Princeton, NJ, Feb. 1, 2007.

28. “Multi-Rank Capon Beamforming,” Spring 2006 Time-Frequency Brown Bag Seminar, The Program in Applied and Computational Mathematics and Department of Mathematics, Princeton University, Princeton, NJ, March 28, 2006.

29. “Multi-Rank Capon Beamforming,” Department of Electrical Engineering, University of California-Santa Cruz, Santa Cruz, CA, March 2, 2006.

30. “Multi-Rank Capon Beamforming,” Department of Electrical Engineering, Santa Clara University, Santa Clara, CA, March 1, 2006.

31. “Multi-Rank Beamforming Using the Generalized Sidelobe Canceller and Subspace Selection,” Department of Electrical Engineering, Rensselaer Polytechnic Institute, Troy, NY, Nov. 9, 2005.


TEACHING ACTIVITY:

Courses Taught, Colorado State University, Fort Collins, CO:

ECE311: Linear Systems Analysis I (junior level) Fall 2019
ECE312: Linear Systems Analysis II (junior level) Spring 2019
ECE652: Estimation and Filtering Theory (graduate level) Spring 2019
ECE311: Linear Systems Analysis I (junior level) Fall 2018
ECE/Math430: Fourier and Wavelet Analysis with Applications (senior level) Spring 2018
ECE312: Linear Systems Analysis II (junior level) Spring 2018
ECE311: Linear Systems Analysis I (junior level) Fall 2017
ECE652: Estimation and Filtering Theory (graduate level) Spring 2017
ECE/Math520: Optimization Theory (graduate level) Spring 2017
ECE311: Linear Systems Analysis I (junior level) Fall 2016
ECE752: Topics in Signal Processing (graduate level) Spring 2016
ECE/Math520: Optimization Theory (graduate level) Fall 2015
ECE311: Linear Systems Analysis I (junior level) Fall 2015
ECE652: Estimation and Filtering Theory (graduate level) Spring 2015
ECE102: Digital Logic (freshman level) Fall 2014
ECE311: Linear Systems Analysis I (junior level) Fall 2014
ECE681A1: Algebraic Coding Theory (graduate level) Spring 2014
ECE312: Linear Systems Analysis II (junior level) Spring 2014
ECE311: Linear Systems Analysis I (junior level) Fall 2013
ECE312: Linear Systems Analysis II (junior level) Spring 2013
ECE652: Estimation and Filtering Theory (graduate level) Spring 2013
ECE312: Linear Systems Analysis II (junior level) Spring 2012
ECE303: Introduction to Communication Principles (junior level) Fall 2011
ECE681A1: Algebraic Coding Theory (graduate level) Spring 2011
ECE421: Telecommunications I (senior level) Fall 2010
ECE303: Introduction to Communication Principles (junior level) Fall 2009
ECE514: Applications of Random Processes (graduate level) Fall 2009
ECE752: Wavelets and Filter Banks (graduate level) Spring 2009
ECE421: Telecommunications I (senior level) Fall 2008
ECE421: Telecommunications I (senior level) Fall 2005

Guest Lectures, Colorado State University, Ft. Collins, CO:

MATH261: Calculus II (Partial fraction recitation for ECE students) Fall 2012
ECE652: Estimation and Filtering Theory (graduate level) Spring 2003 & Spring 2004
ECE513: Digital Image Processing (graduate level) Spring 2003 & Spring 2004
ECE656: Neural Networks and Adaptive Filters (graduate level) Fall 2003 & Fall 2004


Guest Lectures, Princeton University, Princeton, NJ:

Coding Theory (graduate level) Spring 2008
Developed and taught a course module (4 lectures) on sequence design for active sensing.

STUDENT ADVISING AND TRAINING:

Graduate Students:

1. Yifan Yang, PhD student, ECE (jointly with Mahmood Azimi) FA19-present

2. Yuan (Maxine) Xu, MSc student, ECE (jointly with Randy Bartels) SP19-present

3. Saied Ahmadinia, PhD student, ECE (jointly with Mahmood Azimi) FA18-present

4. Robby Stokoe, MSc student, ECE (jointly with Randy Bartels) FA17-present

5. Yajing Liu, PhD student, ECE (jointly with Edwin Chong) FA12-SUM18

6. Pooria Pakrooh, PhD student, ECE (jointly with Louis Scharf) FA11-SUM15

7. Somayeh Hosseini, MSc student, ECE (jointly with Mahmood Azimi) SP11-SP15

8. Vidarshana Bandara, PhD student, ECE (jointly with Anura Jayasumana) FA08-SP15

9. Wenbing Dang, PhD student, ECE (jointly with Mahmood Azimi) FA09-SP14

10. Zhenliang Zhang, PhD student, ECE (jointly with Edwin Chong) FA09-FA13

11. Ramin Zahedi, PhD student, ECE (jointly with Edwin Chong) FA08-FA13

Undergraduate Students:

1. Matthew Channell, Kojo Otoo, and Dustin Rerko FA19
Senior Design Project: Woodward Smart Microgrid

2. Colton Martin SP19
Independent Research: Analysis of Conventional and Sparse Inversion Methods for Radar using KASPER Data

3. James Beitner FA17
Independent Research: Semi-Automatic Precipitation Gauge

4. Jonathan Hornbaker, David Jump, and Mitch Roberts FA17
Senior Design Project: Semi-Automatic Precipitation Gauge


5. Austin Worden, Chad Wachsmann, and Josh Olsen FA15-SP16
Senior Design Project: Mosquito Chasers

6. Cameron Bloom FA13-SP14
Senior Design Project: Medical Center of the Rockies LifeBoard

7. Justin Fritzler SP13-FA13
Senior Design Project: Electronic ID system for inventory management

8. Melissa Vetterling and Luke Engelbert-Fenton FA12-SP13
Senior Design Project: Electronic ID system for inventory management

9. Dave Anderson and Sean Byers FA11-SP12
Senior Design Project: TUBE: Transistor Utility for Blonde Emulation
Jointly with Mahmood Azimi-Sadjadi

10. Mujtaba AlHashim, Moayed AlMaily, Saeid Bahmanpour, and Lee Chuah FA10-SP11
Senior Design Project: Compressive Sampling and A/D Conversion
Jointly with Rockey Luo

11. Rashed Al-Mohannadi, Jassim Makki, and Naif Al-Hujilan FA09-SP10
Senior Design Project: VHF Meteor Scatter Data Link
Jointly with Rockey Luo and John Steininger

Visiting Students:

1. Yuanxin Li, PhD student (advisor: Yuejie Chi), The Ohio State University SUM16

PROFESSIONAL ACTIVITIES AND SOCIETIES:

Editorial Boards:

IEEE Access Sep. 2012-Sep. 2018

Conference Committees and Chair Positions:

Technical Program Committee Member, Thirty-fourth AAAI Conference on Artificial Intelligence, New York, NY, Feb. 7-12, 2020.

Technical Program Committee Member, Thirty-sixth International Conference on Machine Learning (ICML), Long Beach, CA, Jun. 10-15, 2019.

Technical Program Committee Member, Thirty-fifth International Conference on Machine Learning (ICML), Stockholm, Sweden, Jul. 10-15, 2018.

Co-Chair and Co-Organizer, NSF Workshop on Geometry for Signal Processing and Machine Learning, Estes Park, CO, Oct. 12-15, 2016.


Organizer, Special Session on “Geometry of Invariants and Information Limits for Radar,” IEEE Information Theory Workshop, Cambridge, UK, Sept. 11-14, 2016.

Co-Chair and Co-Organizer, Forty-Six Years (and Counting) of Statistical Signal Processing: A workshop in recognition of career contributions of Louis Scharf, Asilomar Conference Grounds, Pacific Grove, CA, Nov. 10, 2015 (supported by NSF).

Session Chair, “Co-prime Arrays,” 49th Annual Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 8-11, 2015.

Technical Program Committee Member, 40th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brisbane, Australia, Apr. 19-24, 2015.

Technical Program Committee Member, 2nd IEEE Global Conference on Signal and Information Processing (GlobalSIP), Atlanta, GA, Dec. 3-5, 2014.

Session Chair, “Sparse Learning and Estimation,” 48th Annual Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 2-5, 2014.

Technical Program Committee Member, 1st IEEE Global Conference on Signal and Information Processing (GlobalSIP), Austin, TX, Dec. 3-5, 2013.

Technical Program Committee Member, Signal Processing with Adaptive Sparse Structured Representations (SPARS), EPFL, Lausanne, Switzerland, July 8-11, 2013.

Technical Program Committee Member, 9th IEEE Workshop on Perception Beyond the Visible Spectrum (PBVS), Portland, OR, June 2013.

Technical Program Committee Member, IEEE Statistical Signal Processing Workshop (SSP), Ann Arbor, MI, Aug. 2012.

Session Chair, “Computer Systems and Networks,” IEEE Statistical Signal Processing Workshop (SSP), Ann Arbor, MI, Aug. 2012.

Technical Program Committee Member, IEEE International Workshop on Object Tracking and Classification Beyond the Visible Spectrum (OTCBVS), Colorado Springs, CO, June 2011.

Session Chair, “Compressive Sensing,” 44th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2010.

Technical Program Committee Member, IEEE International Workshop on Object Tracking and Classification Beyond the Visible Spectrum (OTCBVS), San Francisco, CA, June 2010.

Co-Organizer, Special Session on “Compressed Sensing, Sparse Approximations, and Frame Theory I-IV,” 44th Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, Mar. 2010.

Technical Program Committee Member, IEEE International Workshop on Object Tracking and Classification Beyond the Visible Spectrum (OTCBVS), Miami, FL, June 2009.


Technical Program Committee Member, IEEE International Workshop on Object Tracking and Classification Beyond the Visible Spectrum (OTCBVS), Anchorage, AK, June 2008.

Co-Organizer, Special Session on “Sparse Representations and Frames I-III,” 42nd Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, Mar. 2008.

Technical Program Committee Member, IEEE International Workshop on Object Tracking and Classification Beyond the Visible Spectrum (OTCBVS), Minneapolis, MN, June 2007.

Technical Program Committee Member, IEEE International Workshop on Object Tracking and Classification Beyond the Visible Spectrum (OTCBVS), New York, NY, June 2006.

Session Chair, “Interference Cancelation and Mitigation,” 40th Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, Mar. 2006.

Reviewer/Referee:

Journals:

IEEE Transactions on Signal Processing

IEEE Transactions on Information Theory

IEEE Transactions on Aerospace and Electronic Systems

IEEE Transactions on Wireless Communications

IEEE Transactions on Communications

IEEE Transactions on Neural Networks

IEEE Signal Processing Letters

Applied and Computational Harmonic Analysis

Digital Signal Processing

Signal Processing

Physical Communication

International Journal of Computer Vision

Computer Vision and Image Understanding

Computer Networks


Conferences:

IEEE International Symposium on Information Theory (ISIT), Paris, France, Jul. 7-12, 2019.

International Conference on Machine Learning (ICML), Long Beach, CA, Jun. 10-15, 2019.

Thirty-second Conference on Neural Information Processing Systems (NIPS), Montreal, Canada, Dec. 3-8, 2018.

International Conference on Machine Learning (ICML), Stockholm, Sweden, Jul. 10-15, 2018.

IEEE Statistical Signal Processing Workshop (SSP), Freiburg, Germany, Jun. 10-13, 2018.

IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, Jun. 25-30, 2017.

Signal Processing with Adaptive Sparse Structured Representations (SPARS), Lisbon, Portugal, Jun. 5-8, 2017.

IEEE Information Theory Workshop (ITW), Cambridge, UK, Sept. 11-14, 2016.

IEEE International Symposium on Information Theory (ISIT), Hong Kong, Jun. 14-19, 2015.

2nd IEEE Global Conference on Signal and Information Processing (GlobalSIP), Atlanta, GA, Dec. 3-5, 2014.

1st IEEE Global Conference on Signal and Information Processing (GlobalSIP), Austin, TX, Dec. 3-5, 2013.

Signal Processing with Adaptive Sparse Structured Representations (SPARS), EPFL, Lausanne, Switzerland, July 8-11, 2013.

9th IEEE Workshop on Perception Beyond the Visible Spectrum (PBVS), Portland, OR, June 2013.

American Control Conference, Washington, DC, Jun. 17-19, 2012.

IEEE Statistical Signal Processing Workshop, Ann Arbor, MI, Aug. 5-8, 2012.

IEEE International Symposium on Information Theory (ISIT), Seoul, Korea, June 28-July 3, 2009.

IEEE International Joint Conference on Neural Networks (IJCNN), Atlanta, GA, July 2009.

IEEE International Workshop on Object Tracking and Classification Beyond the Visible Spectrum (OTCBVS), Miami, FL, June 2009.

Sixth International Conference on Wireless On-demand Network Systems and Services (WONS), Snowbird, UT, February 2-4, 2009.

IEEE International Workshop on Object Tracking and Classification Beyond the Visible Spectrum (OTCBVS), Anchorage, AK, June 2008.


42nd Annual Conference on Information Sciences and Systems (CISS), Princeton University, Princeton, NJ, March 2008.

IEEE World Congress on Computational Intelligence (WCCI), Hong Kong, June 2008.

IEEE International Joint Conference on Neural Networks (IJCNN), Orlando, FL, USA, August 2007.

IEEE International Conference on Advanced Video and Signal-based Surveillance (AVSS), Sydney, New South Wales, Australia, November 2006.

40th Annual Conference on Information Sciences and Systems (CISS), Princeton University, Princeton, NJ, March 2006.

Sixth IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), New York City, NY, June 2005.

University Committees:

ISTeC Education Advisory Committee FA13-present

ECE Representative to Faculty Council FA18

SBME Search Committee FA14-SP15

CoE Representative to the Faculty Council Committee on Faculty Governance SP14

ECE Department Advisory Committee FA10-FA13

College of Engineering Dean’s Think Tank Committee SP10-SP12

Professional Societies:

Member, IEEE

Member, AMS


Ali Pezeshki: Research Highlights

My research group focuses on the analysis and optimization of statistical inference problems that arise in machine learning, sensing, and imaging. We are interested in understanding fundamental limits of performance in parametric inference, and in developing optimal measurement design and inference/learning algorithms. Our emphasis is on developing general theories that are applicable to a wide range of applications. To date, our research program has attracted more than $2.2M in funding from various federal agencies. In the following sections, we highlight a subset of our results from select projects. The unifying theme is the pursuit of our two key interests: understanding fundamental limits and developing optimal measurement design and learning principles for inference.

1 Submodularity and Approximate Dynamic Programming

Funding Source: NSF

Collaborators: Yajing Liu (CSU; now with NREL), Zhenliang Zhang (CSU; now with Alibaba Group), Edwin Chong (CSU), and William Moran (The University of Melbourne, Australia)

Selected Papers: [1]–[7]

Contributions: In sequential decision making, adaptive sensing, and optimal control, we are frequently faced with optimally choosing a string (sequence) of actions over a finite horizon to maximize an objective function. In stochastic settings, these problems are often formulated as stochastic optimal control problems in the form of Markov decision processes (MDPs) or partially observable Markov decision processes (POMDPs). However, computing the optimal strategy (optimal sequence of actions) is often difficult.

A general approach is to use dynamic programming via Bellman's principle of optimality. Bellman's principle tells us that the objective function to be optimized at each decision epoch must capture both the immediate reward and the (expected) long-term net reward associated with each candidate action. This embodies a rigorous notion of delayed gratification, common to all nontrivial optimal dynamic decision-making policies. However, the computational complexity of this approach grows exponentially with the size of the action space and the decision horizon. Because of this inherent complexity, there has long been interest in developing approximation methods for solving dynamic programming problems, leading to a wide range of approximate dynamic programming (ADP) schemes. These techniques all aim to replace the expected-value-to-go (EVTG) term in Bellman's principle, whose computation is intractable, with computationally tractable approximations. However, in general, it is difficult to tell, without extensive simulation and testing, whether a given ADP scheme has good performance, and even then it is hard to say how far from optimal it is.
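In standard MDP notation (ours, not taken from the cited papers), with immediate reward r(s, a) and optimal value function V*, Bellman's principle reads:

```latex
% Bellman optimality equation: the optimal value of state s combines the
% immediate reward r(s,a) with the expected value-to-go E[V*(s') | s,a].
% ADP schemes replace the intractable expectation term with a
% computationally tractable approximation.
V^{*}(s) \;=\; \max_{a \in \mathcal{A}} \Big\{ r(s,a) \;+\; \mathbb{E}\big[\, V^{*}(s') \,\big|\, s, a \,\big] \Big\}
```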

Many approximation schemes are variations of greedy strategies, which are suboptimal but easy to compute because they only involve finding an action at each stage that maximizes the step-wise gain in the objective function. But how does the greedy strategy compare to the optimal strategy in terms of the objective value? We have answered this question by extending the notions of submodularity and curvature for set functions to string functions.

Submodularity of functions over finite sets plays an important role in discrete optimization. It has been shown that, under submodularity, the greedy strategy provides at least a constant-factor approximation to the optimal strategy. For example, the celebrated result of Nemhauser et al. (1978) states that for maximizing a monotone submodular function over a uniform matroid, the value of the greedy strategy is no less than a factor (1 − 1/e) of that of the optimal strategy. This is a powerful result. But a drawback is that the submodular functions studied in most of the existing work are defined on the power set of a given finite set. In contrast, we are interested in problems where we have to choose a string (ordered set) of actions sequentially. This is because in most adaptive sensing and optimal control problems the objective function depends on the trajectory taken in the state-action space. Hence, we cannot apply the result of Nemhauser et al. or its related results on submodularity over finite sets.
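As a concrete illustration of the set-function setting, the following toy sketch (ours, not code from the cited papers) greedily maximizes a monotone submodular coverage function under a cardinality constraint and checks the Nemhauser et al. (1 − 1/e) guarantee against the brute-force optimum:

```python
import math
from itertools import combinations

def coverage(sets, chosen):
    """f(S) = number of ground elements covered; monotone and submodular."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def greedy(sets, k):
    """At each step, add the set with the largest marginal gain in coverage."""
    chosen = []
    for _ in range(k):
        best = max((i for i in range(len(sets)) if i not in chosen),
                   key=lambda i: coverage(sets, chosen + [i]))
        chosen.append(best)
    return chosen

def brute_force(sets, k):
    """Exhaustive optimum, feasible only for tiny instances."""
    return max(combinations(range(len(sets)), k),
               key=lambda c: coverage(sets, list(c)))

# Toy instance (hypothetical data, for illustration only).
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
k = 2
g_val = coverage(sets, greedy(sets, k))
opt_val = coverage(sets, list(brute_force(sets, k)))
assert g_val >= (1 - 1 / math.e) * opt_val  # Nemhauser et al. bound holds
```

On this instance greedy happens to find the optimum; the bound guarantees that even in the worst case it cannot fall below (1 − 1/e) of optimal.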

To address this issue, we have introduced (see, e.g., [1] and [3]) the notion of string-submodularity, which extends the notion of set submodularity. We have shown that, under string-submodularity, any greedy strategy is suboptimal by a factor of at worst (1 − 1/e), entirely consistent with the result of Nemhauser et al. Our framework also introduces several notions of curvature for string-submodular functions, which roughly correspond to the quantitative “degree” of submodularity. Using these, we have derived suboptimality bounds for greedy strategies that are strictly better than (1 − 1/e); the smaller the curvature, the better the bound. We have also studied two canonical applications, task assignment [1] and adaptive measurement selection [1], [2], where submodularity can be used to bound the performance of greedy strategies.
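For set functions, the curvature-dependent bound takes the following standard form (our notation, not reproduced from [1] or [3]; the string-submodular analogues are similar in spirit): for a monotone submodular f with total curvature c in (0, 1],

```latex
% Curvature-dependent greedy guarantee: smaller curvature c gives a
% strictly better factor, and the bound recovers 1 - 1/e as c -> 1.
\frac{f(S_{\mathrm{greedy}})}{f(S_{\mathrm{opt}})}
  \;\ge\; \frac{1}{c}\left(1 - e^{-c}\right)
  \;>\; 1 - \frac{1}{e} \quad \text{for } c < 1 .
```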

More recently, we have extended our results in this context to develop a framework for bounding the performance of general (not just greedy) ADP schemes. Examples of such ADP schemes include Monte Carlo approximation, rollout policies, reinforcement learning with neural networks, hindsight optimization, and foresight optimization. To develop our bounding framework, we first investigated a broad family of control strategies called path-dependent action optimization (PDAO), where every control decision is treated as the solution to an optimization problem with a path-dependent objective function. Here a path means a sequence of states, or a state trajectory. We developed a framework for bounding the performance of PDAO schemes, based on the theory of submodular functions and their associated greedy solutions. Again, by a bound we mean a guarantee of the form that the performance of a given PDAO scheme relative to the optimal solution is at least some known factor. We then established that any ADP scheme is a PDAO scheme for some surrogate objective function that coincides in its optimal value with that of the original optimal control problem. This surrogate objective, of course, depends on the specific approximation to the EVTG used in the ADP scheme. If the surrogate objective is submodular and/or has small curvature, then the ADP scheme inherits the PDAO bounds mentioned above. This bounding framework, in principle, can be used as a way to check that an ADP scheme is “good”: an ADP scheme is good if the surrogate objective function of the corresponding PDAO scheme is submodular or has a small curvature, and hence is guaranteed to be at least a known factor of optimal. A key point to emphasize here is that we do not require the objective function of the original optimal control problem to be submodular. This idea has been reported in [3].

During the course of our research on string submodularity and bounding ADP schemes, we have also made a number of important contributions to set submodularity theory (see, e.g., [4]–[7]). One example is the extension of the Nemhauser et al. results from single-step greedy strategies to batch-greedy strategies [4]. Starting with the empty set, a k-batch greedy strategy iteratively adds to the current solution set a batch of k ≥ 1 elements (actions) that results in the largest gain in the objective function. This extension is by no means trivial and required a different proof technique than the one introduced by Nemhauser et al., because of the matroid constraints involved in the problem. Another example is the extension of submodularity results to bounding Nash equilibria in game-theoretic utility maximization problems, where a set of users wish to make decisions according to their own sets of feasible strategies, resulting in an overall social utility value, such as profit, coverage, achieved data rate, or quality of service [5].

2 Compressive Sampling, Sparse Recovery, and Image Inversion

Funding Source: NSF and DARPA/DSO

Collaborators: Pooria Pakrooh (CSU; now with Qualcomm), Yuejie Chi (Carnegie Mellon University), Louis Scharf (CSU), Robert Calderbank (Duke University), Edwin Chong (CSU), Doug Cochran (Arizona State University), and Stephen Howard (DSTO, Australia)

Selected Papers: [8]–[13]


Contributions: In a great number of fields of science and engineering, the problem confronting the designer is to invert an image, acquired from a sensor suite, for the underlying field that produced the image. Typically, the desired resolution for the underlying field exceeds the temporal or spatial resolution of the image itself. Here we give image its most general meaning, encompassing a time series, a space series, a space-time series, and a 2-D image. Similarly, we give field its most general meaning, encompassing complex-exponential modes, radiating modes, coded modulations, multipath components, and the like. Broadly speaking, there are two classical principles for inverting the kinds of images that are measured in optics, electromagnetics, and acoustics. The first principle is matched filtering, wherein a sequence of rank-one (or multi-rank) subspaces, or test images, is matched to the measured image by filtering, correlating, or phasing. The second principle is parameter estimation in a separable linear model, wherein a sparse modal representation for the field is posited and estimates of linear parameters (complex amplitudes of modes) and nonlinear mode parameters (frequency, wavenumber, delay, and/or Doppler) are extracted, usually based on maximum likelihood or some variation on linear prediction. One important limitation of the classical principles is that any subsampling of the measured image has consequences for resolution (or bias) and for variability (or variance).

The relatively recent advent of compressed sensing and sparse recovery theory has revolutionized our view of imaging, as it demonstrates that subsampling has manageable consequences for image inversion, provided that the image is sparse in an a priori known basis. For imaging problems in spectrum analysis (estimating complex exponential modes) and in passive and active radar/sonar (estimating Doppler and angle of arrival), this basis is usually taken to be a Fourier basis (actually a DFT basis) constructed for resolution of 2π/N, with N a window length, array length, or pulse-to-pulse processing length. However, in reality no physical field is sparse in the DFT basis or in any a priori known basis. No matter how finely we grid the parameter space, the sources may not lie at the centers of the grid cells, and consequently there is always mismatch between the assumed and the actual bases (or frames) for sparsity. But what is the sensitivity of sparse recovery to mismatch between the physical model that generated the data and the mathematical model that is assumed in the inversion algorithm?
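The off-grid effect described above is easy to reproduce numerically. The sketch below (an illustration only, not the analysis of [8], [9]) compares how much of a single complex exponential’s energy the largest DFT coefficient captures when the mode frequency sits on a grid point versus halfway between grid points.

```python
import cmath

def peak_energy_fraction(freq_bins, N=64):
    """Fraction of total energy captured by the largest-magnitude DFT coefficient."""
    # One complex exponential mode at (possibly fractional) frequency freq_bins.
    x = [cmath.exp(2j * cmath.pi * freq_bins * n / N) for n in range(N)]
    # Direct DFT; by Parseval, sum of |X[k]|^2 equals N times the signal energy.
    X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
         for k in range(N)]
    energies = [abs(Xk) ** 2 for Xk in X]
    return max(energies) / sum(energies)

print(round(peak_energy_fraction(10.0), 3))  # on-grid mode: 1.0
print(round(peak_energy_fraction(10.5), 3))  # half-bin offset: about 0.405
```

A half-bin offset leaks well over half of the mode’s energy out of the peak bin, so the mode is no longer 1-sparse in the DFT basis.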

In [8], [9], we posed the above question for the first time and derived theorems that bound the performance of sparse recovery for image inversion under the inevitable conditions of model mismatch. Our mathematical analysis and numerical experiments indicate that the performance of sparse recovery for approximating a sparse physical field degrades considerably in the presence of basis mismatch, and they suggest that extra care is needed in applying the theory for high-resolution image inversion. To date, our work in this context has been cited more than 600 times and has resulted in a large number of follow-up articles by others on various aspects of “off-grid” sparse recovery.

But can the model mismatch sensitivities of sparse recovery be mitigated by over-resolving the mathematical model to ensure that mathematical modes are close to physical modes? This was a natural question that we subsequently studied in [10]. One might expect that over-resolution of a mathematical basis would produce a performance that only bottoms out at the quantization variance of the presumed frame for sparsity. But in fact, aggressive over-resolution in a frame can actually produce worse performance at high SNR than a frame with lower resolution. The consequence of over-resolution is that performance follows the Cramer-Rao bound more closely at low SNR, but at high SNR it departs more dramatically from the Cramer-Rao bound. This result matches intuition gained from more conventional spectrum analysis, where there is a qualitatively similar trade-off between bias and variance. That is, bias may be reduced with frame expansion, but there is a penalty to be paid in variance.

In a more recent paper [11], we have analyzed the impact of random compression on the Fisher information, the Cramer-Rao bound (CRB), and the Kullback-Leibler (KL) divergence for estimating p unknown parameters in the mean value function of a complex multivariate normal distribution. We have considered the class of m-by-n (m < n) complex random matrices whose distribution is right-unitarily invariant. The compression matrix whose elements are i.i.d. standard normal random variables is one such matrix. We have shown that for all such compression matrices, the normalized Fisher information matrix has a complex matrix Beta distribution, CBp(m, n − m). We have also derived the distributions of the CRB and the KL divergence. These analytical distributions can be used to quantify the amount of performance loss due to compression. They can also be used as guidelines for choosing a suitable compression ratio based on a tolerable loss in the CRB. Importantly, the distribution of the ratios of CRBs before and after compression depends only on the number p of parameters, the number m of measurements, and the ambient dimension n. The distribution is invariant to the underlying signal-to-noise model, in the sense that it is invariant to the underlying (before compression) Fisher information matrix.

Subsequently, in [12] and [13], we investigated threshold effects associated with the swapping of signal and noise subspaces in estimating signal parameters from compressed noisy data. The term threshold effect refers to a sharp departure of mean-squared error from the CRB when the signal-to-noise ratio falls below a threshold SNR. In many cases, the threshold effect is caused by a subspace swap event, in which the measured data (or its sample covariance) is better approximated by a subset of components of an orthogonal subspace than by the components of the signal subspace. We have derived analytical lower bounds on the probability of a subspace swap in compressively measured noisy data in two canonical models: a first-order model and a second-order model. In the first-order model, the parameters to be estimated modulate the mean of a complex multivariate normal set of measurements. In the second-order model, the parameters modulate the covariance of complex multivariate measurements. In both cases, the probability bounds are tail probabilities of F-distributions, and they apply to any linear compression scheme. The ambient dimension n, the compressed dimension m, the number p of parameters, and the number M of snapshots determine the degrees of freedom of the F-distributions. The choice of the compression matrix affects the non-centrality parameter of the F-distribution in the parameterized mean case, and the left tail probability of a central F (or a generalized F) distribution in the parameterized covariance case. The derived bounds are not asymptotic and are valid in finite snapshot regimes. They can be used to quantify the increase in threshold SNR as a function of the compression ratio C = n/m. We have demonstrated numerically that this increase in threshold SNR is roughly 10 log10 C dB, which is consistent with the performance loss that one would expect when measurements in Gaussian noise are compressed by a factor C. As a case study, we have investigated threshold effects in maximum likelihood (ML) estimation of the directions of arrival of two closely spaced sources using co-prime subsampling and uniformly-at-random subsampling. Our MSE plots validate the increases in threshold SNRs.
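The 10 log10 C rule of thumb above amounts to a one-line computation; the helper name below is ours, purely illustrative.

```python
import math

def threshold_snr_increase_db(n, m):
    """Rough increase in threshold SNR for compression ratio C = n/m."""
    return 10 * math.log10(n / m)

# Compressing an ambient dimension of n = 256 down to m = 64 measurements (C = 4):
print(round(threshold_snr_increase_db(256, 64), 2))  # -> 6.02 dB
```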

3 Decision and Learning in Large Networks

Funding Source: AFOSR

Collaborators: Zhenliang Zhang (CSU; now with Alibaba Group), Edwin Chong (CSU), and William Moran (The University of Melbourne, Australia)

Selected Papers: [14]–[18]

Contributions: Our research here involves the analysis of performance, stability, robustness, and optimization of learning in complex networks, where a large set of interconnected agents (or nodes) wish to collectively learn the true state of the world. Our focus has been on two particular and yet general network topologies: the feedforward structure [14] and the hierarchical tree structure [15]–[18]. In the feedforward structure, each of a set of nodes in the network makes a decision in sequence, based on its own private measurement and the decisions of some or all of its predecessor nodes using a likelihood ratio test, and then passes its decision forward to its successors. In the hierarchical tree structure, each node makes a decision based on its own private measurement and the decisions of its descendant nodes in the tree using a local likelihood ratio test, and passes its decision up the tree to its parent node. The final decision is made at the root of the tree. These are two prominent network structures considered in social learning. The feedforward structure is popular for modeling the interactions of investors in asset markets and sequential polling. The hierarchical tree structure is common in enterprises, military hierarchies, political structures, online social networks, and engineering systems, where organizational layering either naturally arises or is enforced to reduce management complexity. For the hierarchical case, we have focused on unbounded-height trees, which are better suited for the analysis of social learning, where it is reasonable to assume that each nonleaf node in the tree has a finite number of child nodes and thus the tree height grows unboundedly as the number of nodes goes to infinity. Our recent papers [14]–[18], collectively, study the following general but fundamental questions in the hierarchical and feedforward structures:

Asymptotic learning: What are necessary and sufficient conditions under which the network asymptotically learns the true state of the world? More specifically, what are the necessary and sufficient conditions under which a decision strategy, composed of a sequence of likelihood ratio tests, exists for which the error probabilities in estimating the true state go to zero as the number of nodes goes to infinity? Under what conditions does herding to the wrong decision occur? How fast do the error probabilities converge to zero as functions of the network size and communication failure rates?

Non-asymptotic learning: What are tight lower and upper bounds on the error probability as explicit functions of the number of nodes for different decision strategies? How do these non-asymptotic bounds change for different types of random errors in the network?

To address these questions, we have utilized and extended a diverse set of results from statistics and optimization theory, including results from the stability of dynamical systems, the convergence theory of infinite sequences, martingale processes, and submodularity theory. Our analysis of hierarchical tree networks [15]–[17] involves studying the evolution of the pair of Type I (false alarm) and Type II (missed detection) error probabilities as a dynamical system. This approach is quite unique and powerful, and it has enabled us to derive tight non-asymptotic bounds (which are quite scarce in the literature) on the detection error probability. These non-asymptotic bounds then guided us in establishing tight results on the rate of learning in such networks. In [18], we investigated the problem of finding a near-optimal sequence of fusion rules for each level in the tree network to maximize the reduction in the total error probability between the leaf nodes and the fusion center. We formulated this problem as a deterministic dynamic program and expressed the optimal strategy in terms of Bellman’s equation. Moreover, we showed that the reduction in the total error probability is a string-submodular function (described in Section 1). In the feedforward structure [14], we utilized results from martingale theory to establish tight bounds on the rate of learning and to characterize the conditions under which herding or an information cascade occurs.
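The dynamical-system view of the error pair can be illustrated with a toy recursion. The sketch below replaces the local likelihood ratio tests of [15]–[17] with fixed AND/OR fusion rules alternating by level, purely to show how a (Type I, Type II) error pair evolves up a balanced binary relay tree; the fusion rules and starting error values are illustrative assumptions, not the strategies analyzed in the papers.

```python
def fuse_and(alpha, beta):
    """Parent declares H1 only if both children do: false alarms multiply."""
    return alpha ** 2, 1 - (1 - beta) ** 2

def fuse_or(alpha, beta):
    """Parent declares H1 if either child does: misses multiply."""
    return 1 - (1 - alpha) ** 2, beta ** 2

def error_evolution(alpha, beta, levels):
    """Evolve the (Type I, Type II) error pair up the tree, alternating rules."""
    history = [(alpha, beta)]
    for level in range(levels):
        rule = fuse_and if level % 2 == 0 else fuse_or
        alpha, beta = rule(alpha, beta)
        history.append((alpha, beta))
    return history

trajectory = error_evolution(alpha=0.3, beta=0.3, levels=20)
final_alpha, final_beta = trajectory[-1]
print(final_alpha + final_beta < 1e-3)  # -> True: both errors shrink toward zero
```

Each rule alone drives one error type up while pushing the other down; it is the alternation, viewed as a two-dimensional dynamical system, that sends both errors to zero as the tree height grows.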

4 Waveform Design for Active Sensing

Funding Source: NSF

Collaborators: Wenbing Dang (CSU; now with Argo AI), Stephen Howard (DSTO, Australia), William Moran (The University of Melbourne, Australia), and Robert Calderbank (Duke University)

Selected Papers: [19]–[22]

Contribution: Recent and anticipated advances in active sensing technology promise increased capabilities for radar transmit and receive signal processing. Modern radars are increasingly being equipped with arbitrary waveform generators, which enable the generation of different wavefields across aperture, time, frequency, and polarization on a pulse-by-pulse basis. The emergence of waveform-diverse radars (also called MIMO radars) brings the promise of improved detectability, imaging resolution, and tracking performance.

In [19]–[21] and more recently [22], we have shown that waveform diversity can be used with great success to address a long-standing problem in radar pulse compression: the sensitivity of phase-coded waveforms to the Doppler effect. In radar, localization in time, or ranging, is typically performed by matched filtering the received signal with the transmitted waveform. Therefore, waveforms with impulse-like autocorrelation functions are of great value. Pulse compression is a common technique for generating long waveforms with impulse-like autocorrelation functions. In this technique, the waveform is typically constructed by phase coding a narrow pulse with a long unimodular sequence. The width of the narrow pulse (chip interval) determines the mainlobe range resolution, and the choice of the unimodular sequence determines the sidelobes of the autocorrelation function of the waveform. Many phase codes with perfect or near-perfect autocorrelation functions have been proposed over the years. However, the near-perfect autocorrelation property of all such codes is extremely sensitive to Doppler shift. That is, off the zero-Doppler axis, the ambiguity functions of such waveforms have large sidelobes in range. This problem persists even when a long pulse train of such waveforms is constructed. In consequence, a weak target located in range near a strong reflector with a different Doppler frequency may be completely masked by the range sidelobes of the radar ambiguity function centered at the delay-Doppler position of the stronger reflector.

We have shown that by properly sequencing a pair of Golay complementary waveforms across time, we can construct transmit pulse trains and receive filters for which the radar point-spread function (cross-ambiguity function) is essentially free of range sidelobes inside an interval around the zero-Doppler axis. Our approach is simple. We construct the transmit pulse train by coordinating the transmission of Golay complementary waveforms according to the zeros and ones in a binary sequence P. We refer to this pulse train as the P-pulse train. The pulse train used in the receive filter is constructed in a similar way, in terms of sequencing the Golay waveforms, but each waveform in the pulse train is weighted according to an element of a sequence Q. We call this pulse train the Q-pulse train. The cross-ambiguity function of the P- and Q-pulse trains gives the radar point-spread function. The size of the range sidelobes of this cross-ambiguity function is controlled by the spectrum of the product of the P and Q sequences. By selecting sequences for which the spectrum of their product has a high-order null around zero Doppler, we can annihilate the range sidelobes of the cross-ambiguity function inside an interval around the zero-Doppler axis and solve the Doppler sensitivity problem. The joint design of the transmit-receive sequences (P, Q) enables a trade-off between the order of this spectral null for annihilating range sidelobes (and therefore clutter power) and the output SNR for detection in white noise. We have extended this design principle to multi-channel active sensing as well, to design sequences of paraunitary waveform matrices that maintain their paraunitary property in the presence of the Doppler effect. These designs can also be used to maintain a desired spatial correlation (not necessarily orthogonal) across Doppler for transmit beamforming in a waveform-diverse phased-array radar.
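The complementarity property that this design exploits is easy to verify numerically. The sketch below builds a Golay pair by the standard doubling construction and checks that the pair’s aperiodic autocorrelations sum to an impulse; it illustrates only the zero-Doppler complementarity, not the (P, Q) sequencing design of [19]–[22].

```python
def golay_pair(num_doublings):
    """Grow a Golay complementary pair by the doubling rule (a, b) -> (a|b, a|-b)."""
    a, b = [1], [1]
    for _ in range(num_doublings):
        a, b = a + b, a + [-x for x in b]
    return a, b

def autocorr(x):
    """Aperiodic autocorrelation of x at non-negative lags."""
    N = len(x)
    return [sum(x[n] * x[n + k] for n in range(N - k)) for k in range(N)]

a, b = golay_pair(3)  # a length-8 complementary pair
combined = [ra + rb for ra, rb in zip(autocorr(a), autocorr(b))]
print(combined)  # -> [16, 0, 0, 0, 0, 0, 0, 0]: an impulse, zero range sidelobes
```

Each waveform alone has nonzero range sidelobes; only their sum is sidelobe-free, which is why a Doppler shift that unbalances the pair destroys the property and why the sequencing of the pair across the pulse train matters.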

References

[1] Z. Zhang, E. K. P. Chong, A. Pezeshki, and W. Moran, “String submodular functions with curvature constraints,” IEEE Trans. Automatic Control, vol. 61, no. 3, pp. 601-616, Mar. 2016.

[2] Z. Zhang, Y. Wang, E. K. P. Chong, A. Pezeshki, and L. L. Scharf, “Subspace selection for projection maximization with matroid constraints,” IEEE Trans. on Signal Processing, vol. 65, no. 5, pp. 1339-1351, Mar. 2017.

[3] Y. Liu, E. K. P. Chong, A. Pezeshki, and Z. Zhang, “A general framework for bounding approximate dynamic programming schemes,” arXiv:1809.05249v4.

[4] Y. Liu, Z. Zhang, E. K. P. Chong, and A. Pezeshki, “Performance bounds with curvature for batched greedy optimization,” J. Optimization Theory and Applications, vol. 177, no. 2, pp. 535-562, 2018.

[5] Y. Liu, E. K. P. Chong, and A. Pezeshki, “Performance bounds for Nash equilibria in submodular utility systems with user groups,” J. Control and Decision, vol. 5, no. 1, pp. 1-18, 2018.

[6] Y. Liu, E. K. P. Chong, and A. Pezeshki, “Improved bounds for greedy strategy in optimization problems with curvature,” J. Combinatorial Optimization, vol. 37, no. 4, pp. 1126-1149, May 2019.

[7] Y. Liu, E. K. P. Chong, and A. Pezeshki, “Submodular optimization problems and greedy strategies: A survey,” Discrete Event Dynamic Systems, submitted May 2019, arXiv:1905.03308v1.

[8] Y. Chi, A. Pezeshki, L. L. Scharf, and R. Calderbank, “Sensitivity to basis mismatch of compressed sensing,” Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Dallas, TX, Mar. 14-19, 2010.

[9] Y. Chi, L. L. Scharf, A. Pezeshki, and R. Calderbank, “Sensitivity to basis mismatch in compressed sensing,” IEEE Trans. Signal Processing, vol. 59, no. 5, pp. 2182-2195, May 2011.

[10] A. Pezeshki, Y. Chi, L. L. Scharf, and E. K. P. Chong, “Compressed sensing, sparse inversion, and model mismatch,” in Compressed Sensing and Its Applications, New York, NY: Springer, ch. 3, pp. 75-95, 2015.

[11] P. Pakrooh, A. Pezeshki, L. L. Scharf, D. Cochran, and S. D. Howard, “Analysis of Fisher information and the Cramer-Rao bound for nonlinear parameter estimation after random compression,” IEEE Trans. Signal Processing, vol. 63, no. 23, pp. 6423-6428, Dec. 2015.

[12] P. Pakrooh, A. Pezeshki, and L. L. Scharf, “Threshold effects in parameter estimation from compressed data,” Proc. 1st IEEE Global Conf. on Signal and Information Processing, Austin, TX, Dec. 3-5, 2013 (invited paper).

[13] P. Pakrooh, L. L. Scharf, and A. Pezeshki, “Threshold effects in parameter estimation from compressed data,” IEEE Trans. Signal Processing, vol. 64, no. 9, pp. 2345-2354, May 2016.

[14] Z. Zhang, E. K. P. Chong, A. Pezeshki, and W. Moran, “Hypothesis testing in feedforward networks with broadcast failures,” IEEE J. Selected Topics in Signal Processing, special issue on Learning-Based Decision Making in Dynamic Systems under Uncertainty, vol. 7, no. 5, pp. 797-810, Oct. 2013.

[15] Z. Zhang, A. Pezeshki, W. Moran, S. D. Howard, and E. K. P. Chong, “Error probability bounds for balanced binary relay trees,” IEEE Trans. Information Theory, vol. 58, no. 6, pp. 3548-3563, Jun. 2012.

[16] Z. Zhang, E. K. P. Chong, A. Pezeshki, W. Moran, and S. D. Howard, “Detection performance for balanced binary relay trees with node and link failures,” IEEE Trans. Signal Processing, vol. 61, no. 9, pp. 2165-2177, May 2013.

[17] Z. Zhang, E. K. P. Chong, A. Pezeshki, W. Moran, and S. D. Howard, “Learning in hierarchical social networks,” IEEE J. Selected Topics in Signal Processing, special issue on Adaptation and Learning over Complex Networks, vol. 7, no. 2, pp. 305-317, Apr. 2013.

[18] Z. Zhang, E. K. P. Chong, A. Pezeshki, and W. Moran, “Near-optimal distributed detection in balanced binary relay trees,” IEEE Trans. on Control of Network Systems, vol. 4, no. 4, pp. 826-837, Dec. 2017.

[19] A. Pezeshki, A. R. Calderbank, W. Moran, and S. D. Howard, “Doppler resilient Golay complementary waveforms,” IEEE Trans. Information Theory, vol. 54, no. 9, pp. 4254-4266, Sep. 2008.

[20] W. Dang, A. Pezeshki, S. D. Howard, W. Moran, and R. Calderbank, “Coordinating complementary waveforms for sidelobe suppression,” Proc. Forty-fifth Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 6-9, 2011.

[21] W. Dang, A. Pezeshki, S. D. Howard, and W. Moran, “Coordinating complementary waveforms across time and frequency,” Proc. IEEE Statistical Signal Processing Workshop, Ann Arbor, MI, Aug. 2012.

[22] W. Dang, A. Pezeshki, S. D. Howard, W. Moran, and R. Calderbank, “Coordinating complementary waveforms for suppressing range sidelobes in a Doppler band,” IEEE Trans. Signal Processing, submitted Aug. 2019.