Topology, Algebra, and Geometry in Computer Vision
A Virtual Workshop at the International Conference on Computer Vision (ICCV 2021), October 17, 2021
Overview
Much of the data fueling the current rapid advances in data science and computer vision is very high-dimensional or complex. This poses challenges both in building algorithms that can capture meaningful structure and in building analytical techniques that help us understand what that structure means. Mathematicians working in topology, algebra, and geometry have more than a hundred years' worth of finely developed machinery whose purpose is to give structure to, help build intuition about, and generally better understand spaces beyond those that we can easily visualize. This workshop will showcase work that brings methods from topology, algebra, and geometry to bear on challenging questions in computer vision, often posing new questions in mathematics in the process. We interpret mathematics broadly and welcome submissions ranging from manifold methods to optimal transport to topological data analysis to mathematically informed deep learning. We envision our session as an opportunity for researchers building state-of-the-art methods to connect with researchers who have challenging computer vision problems for which standard off-the-shelf techniques do not work.
With this workshop we hope to create a vehicle for disseminating computer vision techniques that utilize rich mathematics and address core challenges in computer vision as described in the ICCV call for papers. Our intention is to build community and increase the exposure of innovative approaches rooted in mathematical theory and understanding. We expect the approaches to address a specific challenge and demonstrate utility on interesting datasets, while we lower the barrier to entry with respect to comparisons against other approaches or across multiple datasets. Mathematically derived techniques target a specific problem, and while they may provide invaluable insights on novel real-world datasets, they may not yield strong performance gains on data-rich benchmark datasets. Through intellectual cross-pollination between the data-driven and mathematically inspired communities, we believe this workshop will support the continued development of both groups and enable new solutions to problems in computer vision.
Topic areas of interest include, but are not limited to:
Geometric Deep Learning
Optimal Transport
Topological Data Analysis
Graph-based Methods
Manifold Methods
Abstract Algebra in Computer Vision
The Keynote Speakers
Dr. Justin Solomon
Massachusetts Institute of Technology
Justin Solomon is an associate professor of Electrical Engineering and Computer Science at MIT. He directs the Geometric Data Processing Group in the MIT Computer Science and Artificial Intelligence Laboratory, which studies problems at the intersection of geometry, optimization, and applications like graphics, vision, and learning.
Dr. Zhizhen Zhao
University of Illinois at Urbana-Champaign
Zhizhen Zhao is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She joined the University of Illinois in 2016. From 2014 to 2016, she was a Courant Instructor at the Courant Institute of Mathematical Sciences, New York University. She received the B.A. and M.Sc. degrees in physics from Trinity College, Cambridge University, in 2008, and the Ph.D. degree in physics from Princeton University in 2013. She is a recipient of the Alfred P. Sloan Research Fellowship (2020–2022). Her research interests include geometric data analysis, signal processing, and machine learning, with applications to imaging sciences and inverse problems, including cryo-electron microscopy image processing and data-driven methods for dynamical systems.
Dr. Henry Adams
Colorado State University
Professor Adams' research interests are in computational topology and geometry, quantitative topology, and topology applied to data analysis. His theoretical work has illuminated the structure of Vietoris-Rips simplicial complexes, a popular tool for approximating the shape of a dataset via persistent homology. He has applied topology to machine learning, computer vision, coverage problems in minimal sensing, collective motion models, and energy landscapes arising in chemistry. Professor Adams is the co-director of the Applied Algebraic Topology Research Network.
Dr. Richard Baraniuk
Rice University
Richard G. Baraniuk is the Victor E. Cameron Professor of Electrical and Computer Engineering at Rice University, a member of the Digital Signal Processing (DSP) and Machine Learning research groups, and Founder/Director of OpenStax. Dr. Baraniuk is a Fellow of the American Academy of Arts and Sciences, National Academy of Inventors, American Association for the Advancement of Science, and IEEE. He has received the DOD Vannevar Bush Faculty Fellow Award (National Security Science and Engineering Faculty Fellowship), the IEEE Signal Processing Society Technical Achievement Award, and the IEEE James H. Mulligan, Jr. Education Medal. He holds 35 US and 4 foreign patents, 6 of which have been licensed to Siemens to radically speed up magnetic resonance imaging (MRI) scans. Notable inventions/co-inventions include: the single-pixel camera, FlatCam, FlatScope, compressive radar imaging, distributed compressive sensing, several ultrawideband analog-to-information converters, and the SPARFA learning and content analytics framework. Dr. Baraniuk's research interests in signal processing and machine learning lie primarily in new theory and algorithms involving low-dimensional models.
The Organizers
Dr. Tegan Emerson
Pacific Northwest National Laboratory
Colorado State University
University of Texas at El Paso
Tegan Emerson received her PhD in Mathematics from Colorado State University. She was a Jerome and Isabella Karle Distinguished Scholar Fellow in optical sciences at the U.S. Naval Research Laboratory from 2017 to 2019. In 2014 she had the honor of being a member of the American delegation at the Heidelberg Laureate Forum. Dr. Emerson is now a Senior Data Scientist and Team Leader in the Data Sciences and Analytics Group at Pacific Northwest National Laboratory. In addition to her role at Pacific Northwest National Laboratory, Dr. Emerson also holds joint appointments as affiliate faculty in the Departments of Mathematics at Colorado State University and the University of Texas at El Paso. Her research interests include geometric and topological data analysis, dimensionality reduction, algorithms for image processing, deep learning, and optimization.
Dr. Henry Kvinge
Pacific Northwest National Laboratory
University of Washington
Henry Kvinge received his PhD in Mathematics from UC Davis, where his research focused on the intersection of representation theory, algebraic combinatorics, and category theory. After two years as a postdoc in the Department of Mathematics at Colorado State University, where he worked on the compressive sensing-based algorithms underlying single-pixel cameras, he joined PNNL as a senior data scientist. These days his work focuses on leveraging ideas from geometry and representation theory to build more robust and adaptive deep learning models and frameworks.
Dr. Tim Doster
Pacific Northwest National Laboratory
Tim Doster is a Senior Data Scientist at the Pacific Northwest National Laboratory. He received the B.S. degree in computational mathematics from the Rochester Institute of Technology in 2008 and the Ph.D. degree in applied mathematics and scientific computing from the University of Maryland, College Park, in 2014. From 2014 to 2016, he was a Jerome and Isabella Karle Distinguished Scholar Fellow at the U.S. Naval Research Laboratory before becoming a permanent research scientist in its Applied Optics Division. During his time at the U.S. Naval Research Laboratory, he won the prestigious DoD Laboratory University Collaboration Initiative (LUCI) grant. His research interests include machine learning, harmonic analysis, manifold learning, remote sensing, few-shot learning, and adversarial machine learning.
Dr. James Murphy
Tufts University
James M. Murphy is an assistant professor of mathematics and adjunct assistant professor of electrical and computer engineering at Tufts University. He earned a B.S. in mathematics from the University of Chicago (2011) and a Ph.D. in mathematics from the University of Maryland, College Park (2015). Before arriving at Tufts in 2018, he held postdoctoral positions at Duke University (2015-2016) and Johns Hopkins University (2016-2018). His work is primarily in applied harmonic analysis, statistical and machine learning, and data science. He works on problems in unsupervised and semisupervised learning, anomaly detection, and image processing using methods of high-dimensional statistics, spectral graph theory, and dictionary learning. He also designs and implements fast algorithms and develops methodologies for scientific applications. He is particularly interested in applied problems in remote sensing, computational chemistry, and network science.
Dr. Soumik Pal
University of Washington
Soumik Pal is the Robert B. Warfield Jr. Endowed Professor of Mathematics and adjunct Professor of Applied Mathematics at the University of Washington, Seattle. He earned his Ph.D. at Columbia University and held a postdoctoral position at Cornell University before joining UW. His primary research area is probability theory, covering diverse topics such as interacting Brownian particle systems, random graphs and random matrices, stochastic portfolio theory, information geometry, and evolving random trees. His current interest is in Monge-Kantorovich optimal transport problems and their applications. He is a founding member of the Kantorovich Initiative (kantorovich.org), a recent effort toward research on and dissemination of the modern mathematics of optimal transport to a wide audience of researchers, students, industry, policy makers, and the general public.
The Program Committee
Shuchin Aeron (Tufts University)
Adam Attarian (Pacific Northwest National Laboratory)
Sofya Chepushtanova (Wilkes University)
Nicholas Courts (Pacific Northwest National Laboratory)
Paul Escande (Institut de Mathématiques de Marseille)
Grayson Jorgenson (Pacific Northwest National Laboratory)
Samson Koelle (University of Washington)
Elizabeth Newman (Emory University)
George Stantchev (U.S. Naval Research Laboratory)
Abiy Tasissa (Tufts University)
Sarah Tymochko (Michigan State University)
Accepted Papers and Schedule
Chuan-Shen Hu, Austin Lawson, Yu-Min Chung, and Kaitlin Keegan, Two-parameter Persistence for Images via Distance Transform
Xiaofeng Ma, Michael Kirby, and Chris Peterson, The Flag Manifold as a Tool for Analyzing and Comparing Sets of Data Sets
Stephen Zhang, A unified framework for non-negative matrix and tensor factorisations with a smoothed Wasserstein loss
Amit Efraim and Joseph M Francos, Dual Transformation and Manifold Distances Voting for Outlier Rejection in Point Cloud Registration
Yuval Haitman, Joseph M Francos, and Louis L. Scharf, Grassmannian Dimensionality Reduction for Optimized Universal Manifold Embedding Representation of 3D Point Clouds
Henry Kvinge, Brett Jefferson, Cliff Joslyn, and Emilie Purvine, Sheaves as a Framework for Understanding and Interpreting Model Fit
Yuliang Cai, Alexander Cloninger, Srinjoy Das, Sumit Mohan, Nilesh Jain, and Adithya M Niranjan, A Manifold Learning based Video Prediction approach for Deep Motion Transfer
Mark Blumstein and Henry Kvinge, Multi-Dimensional Scaling on Groups