Persistence Diagram Cardinality and its Applications



University of Tennessee, Knoxville
TRACE: Tennessee Research and Creative Exchange

Doctoral Dissertations, Graduate School

5-2020

Persistence Diagram Cardinality and its Applications

Cassie Putman Micucci, University of Tennessee, [email protected]

Follow this and additional works at: https://trace.tennessee.edu/utk_graddiss

Recommended Citation: Micucci, Cassie Putman, "Persistence Diagram Cardinality and its Applications." PhD diss., University of Tennessee, 2020. https://trace.tennessee.edu/utk_graddiss/5893

This Dissertation is brought to you for free and open access by the Graduate School at TRACE: Tennessee Research and Creative Exchange. It has been accepted for inclusion in Doctoral Dissertations by an authorized administrator of TRACE: Tennessee Research and Creative Exchange. For more information, please contact [email protected].


To the Graduate Council:

I am submitting herewith a dissertation written by Cassie Putman Micucci entitled "Persistence

Diagram Cardinality and its Applications." I have examined the final electronic copy of this

dissertation for form and content and recommend that it be accepted in partial fulfillment of the

requirements for the degree of Doctor of Philosophy, with a major in Mathematics.

Vasileios Maroulas, Major Professor

We have read this dissertation and recommend its acceptance:

Jan Rosinski, Kenneth Stephenson, Haileab Hilafu

Accepted for the Council:

Dixie L. Thompson

Vice Provost and Dean of the Graduate School

(Original signatures are on file with official student records.)


Persistence Diagram Cardinality and

its Applications

A Dissertation Presented for the

Doctor of Philosophy

Degree

The University of Tennessee, Knoxville

Cassie Putman Micucci

May 2020


© by Cassie Putman Micucci, 2020

All Rights Reserved.


To my family


Acknowledgments

First, I must acknowledge the guidance and extensive feedback received on many occasions

from my advisor, Professor Vasileios Maroulas. This, along with frequent pushes toward

growth, greatly contributed to my success within the Ph.D. program and beyond. I also

greatly appreciate Dr. Haileab Hilafu, Professor Ken Stephenson, and Professor Jan Rosinski

for serving on my dissertation committee. My academic family (specifically Chris Oballe,

Adam Spannaus, Dr. Farzana Nasrin, Ephy Love, Dr. Andy Marchese, Dr. Josh Mike) have

also contributed with many helpful discussions and suggestions. Additionally, I would like

to acknowledge that this work has been partially supported by ARO Grant # W911NF-17-1-0313.

I would like to thank Pam Armentrout, whose listening ear and genuine concern for my

welfare helped me significantly on many occasions. My family have also been an invaluable

resource throughout my studies. I would like to thank my parents for their

continued optimism and my grandparents, extended family, and in-laws for their consistent

encouragement. My husband has also provided key support during all the ups and downs of

my time in graduate school.


Abstract

This dissertation studies persistence diagrams and their usefulness in machine learning.

Persistence diagrams are summaries of underlying topological structure present within data.

These diagrams are especially applicable for analyzing data whose shape is a relevant

descriptor, as they provide unique information as sets instead of vectors. Although there

are methods for vectorizing persistence diagrams, we focus instead on statistical learning

schemes that use persistence diagrams directly.

In particular, the cardinality of the diagrams proves itself a useful indicator, although this

cardinality is variable at higher dimensions. To better understand and use the cardinality

of persistence diagrams, we prove that the cardinality is bounded using both statistics and

geometry. We also prove stability of a cardinality-based persistence diagram distance in a

continuous fashion. These results are then applied to analyze persistence diagrams generated

from structures of materials.

We also develop a Bayesian framework for modeling the cardinality and spatial

distributions of points within persistence diagrams using i.i.d. cluster point processes. From

Gaussian mixture and binomial priors, we derive equations for the posterior cardinality

and spatial distributions. We also introduce a distribution to account for noise typical of

persistence diagrams. This framework provides a means of classifying biochemical networks

present within cells using Bayes factors. We also provide a favorable comparison of the

Bayesian classification of networks with several other methods.


Table of Contents

1 Introduction to Statistical Topological Data Analysis

2 Persistent Homology Background

2.1 Basic Topological Notions

2.2 Simplicial Homology

2.3 Persistent Homology

3 A Stable Cardinality Distance for Persistence Diagrams

3.1 Background

3.2 Stability Properties of Persistence Diagram Distances

3.3 Distance-Based Machine Learning for Materials Data

3.4 Conclusion

4 Cardinality-Driven Bayesian Framework for Persistence Diagrams

4.1 Background

4.1.1 Problem Description

4.1.2 I.I.D. Cluster Point Processes

4.2 Bayesian Inference

4.2.1 Model Characterization

4.2.2 Posterior Estimation

4.2.3 Conjugate Family of Priors

4.3 Sensitivity Analysis

4.3.1 Example 1


4.4 Classification of Actin Filament Networks

4.4.1 Data

4.4.2 Bayesian Classification Method

4.4.3 Methods for Comparison

4.4.4 Result

4.5 Conclusion

Bibliography

Vita


List of Tables

4.1 List of parameters for (M1) used in Example 1. For Cases I and II, we take into account two types of prior intensities: (i) informative and (ii) unimodal uninformative.

4.2 List of parameters for (M2) and (M3) used in Example 1. For Case I, we consider the 1-dimensional persistence features obtained from the point clouds with Gaussian noise having variance 0.01I2 (see Table 4.3), and for Case II those obtained from point clouds with noise variance 0.1I2 (see Table 4.4). To perform a sensitivity analysis, we implement different parameters for (M2) and (M3), which are listed here.

4.3 Posterior intensities and cardinalities for Example 1, Case I, obtained by using Proposition 4.2.1. We consider informative and unimodal uninformative priors for the intensity in columns 2 and 3, respectively, and informative and uniform priors for the cardinality in rows 2 and 3, respectively. The list of associated parameters is described in Example 1. Posteriors computed from all of these priors can estimate the 1-dimensional holes accurately for a list of varying parameters.

4.4 Posterior intensities and cardinalities for Example 1, Case II, obtained by using Proposition 4.2.1. We consider informative and unimodal uninformative priors for the intensity in columns 2 and 3, respectively, and informative and uniform priors for the cardinality in rows 2 and 3, respectively. The list of associated parameters is described in Example 1.

4.5 Comparison of accuracy results for the actin network classification.


List of Figures

2.1 The yellow triangles in (a) make up a 2-chain with boundary given in (b).

2.2 This simplicial complex has β0 = 1 since it has one connected component. It also has β1 = 1 since there is one loop.

2.3 (a) Functional approximation of the points in (c) using a kernel density estimator. (b) Persistence diagram created using sublevel set persistence. (c) Point cloud. (d) Vietoris-Rips complex (orange points and blue line segment) for the shown radius. (e) A 1-dimensional hole is formed in the Vietoris-Rips complex. (f) The persistence diagram created from Vietoris-Rips complexes.

2.4 (a) shows the persistence diagram from Figure 2.3 (f), and (b) shows the same diagram transformed by (birth, death) ↦ (birth, persistence).

3.1 Consider two persistence diagrams, one given by the green squares and another by the purple circles. (a) The bottleneck distance only considers the largest matching. (b) The Wasserstein distance imposes a cost of 0.1 to the extra purple point (the ℓ∞ distance to the diagonal). (c) The dcp distance imposes a penalty c on the point instead.

3.2 An example of 8-point arrangements to visualize the proof of Proposition 3.9. (a) A 3-hole configuration vs. (b) a 2-hole configuration.

3.3 Example of body-centered cubic (BCC) (a) and face-centered cubic (FCC) (b) unit cells without additive noise or sparsity. Notice there is an essential topological difference between the two structures: the body-centered cubic structure has one atom at its center, whereas the face-centered cubic is hollow in its center but has one atom in the middle of each of its faces.


3.4 Example of persistence diagrams generated by (a) a BCC lattice and (b) an FCC lattice. The data has a noise standard deviation of τ = 0.75, and 67% of the atoms are missing. Note that the BCC diagram has two prominent (far from the diagonal) points representing 1-dim holes and fewer connected components and 1-dim holes than does the FCC diagram.

3.5 Top: Number of connected components (in this case atoms), b0, against the number of 1-dim homological features, b1, of the persistence diagrams. One can see the presence of heteroscedasticity since the variance of b1 increases as b0 increases. The diagrams were generated from the crystal structures with N(0, 0.75²) noise and 67% of atoms missing. Bottom: Same as in top but using a quadratic transformation of the predictor variable, along with the weighted least squares fit line and 95% prediction intervals provided by Proposition 3.10.

3.6 Accuracy scores for dcp (red), Wasserstein (blue), and counting (green) classifiers, plotted against different standard deviations, τ, of the normally distributed noise of the atomic positions. In each instance, the sparsity has been fixed at 67% of the atoms missing, as in a true APT experiment.

4.1 (a) is an electron micrograph of an actin filament. (b), (c), and (d) are samples from three classes with number of cross-linking proteins N = 825, 1650, and 3300, respectively. Notice there is an essential topological difference between the networks in (b), (c), and (d): the loops in the networks in (c) and (d) are more prominent than those in (b).

4.2 (a) A sample DX (triangles) from the prior point process DX and an observed persistence diagram DY (dots) generated from DY. (b-c) Color-coded representations of possible configurations for the observed and vanished features in the prior, along with the marked point process (DXO, DYO) and the unexpected features of the data DYU. (d-e) show the effect of DXO and DYU on the cardinality distribution of the observation DY.


4.3 (a), (b), and (c) give an example of a persistence diagram generated from networks in classes 1, 2, and 3, respectively.

4.4 (a) An example filament network from class 1. (b) The network in (a) converted to a raster image.


Chapter 1

Introduction to Statistical Topological

Data Analysis

This dissertation studies the effect and usefulness of cardinality for persistence diagrams and

their applications. Part of the field of applied topology, persistence diagrams are summaries of

the shape of data. The inherent shape properties of many datasets are key to the creation of

meaningful, accurate machine learning schemes. Integrating the shape of data into machine

learning has seen success through the ideas of manifold learning [68, 90, 58], density-based

clustering [51, 50, 24], and low-dimensional embeddings [102, 29, 6]. These methods capture

local geometry and other structural properties well but do not explicitly describe loops,

holes, and voids within the underlying data. Without the mathematical theory of homology,

which algebraically studies holes and connectedness, rich descriptors of these datasets remain

unrealized.

For point cloud data, that is, finite point sets in Rn, another problem arises:

the data is discrete, but we would like to infer information about shape and homology,

concepts difficult to extract from discrete structures. To address this question of homology

for discrete data, several early researchers in computational topology [36, 114, 16] developed

topological data analysis (TDA) to use the machinery of topology to describe data. These

studies went beyond earlier ideas [45, 94] to create summaries of the homology present in

data. They introduced the concept of persistent homology, which studies the evolution of

homological features in data as some threshold is varied. Persistent homology furnishes a


summary of the connectedness and holes (or empty space) of data, which indirectly gives

information about the shape of the data as well. This analysis quantifies the significance

of a homological feature and provides a tool to contend with noisy data. The persistence

diagram (introduced in [36]) provides a visualization of persistent homology and is created

from continuous structures built upon discrete data. The appearance and disappearance of

each homological feature as these structures evolve is calculated and recorded in a persistence

diagram. Practical algorithms for these computations have been developed [36] and improved

upon [80, 110, 62, 28, 81], so that libraries for computation in R, C, and Python are available

[73, 40, 5, 106, 2].

Successful applications of TDA have been recorded in a diverse body of fields. Researchers

[7, 17] have studied image analysis through the lens of persistent homology. Periodicity and

quasiperiodicity have been measured and quantified in video data [107]. Certain human

activities can also be classified using persistence diagrams [109]. TDA also provides a means

to study and describe signals [47, 71, 87, 49] and sensor networks [32].

There are several applications in chemistry and astronomy as well. For example,

persistent homology can be used to identify astrophysical structures in the cosmic web

[98]. Persistent homology also clearly describes phase transitions [33]. TDA also gives a

way to measure similarity of pore geometries in nanoporous materials [67]. Several studies

[55] showed that the atomic structure of amorphous solids is elucidated through persistence

diagrams. Additionally, persistence diagrams are able to predict binding energies for a large

database of molecules [105].

There have also been many applications within biology. Several studies have used

persistent homology to describe the structure and shape of the brain [8, 66, 30]. TDA has

also been successful in describing relevant changes in shape during the process of membrane

fusion [60]. Protein structure and folding is another application well-suited for persistent

homology [112, 15, 14]. Also, the tracking of subcellular particles and their trajectories can

be achieved through TDA [96].

In this work, we focus specifically on persistence diagrams and their applications to

materials science and biochemistry. Persistence diagrams are planar multisets instead of

vectors, since they have no fixed cardinality (size) or ordering. For each homological


dimension, k = 0, 1, 2, . . ., a persistence diagram can be computed for a given point cloud.

In most applications, only the first few dimensions are necessary for obtaining features for

machine learning. Diagrams can be compared using several distances, such as the Hausdorff,

bottleneck, and Wasserstein distances; additionally, they are stable with respect to noise

[26, 21]. Their stability is a key factor in their success in applications with realistic data

involving noise.
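To make the bottleneck distance mentioned above concrete, it can be computed by brute force for very small diagrams: augment each diagram with the ℓ∞-nearest diagonal projections of the other diagram's points, then minimize the largest matching cost over all bijections, with diagonal-to-diagonal pairings free. The sketch below is only illustrative (the function names are ours; practical TDA software uses much faster geometric matching algorithms).

```python
from itertools import permutations

def linf(p, q):
    # l_inf distance between two points of the plane
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def diag(p):
    # l_inf-nearest point on the diagonal y = x
    m = (p[0] + p[1]) / 2.0
    return (m, m)

def bottleneck(d1, d2):
    """Brute-force bottleneck distance between two tiny persistence
    diagrams, given as lists of (birth, death) pairs. Each diagram is
    augmented with the diagonal projections of the other's points, and
    matching two diagonal copies together costs nothing."""
    a = [(p, False) for p in d1] + [(diag(q), True) for q in d2]
    b = [(q, False) for q in d2] + [(diag(p), True) for p in d1]
    best = float("inf")
    for perm in permutations(range(len(b))):
        cost = 0.0
        for i, j in enumerate(perm):
            (p, p_diag), (q, q_diag) = a[i], b[j]
            if not (p_diag and q_diag):  # diagonal-to-diagonal is free
                cost = max(cost, linf(p, q))
        best = min(best, cost)
    return best
```

For instance, comparing a diagram holding the single feature (0, 4) with one that also carries a near-diagonal point (2, 2.2), the true features match exactly and the extra point is sent to the diagonal, giving a bottleneck distance of 0.1. The factorial search restricts this sketch to a handful of points, which is precisely why production implementations rely on matching algorithms instead.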

Once persistence diagrams are computed, they can be used in machine learning schemes

in place of the original data used to create the diagrams. There are many different ways to

use the diagrams for regression, classification, or clustering. In one approach, persistence

diagrams are converted from multisets to vectors in Rn. This can be accomplished through

statistical summaries of features of the persistence diagram, such as the birth, death,

persistence, and cardinality [39, 88] or by the use of Gaussian kernels [63]. Another

persistence diagram representation using different kernels is given in [93]. Distance matrices

also provide a way to represent persistence diagrams [19]. Persistence landscapes transform

the diagrams into piecewise functions [12, 13], and persistence images transform a persistence

diagram to a planar image, which can then be converted to a vector [1]. Persistence

landscapes and persistence images have the advantage of functional representations in a

separable Banach space. Persistence landscapes do not have unique means but are invertible;

however, persistence images are more efficient to compute.

Statistics and probability can also be performed directly on the persistence diagrams. A

paradigm for considering probability measures for persistence diagrams has been introduced

in [78], which considers the data underlying the diagrams. Similarly, the persistent homology

of random fields can also be studied [3]. Another paper proves a strong law of large

numbers for persistence diagrams that are defined on stationary point processes [56]. Rates

of convergence for the expected bottleneck distance between persistence diagrams can be

calculated, as shown in [23]. Studies [108, 71] have provided ways to calculate Fréchet

means of persistence diagrams. Also, confidence sets can be defined for persistence diagrams

[41], as can kernel density estimators [75]. The persistent homology of data represented as

functions can also be estimated via kernels [9]. Summary statistics of persistence diagrams

including cardinality, birth, and death are used for parametric inference in [38]. Additionally,


hypothesis testing can be performed on persistence diagrams using permutation tests, as

shown in [95]. Distances on the space of persistence diagrams also enable distance-based

statistical learning frameworks such as clustering [72, 64]. A framework for neural networks

used directly on persistence diagrams has also been created [18].

This work considers two methods for machine learning for persistence diagrams. One

method uses a distance directly on persistence diagrams. The other builds a Bayesian

paradigm for the diagrams. This dissertation is organized into four chapters, which explain

these two approaches. This first chapter gives an overview of statistical topological data

analysis, its history, and the key themes that will appear in later chapters. Chapter 2 provides

the necessary theorems, definitions, and background information in persistent homology

necessary to understand the work presented in later chapters. Chapter 3 discusses the

dcp distance for persistence diagrams, which was introduced in [71]. This distance directly

incorporates the cardinality of the persistence diagrams to compare them. We prove a

stability theorem for persistence diagrams under this distance, as well as give probabilistic

bounds on fluctuations of the distance. Additionally, we provide bounds for the cardinality of

persistence diagrams. One proof uses weighted least squares to derive probabilistic bounds,

and the other considers geometric properties of the Vietoris-Rips complex to generate a

theoretical bound. Then we show an application of the distance to analyzing materials

science data.

In Chapter 4, we introduce a novel Bayesian paradigm for persistence diagrams modeled

as finite point processes. This framework allows for the modeling of both the spatial

distribution of points in the diagram and the cardinality of the diagram simultaneously.

Previous studies [76] ignored the cardinality component by considering a Poisson point

process instead of the more general i.i.d. cluster processes used herein. The Poisson framework

reduces the computational burden, but it approximates the cardinality distribution from the

spatial estimation. Instead, we focus on developing a Bayesian framework that is capable of

estimating the cardinality distribution without relying completely on the spatial distribution.

We use Gaussian mixtures as priors and derive the formulas for the posterior intensity

distribution. This posterior calculation includes terms to account for points that appear or

disappear due to noise. We also derive the posterior distribution of the number of points


from a binomial prior distribution. This is critical, as the importance of cardinality

in persistence diagrams has been underlined in problems related to statistics and machine

learning [41, 71, 74, 62]. This Bayesian paradigm then provides a classification method for

biochemical networks, which have never been successfully classified before. Using Bayes

factors, we classify the networks at 90% accuracy and provide a comparison with other

topological and standard methods.

To summarize, there are five main contributions of this dissertation:

1. Proof of the stability of the dcp distance in a continuous fashion.

2. Theoretical and statistical bounds on the number of 1-dimensional features represented

in a persistence diagram based on the cardinality of the underlying point cloud.

3. A Bayesian framework that simultaneously estimates both the posterior spatial and

the posterior cardinality distribution of persistence diagrams, as well as accounts for

noise specific to persistence diagrams in a novel way.

4. A general closed form expression of both posterior distributions given certain prior

distributions.

5. A Bayesian classification algorithm for actin filament networks of plant cells that

directly incorporates the variations in topological structures of those networks such

as size and numbers of loops.


Chapter 2

Persistent Homology Background

In this chapter we introduce the background on persistent homology and persistence diagrams

necessary for the rest of the work. Persistent homology extracts shape information from data

that can be used in machine learning tasks. It is an especially helpful tool for analyzing point

cloud data in Rn. Since point cloud data consists only of discrete points, it has only trivial

topological features. However, inherent in many such datasets are nontrivial topological and

geometric features that are relevant descriptors. Below we explain the concepts of persistent

homology that identify these shape descriptors. We also formally introduce the persistence

diagram, a summary representation of the homology of a dataset. For further details on

persistent homology and computational topology, the reader is referred to [35, 46, 34, 111].

2.1 Basic Topological Notions

We begin with the necessary basic topological notions which may be found in [85]. First, we

define topology.

Definition 2.1. Let X be a set. A topology T on X is a collection of subsets of X such that

1. X and ∅ are in T,

2. the union of the sets in any subcollection of T is in T, and

3. the intersection of the sets in any finite subcollection of T is in T.


A topological space is a set along with a topology on the set. From this point forward,

any spaces mentioned will be topological spaces, typically Euclidean ones. The next

two definitions provide a way to compare the topology of two spaces through continuous

functions.

Definition 2.2. Suppose that f and g are continuous functions from X to Y. f and g are

homotopic if there exists a continuous function F : X × [0, 1]→ Y such that F (x, 0) = f(x)

and F (x, 1) = g(x) for x ∈ X.

Definition 2.3. The spaces X and Y are homotopy equivalent if there exist continuous functions f : X → Y and g : Y → X such that g ◦ f is homotopic to the identity function 1X : X → X given by 1X(x) = x, and likewise f ◦ g is homotopic to 1Y : Y → Y.

Now we consider X, a finite set of points in Rn under some metric, typically the Euclidean

metric. To extract topological and geometric information from this set of points, a method

is needed. Specifically, we would like to build an object with significant nontrivial topology out

of the point cloud. The simplicial complex provides such a link from the set of points to a

non-discrete object. The next two definitions are necessary for defining simplices, which are

the building blocks of the simplicial complex.

Definition 2.4. The convex hull of a finite set of points {x1, . . . , xn} is the set of all points of the form α1x1 + · · · + αnxn, where αi ≥ 0 for all i and α1 + · · · + αn = 1.

Definition 2.5. The set of points {x1, . . . , xn} is affinely independent if whenever α1x1 + · · · + αnxn = 0 and α1 + · · · + αn = 0, then αi = 0 for all i.
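Definition 2.5 is equivalent to asking that the difference vectors x2 − x1, . . . , xn − x1 be linearly independent, which gives a direct numerical test. The helper below is our own illustrative sketch (plain floating-point Gaussian elimination with a tolerance), not part of any library.

```python
def affinely_independent(points, tol=1e-9):
    """Points are affinely independent iff the differences x_i - x_1
    (i >= 2) are linearly independent; test this via the rank of the
    matrix of differences, computed by Gaussian elimination."""
    if len(points) <= 1:
        return True
    rows = [[xi - x0 for xi, x0 in zip(p, points[0])] for p in points[1:]]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        # find a pivot row with a nonzero entry in this column
        pivot = next((i for i in range(rank, len(rows))
                      if abs(rows[i][col]) > tol), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and abs(rows[i][col]) > tol:
                f = rows[i][col] / rows[rank][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank == len(rows)
```

Three corners of a triangle in the plane pass the test, while three collinear points, or any four points in R2, fail, matching the fact that a k-simplex requires k + 1 affinely independent vertices.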

With these two definitions in hand, we may now give the definition of simplex.

Definition 2.6. A k-simplex is the convex hull of an affinely independent point set of

cardinality k + 1.

For example, a 0-simplex is a point, a 1-simplex is a line segment, a 2-simplex is a triangle,

and a 3-simplex is a tetrahedron.

Definition 2.7. The convex hull of a nonempty subset of the k + 1 points of a k-simplex is called a face of the simplex.


For example, a triangle (including its interior) has three 1-faces (the edges) and three 0-faces (the vertices).

Definition 2.8. An abstract simplicial complex σ is a collection of simplices such that for

every set A in σ and every nonempty set B ⊂ A, we have that B is in σ.

An example of a simplicial complex with 4 0-simplices, 9 1-simplices, and 3 2-simplices

is shown in Figure 2.1 (a).
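The defining condition of Definition 2.8, closure under taking nonempty subsets, is mechanical to verify. A small sketch (the function name is ours), representing each simplex as a frozenset of vertex labels:

```python
from itertools import combinations

def is_simplicial_complex(simplices):
    """Check downward closure: every nonempty proper subset of each
    simplex must itself belong to the collection."""
    family = set(simplices)
    for s in family:
        for r in range(1, len(s)):
            for face in combinations(s, r):
                if frozenset(face) not in family:
                    return False
    return True
```

A filled triangle together with all of its edges and vertices passes the check; deleting one edge while keeping the 2-simplex breaks closure.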

2.2 Simplicial Homology

We now give the necessary definitions for computing the homology of a simplicial complex.

The definitions and theorems given apply to abstract simplicial complexes in general,

although our primary interest will be in analyzing the homology of complexes built from

point cloud data.

Definition 2.9. Suppose that σ is a simplicial complex. For dimension k, a k-chain is a formal sum ∑i aiσi of simplices, where the ai are coefficients modulo 2 and the σi are k-simplices.

k-simplices.

Here, a k-chain may be considered a set of k-simplices since we only allow coefficients

modulo 2. See Figure 2.1 for an example. Other coefficients could be used but are

unnecessary for the scope of this work. There is also an intuitive sense of a boundary

for a k-chain, given in the following definition.

Definition 2.10. For a k-simplex, the boundary operator ∂k returns the formal sum of its

(k − 1)-dimensional faces, called the boundary.

Definition 2.11. A k-boundary is a k-chain which is the boundary of a (k + 1)-chain.

We note that ∂k(∑i aiσi) = ∑i ai∂k(σi), so ∂k is an additive map. Thus the k-boundaries form a group Bk(X) under the component-wise addition operation on simplices.

Definition 2.12. A k-cycle is a k-chain c with ∂k(c) = 0.


(a) Simplicial complex (b) 1-boundary of 2-chain

Figure 2.1: The yellow triangles in (a) make up a 2-chain with boundary given in (b).

An example of a 1-cycle is shown in Figure 2.2. These k-cycles form another group,

denoted Zk(X), and given explicitly by ker ∂k since ∂k is a homomorphism. The next lemma

gives that Bk is a subgroup of Zk.

Lemma 2.13 ([35]). For a k-chain c, we have that ∂k−1∂kc = 0.

Now we finally define a homology group as the quotient group of the two groups defined

previously.

Definition 2.14. The k-dimensional homology group Hk(X) is given by the quotient group Zk(X)/Bk(X), and its rank is called the k-th Betti number, denoted βk(X).

A simplicial complex, along with a description of its simplicial homology, is shown in

Figure 2.2.
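Since Hk(X) = Zk/Bk, the Betti numbers of Definition 2.14 reduce to ranks of boundary matrices over GF(2). The sketch below (helper names are ours) recovers β0 = β1 = 1 for a square loop like the complex in Figure 2.2:

```python
import numpy as np
from itertools import combinations

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def boundary_matrix(k_simplices, faces):
    """Matrix of the boundary operator over GF(2):
    columns index k-simplices, rows index their (k-1)-faces."""
    D = np.zeros((len(faces), len(k_simplices)), dtype=np.uint8)
    for j, s in enumerate(k_simplices):
        for f in combinations(s, len(s) - 1):
            D[faces.index(f), j] = 1
    return D

# A square loop: one connected component, one 1-dimensional hole.
vertices = [(0,), (1,), (2,), (3,)]
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
d1 = boundary_matrix(edges, vertices)
b0 = len(vertices) - gf2_rank(d1)   # dim Z_0 - rank B_0 (Z_0 is all 0-chains)
b1 = len(edges) - gf2_rank(d1)      # dim ker d1 - rank d2 (no 2-simplices here)
print(b0, b1)  # 1 1
```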

2.3 Persistent Homology

The following two abstract simplicial complexes are key to persistent homology. They enable

the analysis of point cloud data through homology by creating a simplicial complex based

on the shape of the underlying data.

Definition 2.15. The Čech complex for threshold ε, denoted Čech(ε), is the collection of

simplices determined in the following way: a k-simplex with vertices determined by k + 1


Figure 2.2: This simplicial complex has β0 = 1 since it has one connected component. It also has β1 = 1 since there is one loop.

points in X is included in Čech(ε) whenever balls of radius ε/2 placed at the points have a common

point of intersection.

Definition 2.16. The Vietoris-Rips complex for threshold ε > 0, denoted VR(ε), is the

collection of simplices determined in the following way: a k-simplex with vertices determined

by k + 1 points in X is included in VR(ε) whenever balls of radius ε/2 placed at the points all have

pairwise intersections.

The Čech complex is a reasonable representation of the topology of the underlying point

set X, as shown in [52, 4, 35] via the Nerve Theorem.

Definition 2.17. Suppose that Y is a finite collection of convex sets in Euclidean space.

The nerve of Y is given by Nerve(Y) = {Z ⊆ Y : ⋂Z ≠ ∅}.

Theorem 2.18 (Nerve Theorem [52, 4, 35]). Suppose that Y is a finite collection of convex

sets in Euclidean space. Then Nerve(Y ) and the union of all sets in Y are homotopy

equivalent.
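Definition 2.15 can be made concrete: for equal-radius balls, the ε/2 balls around k + 1 points share a common point exactly when the minimal enclosing ball of those points has radius at most ε/2. A small planar sketch (function names are ours, restricted to triangles for brevity):

```python
import numpy as np
from itertools import combinations

def meb_radius(P):
    """Radius of the minimal enclosing ball of up to 3 points in R^2."""
    P = np.asarray(P, dtype=float)
    if len(P) == 1:
        return 0.0
    if len(P) == 2:
        return np.linalg.norm(P[0] - P[1]) / 2
    # 3 points: a ball on the longest side may already contain the third point
    for i, j in combinations(range(3), 2):
        c = (P[i] + P[j]) / 2
        r = np.linalg.norm(P[i] - P[j]) / 2
        k = 3 - i - j
        if np.linalg.norm(P[k] - c) <= r + 1e-12:
            return r
    # otherwise the circumcircle: solve 2(Pi - P0).x = |Pi|^2 - |P0|^2
    A = 2 * (P[1:] - P[0])
    b = (P[1:] ** 2 - P[0] ** 2).sum(axis=1)
    center = np.linalg.solve(A, b)
    return np.linalg.norm(P[0] - center)

def cech_triangles(X, eps):
    """Triangles of Cech(eps): the eps/2 balls share a common point iff the
    minimal enclosing ball of the vertices has radius <= eps/2."""
    return [t for t in combinations(range(len(X)), 3)
            if meb_radius([X[i] for i in t]) <= eps / 2]

# Equilateral triangle with side 1: circumradius 1/sqrt(3) ~ 0.577, so the
# triangle enters Cech(eps) only for eps >= 2/sqrt(3) ~ 1.155, even though
# all of its edges are already present in VR(eps) at eps = 1.
X = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
print(cech_triangles(X, 1.1), cech_triangles(X, 1.2))
```

The example also illustrates the gap that the Vietoris-Rips Lemma quantifies: the triangle appears in VR strictly earlier than in Čech.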

The Vietoris-Rips complex is more computationally tractable because instead of checking

for intersections, one need only check pairwise distances. The Vietoris-Rips complex contains

the Čech complex, and the following theorem gives a reverse inclusion:


Theorem 2.19 (Vietoris-Rips Lemma [35]). For ε > 0 and X as above, VR(ε) ⊆ Čech(√2 ε).
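Since membership in VR(ε) depends only on pairwise distances, the complex is easy to enumerate for small point clouds. A naive, unoptimized sketch (the function name is ours):

```python
import numpy as np
from itertools import combinations

def vietoris_rips(X, eps, max_dim=2):
    """Vietoris-Rips complex VR(eps): a simplex enters once all pairwise
    distances among its vertices are <= eps (eps/2 balls meet pairwise).
    Returns a list of simplex lists, indexed by dimension."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    simplices = [[(i,) for i in range(n)]]
    for k in range(1, max_dim + 1):
        simplices.append([s for s in combinations(range(n), k + 1)
                          if all(D[i, j] <= eps for i, j in combinations(s, 2))])
    return simplices

# Four corners of the unit square at eps = 1: the four sides appear,
# but the diagonals (length sqrt(2)) do not, so no triangles form yet.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
cx = vietoris_rips(square, 1.0)
print(len(cx[1]), len(cx[2]))  # 4 0
```

This brute-force enumeration is exponential in the number of points; production TDA libraries build the complex incrementally, but the membership rule is exactly the one above.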

Having formed a meaningful simplicial complex on the data points X, we may now begin

to form summaries of the homology suitable for machine learning. Consider a function f

from a simplicial complex σ to R. Assume that f is monotonic in the sense that if A,B ∈ σ

with A ⊆ B, then f(A) ≤ f(B). Thus f−1(−∞, t], the sublevel sets of f , form an increasing

sequence of complexes as t increases. Corresponding to this evolution of simplicial complexes

is an evolution of their simplicial homology. This yields a sequence of homology groups

Hk(f−1(−∞, t1]) ⊆ Hk(f−1(−∞, t2]) ⊆ · · · ⊆ Hk(f−1(−∞, ti]) ⊆ · · · . Thus homology classes

are created and destroyed as the complexes evolve.

Such shifts in homology classes can be tracked, summarized, and recorded in a persistence

diagram. The persistence diagram displays the values of t at which the homology of

f−1(−∞, t] changes. The value of t at which the homology class first appears is called

the birth. The value of t at which the homology class disappears is called the death. Thus

for each dimension k a persistence diagram is a multiset in R2 where the coordinates of each

element represent the birth and death of a k-dim homology class from an underlying set of

data points. It may happen that two or more homology classes merge into one class within

the same sublevel set. In such a case, one applies the Elder Rule, which states that the class

that was born first continues and the younger class dies.

Given a finite dataset in Rn, the function f can be chosen to act as an approximation

to the data. Typical functions chosen for this framework are kernel density estimators as

in [41] and the distance to measure function as in [22]. In the rest of this work, we use only

the Vietoris-Rips complexes for a given dataset to construct persistence diagrams. In this

case, the monotonic function is induced by the distance metric on the space. Note Figure

2.3 which shows the difference between the two ways of constructing persistence diagrams.

Note that by definition, birth ≤ death for each homological feature. Thus all points

in a persistence diagram are above the diagonal, as pictured in Figure 2.3. Also, since

the Vietoris-Rips complexes are created from pairwise distances, there is no change in the

homology of f−1(−∞, t] for t ≤ 0, and all birth times in the persistence diagram are at least

0. Additionally, for dimension 0, the persistence diagram has the same size as the original


Figure 2.3: (a) Functional approximation of the points in (c) using a kernel density estimator. (b) Persistence diagram created using sublevel set persistence. (c) Point cloud. (d) Vietoris-Rips complex (orange points and blue line segment) for the shown radius. (e) A 1-dimensional hole is formed in the Vietoris-Rips complex. (f) The persistence diagram created from Vietoris-Rips complexes.


Figure 2.4: (a) shows the persistence diagram from Figure 2.3 (f), and (b) shows the same diagram transformed by (birth, death) ↦ (birth, persistence).

dataset, if it is finite. This is because each point is itself a connected component at birth

time 0. This also gives that the birth = 0 for all 0-dimensional points.
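For Vietoris-Rips filtrations, the 0-dim diagram just described can be read off a minimum spanning tree: every point is born at 0, and by the Elder Rule a component dies at each MST edge weight, with one class surviving forever. A sketch assuming distinct points (scipy drops zero-distance edges; the function name is ours):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def zero_dim_diagram(X):
    """0-dim Vietoris-Rips persistence diagram of a point cloud.
    Births are all 0; deaths are the MST edge weights, since each MST edge
    merges two components and, by the Elder Rule, kills the younger one."""
    X = np.asarray(X, dtype=float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    mst = minimum_spanning_tree(D).toarray()
    deaths = sorted(mst[mst > 0])
    # one class per point; the oldest component never dies (death = inf)
    return [(0.0, d) for d in deaths] + [(0.0, np.inf)]

# Three points on a line at 0, 1, 3: components merge at distances 1 and 2.
print(zero_dim_diagram([[0.0], [1.0], [3.0]]))
```

The diagram has exactly as many points as the dataset, matching the cardinality observation above.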

For some applications it is advantageous to convert the persistence diagram from

(birth, death) coordinates to (birth, persistence) coordinates, where persistence = death−

birth. This transformation is shown in Figure 2.4.

As mentioned in Chapter 1, there are several ways of converting a persistence diagram

to a vector.

Definition 2.20 ([12]). Given a persistence diagram D = {(bi, di)}, the corresponding persistence landscape function λ : N × R → R is given by λ(k, t) = kmax{f(bi,di)(t)}, where f(bi,di)(t) = max{0, min(t − bi, di − t)}, kmax denotes the k-th largest element, and i = 1, 2, . . . , n for n the cardinality of D.

This gives a representation of the diagram in a Banach space. Sampling over a finite grid

creates a vector in Rn.
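A direct, unoptimized sampling of Definition 2.20 over a grid might look as follows (the function name is ours):

```python
import numpy as np

def landscape(diagram, k, grid):
    """Persistence landscape lambda(k, t) sampled over a grid: at each t,
    evaluate the 'tent' function of every (birth, death) pair and keep
    the k-th largest value (k counted from 1; 0 if fewer than k tents)."""
    vals = []
    for t in grid:
        tents = [max(0.0, min(t - b, d - t)) for b, d in diagram]
        tents.sort(reverse=True)
        vals.append(tents[k - 1] if k <= len(tents) else 0.0)
    return np.array(vals)

# Two overlapping features: the first landscape traces the upper envelope.
grid = np.linspace(0.0, 3.0, 7)
print(landscape([(0.0, 2.0), (1.0, 3.0)], 1, grid))
```

Concatenating the sampled values of λ(1, ·), λ(2, ·), . . . yields the finite-dimensional vector mentioned above.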

Definition 2.21 ([1]). Given a persistence diagram in (birth, persistence) coordinates, D = {(bi, pi)}, the corresponding persistence surface ρD : R² → R is given by

ρD(x, y) = ∑_{(bi,pi)∈D} f(bi, pi) (1/(2πσ²)) e^{−[(x−bi)² + (y−pi)²]/(2σ²)},


where f : R2 → [0,∞) is a weighting function which is 0 along the x-axis, piecewise

differentiable, and continuous.

To create a vector, one can then integrate over each box (or pixel) within a grid in R2.

This representation is called a persistence image.
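A sketch of this pipeline, using the midpoint of each pixel as a simple quadrature rule and the commonly used (but here assumed) weight f(b, p) = p / max p, which vanishes on the x-axis:

```python
import numpy as np

def persistence_image(diagram_bp, grid_x, grid_y, sigma=0.1):
    """Persistence image: integrate the persistence surface over each pixel
    of a grid. diagram_bp holds (birth, persistence) points. The weight
    p / max_p is one simple choice vanishing on the x-axis."""
    pts = np.asarray(diagram_bp, dtype=float)
    max_p = pts[:, 1].max()
    # evaluate the surface at pixel centers (a midpoint quadrature shortcut)
    cx = (grid_x[:-1] + grid_x[1:]) / 2
    cy = (grid_y[:-1] + grid_y[1:]) / 2
    img = np.zeros((len(cy), len(cx)))
    for b, p in pts:
        w = p / max_p
        gx = np.exp(-((cx - b) ** 2) / (2 * sigma ** 2))
        gy = np.exp(-((cy - p) ** 2) / (2 * sigma ** 2))
        img += w * np.outer(gy, gx) / (2 * np.pi * sigma ** 2)
    # multiply by pixel area to approximate the integral over each pixel
    area = (grid_x[1] - grid_x[0]) * (grid_y[1] - grid_y[0])
    return img * area

grid = np.linspace(0, 1, 11)
img = persistence_image([(0.5, 0.5)], grid, grid, sigma=0.1)
print(img.shape)  # (10, 10); pixel values sum to ~1 for one unit-weight point
```

Flattening `img` row by row gives the vector that is fed to a classifier.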


Chapter 3

A Stable Cardinality Distance for Persistence Diagrams

Distances on the space of persistence diagrams yield a means of comparison between diagrams

and enable distance-based machine learning methods. These distances formalize the notion of

similarity between persistence diagrams. We begin by introducing two traditional distances

for persistence diagrams. The content of this chapter was included as part of [74].

3.1 Background

Definition 3.1. The bottleneck distance between two persistence diagrams X and Y is

given by W∞(X, Y ) = infη:X→Y supx∈X ‖x − η(x)‖∞, where the infimum is taken over

all bijections η, and the points of the diagonal are added with infinite multiplicity to each

diagram.

Definition 3.2. The p-Wasserstein distance between two persistence diagrams X and Y

is given by Wp(X, Y ) = ( infη:X→Y ∑x∈X ‖x − η(x)‖p∞ )^{1/p}, where the infimum is taken over

all bijections η, and the points of the diagonal are added with infinite multiplicity to each

diagram.
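Both distances reduce to an assignment problem once each diagram is augmented with the diagonal projections of the other's points, so that a bijection always exists. A sketch for the p-Wasserstein case using SciPy's Hungarian solver, assuming both diagrams are nonempty (the function name is ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein(X, Y, p=2):
    """p-Wasserstein distance between two finite diagrams, given as lists
    of (birth, death) pairs. Each diagram is padded with the diagonal
    projections of the other's points; diagonal-to-diagonal matches are free."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)

    def diag(P):  # l-infinity projection of points onto the diagonal
        m = (P[:, 0] + P[:, 1]) / 2
        return np.column_stack([m, m])

    Xa = np.vstack([X, diag(Y)])   # both padded diagrams have |X|+|Y| points
    Ya = np.vstack([Y, diag(X)])
    n = len(Xa)
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i >= len(X) and j >= len(Y):
                cost = 0.0                      # diagonal matched to diagonal
            else:
                cost = np.max(np.abs(Xa[i] - Ya[j]))   # l-infinity ground cost
            C[i, j] = cost ** p
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum() ** (1 / p)

# Matching (0,2) to (0,1.8) costs 0.2; sending both to the diagonal costs more.
print(wasserstein([(0, 2)], [(0, 1.8)], p=2))  # ~0.2
```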

We note that the bottleneck distance between diagrams X and Y is the Wasserstein

distance as p→∞. The Wasserstein and bottleneck distances yield the penalty of matched


points under the optimal bijection. In the case of the bottleneck distance, this is calculated

by taking the largest distance between a point from X and its matched point in Y under the

best bijection. For the Wasserstein distance, this is calculated by taking a sum of all the

matchings raised to a power. An example of this calculation is shown in Figure 3.1.

Points can be matched to the diagonal of each persistence diagram, which is assumed to

have infinitely many points with infinite multiplicity; this confirms that a bijection between

X and Y actually exists, since X and Y may not have the same cardinality. In other words,

the Wasserstein distance gives no explicit penalty for differences in cardinality between two

diagrams. Instead, the Wasserstein distance penalizes unmatched points by using their

distance to the diagonal. However, cardinality differences may play a key role in machine

learning problems, and to that end, the authors of [71] proposed the dcp distance given below.

Definition 3.3. Let X and Y be two persistence diagrams with cardinalities n and m

respectively such that n ≤ m. Let X = {x1, . . . , xn}, Y = {y1, . . . , ym}, and c > 0 and

1 ≤ p <∞ be fixed parameters. The dcp distance between two persistence diagrams X and Y

is

dcp(X, Y ) = ( (1/m) ( min_{π∈Πm} ∑_{ℓ=1}^{n} min(c, ‖xℓ − yπ(ℓ)‖∞)^p + c^p |m − n| ) )^{1/p},   (3.1)

where Πm is the set of permutations of (1, . . . ,m). If m < n, define dcp(X, Y ) := dcp(Y,X).
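Equation (3.1) can likewise be evaluated with an assignment solver: match the smaller diagram into the larger with costs truncated at c, then add c^p for each unmatched point. A sketch (the function name is ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def d_cp(X, Y, c=0.5, p=2):
    """The dcp distance of Equation (3.1): an optimal partial matching of
    the smaller diagram into the larger with per-pair cost min(c, dist)^p,
    plus a c^p penalty for each of the |m - n| unmatched points."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    if len(X) > len(Y):
        X, Y = Y, X           # by definition d_cp(X, Y) := d_cp(Y, X)
    n, m = len(X), len(Y)
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            C[i, j] = min(c, np.max(np.abs(X[i] - Y[j]))) ** p
    rows, cols = linear_sum_assignment(C)
    total = C[rows, cols].sum() + c ** p * (m - n)
    return (total / m) ** (1 / p)

# One matched pair (cost 0) and one unmatched point (penalty c^p):
print(d_cp([(0, 1)], [(0, 1), (5, 6)], c=0.5, p=2))  # sqrt(0.25/2) ~ 0.354
```

Note the behavior stated in Remark 3.1.1: if X is held fixed and |Y| grows, the penalty term dominates and the value tends to c.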

It has been shown [71] that the space of persistence diagrams is separable and complete

under this distance. Thus Fréchet means and variances may be defined using the dcp distance.

This distance has successfully classified persistence diagrams generated from different classes

of acoustic signals.

Remark 3.1.1. Note that this distance can be applied to arbitrary point clouds with

finite cardinality as well. As shown in [71], a smaller c in Equation (3.1) accounts for

local geometric differences, while a larger c focuses on global geometry. It is precisely by

considering differences in cardinality that the dcp distance can distinguish between features

of the point cloud that other distances may miss. Also in Equation (3.1), if X is fixed and

m→∞, then dcp(X, Y )→ c.


Figure 3.1: Consider two persistence diagrams, one given by the green squares and another by the purple circles. (a) The bottleneck distance only considers the largest matching. (b) The Wasserstein distance imposes a cost of 0.1 to the extra purple point (the ℓ∞ distance to the diagonal). (c) The dcp distance imposes a penalty c on the point instead.

3.2 Stability Properties of Persistence Diagram Distances

We now show the stability of the dcp distance. Stability of the distance under investigation

means that small perturbations in the underlying space result in small perturbations of the

generated persistence diagrams. Adopting the approach of estimating a point cloud via a

pertinent function, e.g. a kernel density estimator as in [41], persistence diagrams may be

constructed using sublevel sets as in Figure 2.3. Their differences can be computed using the

Wasserstein and bottleneck distances. Using this functional representation, stability of the

Wasserstein and bottleneck distances has been shown in [27] and [26] by verifying Lipschitz

(and respectively Hölder) continuity of the mapping from the underlying function of the data

to its persistence diagram in the bottleneck and Wasserstein distances.

This stronger, Lipschitz-type stability does not hold for the dcp distance. For functions over the reals, it can

be seen that small perturbations can be added to a function which create new homological

features over the sublevel sets but maintain the same L∞ distance between the functions.

These create arbitrary differences in cardinality of the persistence diagrams that prevent

the dcp distance from having the same Lipschitz property. Considering discrete point clouds


whose distances shrink to zero, Theorem 3.4 shows that the distance between persistence

diagrams goes to zero as well.

Theorem 3.4 (Stability Theorem). Consider c > 0 and 1 ≤ p < ∞. Let A be a finite

nonempty point cloud in Rd. Suppose that {Ai}i∈N is a sequence of finite nonempty point

clouds such that dcp(A,Ai) → 0 as i → ∞. Let Xk and Xki be the k-dim persistence diagrams created from the Vietoris-Rips complexes for A and Ai respectively. Then dcp(Xk, Xki) → 0 as i → ∞.

Note that Theorem 3.4 does not depend on a function created from the points such as

a kernel density estimator as in [41], but simply on the points themselves and the Vietoris-

Rips complex generated from these points. Another way of stating Theorem 3.4 is that

limi→∞ Ai = A in the space of point clouds under the dcp metric implies that limi→∞ Xki = Xk

in the space of persistence diagrams endowed with the dcp metric. In fact, Theorem 3.4 shows

that the mapping from a point cloud to the persistence diagram of its Vietoris-Rips complex

is continuous under the dcp distance. This continuous-type stability result is weaker than

Lipschitz stability. In order to prove Theorem 3.4, we first show that if the dcp distance

between the underlying point clouds goes to 0, then eventually the size of the point clouds

must be the same.

Lemma 3.5. Let A and Ai be as in Theorem 3.4 such that dcp(A,Ai)→ 0 as i→∞. Then

Ai and A have the same number of points for i ≥ N0 for some N0 ∈ N.

Proof. Denote by |A| the number of points in the point cloud A. Suppose that |Ai| 6= |A|

infinitely often. Since dcp(A,Ai) → 0, for every ε > 0, there is an N ∈ N such that i ≥ N

implies that dcp(A,Ai) < ε. Let ε = c/(|A| + 1), noting that |A| is fixed. By assumption |Ai| < |A|,

|Ai| > |A|, or both, infinitely often. If |A| < |Ai|, then by Def. 3.3

dcp(A,Ai) ≥ ( c^p (|Ai| − |A|)/|Ai| )^{1/p} ≥ c (|Ai| − |A|)/|Ai|.   (3.2)

The function h : N → R given by h(z) = (z − |A|)/z is strictly increasing. Whenever |A| < |Ai|,

we have |Ai| ≥ |A| + 1. The restriction of h to {|A| + 1, |A| + 2, |A| + 3, . . .} achieves its

minimum at |A|+ 1. This shows that the RHS of Equation (3.2) is greater than or equal to

c/(|A| + 1)

, whenever |A| < |Ai|, which by assumption happens infinitely often. This contradicts

dcp(A,Ai) < ε for all i ≥ N . The case where |A| > |Ai| follows similarly.

Lemma 3.6. Let A and Ai be as in Theorem 3.4. Suppose the points of each point cloud

Ai are ordered so that Ai = {aπi(1), aπi(2), . . . , aπi(|A|)}, where πi is the permutation used

to calculate the dcp distance between Ai and A as in Equation (3.1). Let DA and DAi be

the distance matrices for the points of A and Ai respectively, i.e., the kl-th entry of DA is

‖ak − al‖d. Then,

1. ‖DA −DAi‖∞ → 0 as i→∞, and

2. for some N1 ∈ N, the order of the entries of the upper triangular portion of DA and

DAi is the same for i ≥ N1, up to permutation when either DA or DAi have duplicate entries.

Proof. (i) Let A = {a1, . . . , ak}, Ai = {ai1, . . . , aik}, and λiα = ‖aα − ai_{πi(α)}‖d for the permutation πi in the dcp distance between Ai and A. Suppose that dcp(A,Ai) → 0. Since c is fixed, by Lemma 3.5 there is some Nc such that dcp(Ai, A) = ( (1/|A|) min_{πi∈Π|A|} ∑_{ℓ=1}^{|A|} ‖aℓ − ai_{πi(ℓ)}‖d^p )^{1/p} for i ≥ Nc. By assumption dcp(A,Ai) → 0, which shows that |A|^{−1/p} ‖λi‖p → 0 as i → ∞. Thus ‖λi‖p → 0 as i → ∞.

Now, let E = DA −DAi.

‖E‖∞ = maxk,l

∣∣‖ak − al‖d − ‖aik − ail‖d∣∣= max

k,l

∣∣‖ak − al‖d + ‖al − aik‖d − ‖al − aik‖d − ‖aik − ail‖d∣∣

≤∣∣‖ak − al‖d − ‖al − aik‖d∣∣+

∣∣‖aik − ail‖d − ‖al − aik‖d∣∣≤ ‖ak − aik‖d + ‖al − ail‖d

This last sum goes to 0 as i→∞, proving (i).

(ii) Suppose that the m distinct upper triangular entries of DA are ordered from smallest to largest, say dA1 < dA2 < · · · < dAm, where m ≤ |A|(|A| − 1)/2. For η ∈ {1, . . . , m + 1} let hη ∈ [0,∞) be a sequence such that h1 < dA1 < h2 < dA2 < · · · < hm < dAm < hm+1. Let ‖DA − DAi‖∞ < h/2, where h = min_{η1,η2∈{1,...,m+1}} |hη1 − hη2|. We show that there exists a sequence gη such that |hη − gη| < 2h for each η ∈ {1, . . . , m + 1} and hη < dAj < hη+1 implies gη < dAij ≤ gη+1. Let hη < dAj < hη+1, and suppose that it is not the case that hη < dAij ≤ hη+1. Since ‖DA − DAi‖∞ < h/2, then either dAij ∈ (hη−1, hη] or dAij ∈ (hη+1, hη+2]. If the first case is true, then take gη = dAj − h/2. If the second, then take gη = dAj + h/2. This proves the existence of the sequence. Now proceeding by contradiction, if the lemma does not hold for some entries dAj ∈ DA and dAij ∈ DAi, then take ‖DA − DAi‖∞ < (1/2)|dAj − dAij|.

Proof of Theorem 3.4. By Lemma 3.5, take |Ai| = |A| without loss of generality. By Lemma

3.6 (i), ‖DA −DAi‖∞ → 0 as i→∞. If the Vietoris-Rips complex were computed at every

threshold value in [0,∞), then the birth and death times of all features of all dimensions

would be distances between points in the underlying point cloud (including the birth time

of 0 in the 0-dim diagram). Since the order of the entries of DA and DAi may be taken to be

the same from Lemma 3.6 (ii), the same number of simplices are formed in the complexes

for A and Ai for each dimension of simplex. Also, the labels of the simplices according to

the points of A and Ai are given from the permutation πi in Lemma 3.6 (i).

Now, for 0-dim it is clear that for the cardinalities of the persistence diagrams, |X0| = |X0i|

since for the sizes of their associated point clouds, |Ai| = |A|. For a higher dimensional feature

(k ≥ 1) to appear in the complex, we must have that a certain number of the distances are

less than or equal to the threshold ε and a certain number of the distances are greater than

ε. Lemma 3.6 (ii) shows that although the thresholds where the features are created may be

different, the same number of features are formed in the Vietoris-Rips complexes of A and

Ai, and these features are formed in the same order and with the points that correspond

under πi.

If Xk = {x1, x2, . . . , x|Xk|} and Xki = {xi1, xi2, . . . , xi|Xki|}, then we have that |Xk| = |Xki| and that dcp(Xk, Xki) < 2h. Thus dcp(Xk, Xki) → 0 as i → ∞.

To provide a practical way to control c in computing the dcp distance of Equation (3.1)

and consequently compute the possible fluctuations of the dcp distance, a probabilistic upper

bound, which relies on least squares, is provided. Specifically, the following analysis gives

predictions on the number of 1-dim holes represented in the persistence diagram, which

we denote by b1. The parameter b1 relies on the number of connected components (or


equivalently the number of points in the point cloud) represented in the persistence diagram,

denoted by b0.

Definition 3.7 ([89]). The kissing number Kd in Rd is the maximum number of nonoverlapping

unit spheres that can be arranged so that each touches another common central unit sphere.

Lemma 3.8 ([48]). For a finite point cloud with no more than ρ points in Rd under the

Euclidean distance, let Md(ρ) denote the maximum possible number of 1-dim holes in the

Vietoris-Rips complex for the point cloud for a given threshold. Then

Md(ρ) ≤ (Kd − 1)ρ. (3.3)
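With the known low-dimensional kissing numbers (K1 = 2, K2 = 6, K3 = 12, K4 = 24), the bounds of Lemma 3.8 and the upper bound used in Proposition 3.9 are easy to tabulate:

```python
# Known kissing numbers in low dimensions.
KISSING = {1: 2, 2: 6, 3: 12, 4: 24}

def max_holes_bound(rho, d):
    """Lemma 3.8: at any fixed threshold, the Vietoris-Rips complex of rho
    points in R^d has at most (K_d - 1) * rho one-dimensional holes."""
    return (KISSING[d] - 1) * rho

def total_holes_bound(rho, d):
    """Over the whole filtration the homology changes at most rho*(rho-1)/2
    times, giving at most (K_d - 1) * rho^2 * (rho - 1) / 2 holes in total."""
    return max_holes_bound(rho, d) * rho * (rho - 1) // 2

print(max_holes_bound(10, 3), total_holes_bound(10, 3))  # 110 4950
```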

Proposition 3.9. Consider a point cloud in Rd with ρ points and its associated persistence

diagram. Let B1 denote the possible range of the number of 1-dim holes b1. Then B1 is such

that {0, 1, . . . , ⌊ρ/2⌋ − 1} ⊆ B1 ⊆ {0, 1, . . . , (1/2)(Kd − 1)ρ²(ρ − 1)}, i.e., the possible range of b1 expands as the number of points, b0, in the point cloud increases.

Proof. We first show the inclusion {0, 1, . . . , ⌊ρ/2⌋ − 1} ⊆ B1. To form a point cloud with ρ points that has b1 = 0, simply take the ρ points and arrange them on a line. To form a point cloud with ρ points that has b1 = ⌊ρ/2⌋ − 1, arrange the ρ points in two rows each with ⌊ρ/2⌋ points. Set the spacing between adjacent points in each of the rows to be 1 and then place the two rows directly beside each other so that for each point in the first row, there is exactly one point in the second row at a distance of 1. Adding edges appropriately creates ⌊ρ/2⌋ − 1 squares with side length 1. Thus, creating the Vietoris-Rips complex and corresponding diagram gives b1 = ⌊ρ/2⌋ − 1. For an illustration of the arrangement, see

Figure 3.2a.

To form a point cloud with ρ points that has b1 ∈ {1, 2, . . . , ⌊ρ/2⌋ − 2}, arrange 2(b1 + 1)

points in two rows as in Figure 3.2a. Arrange the other ρ − 2(b1 + 1) points in a line with

the minimum distance from any points in the line to points of the two rows such that it is

greater than or equal to 1. Then exactly b1 holes are formed from the two rows, with no

holes formed by the line. For an illustration, see Figure 3.2b.
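The two-row arrangement used in the proof can be generated directly; the sketch below (names are ours) returns the point cloud together with the number of unit squares it creates, i.e. the intended b1:

```python
import numpy as np

def two_row_cloud(rho):
    """The two-row arrangement from the proof of Proposition 3.9:
    floor(rho/2) points in each of two rows at unit spacing (any leftover
    point set off at distance >= 1), forming floor(rho/2) - 1 unit squares."""
    k = rho // 2
    top = [(float(x), 1.0) for x in range(k)]
    bottom = [(float(x), 0.0) for x in range(k)]
    extra = [(k + 1.0, 0.0)] if rho % 2 else []   # the leftover point, if any
    return np.array(top + bottom + extra), k - 1

cloud, holes = two_row_cloud(8)
print(len(cloud), holes)  # 8 3
```

Feeding such clouds to a persistent homology package would confirm b1 = ⌊ρ/2⌋ − 1 in the 1-dim diagram, matching Figure 3.2a.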


Figure 3.2: An example of 8-point arrangements to visualize the proof of Proposition 3.9. (a) A 3-hole configuration vs. (b) a 2-hole configuration.

Next, we verify the inclusion B1 ⊆ {0, 1, . . . , (1/2)(Kd − 1)ρ²(ρ − 1)}. By Lemma 3.8, the number of 1-dim holes in the Vietoris-Rips complex for a fixed radius ε for the point cloud is bounded above by (Kd − 1)ρ. The homology of the Vietoris-Rips complex changes at most ρ(ρ − 1)/2 times as the radius ε increases, due to the at most ρ(ρ − 1)/2 distinct distances between points in the point cloud. Therefore, there can be at most (1/2)(Kd − 1)ρ²(ρ − 1) 1-dim holes formed over the entire evolution of the Vietoris-Rips complex. This gives the desired bound of b1 ≤ (1/2)(Kd − 1)ρ²(ρ − 1).

Now, let N point clouds be generated from some process, and N corresponding persistence

diagrams be created. For each persistence diagram Xki , k ∈ {0, 1}, i = 1, . . . , N , record the

cardinality bi0 of the 0-dim diagram and the cardinality bi1 of the 1-dim diagram. Let b0 ∈

RN×2 be the predictor matrix whose rows are [1 bi0] and b1 ∈ RN be the vector of responses

with entries bi1. Proposition 3.9 gives that the possible range of b1 is increasing as b0 grows,

which yields that an increase in variance as b0 grows may be present, i.e., heteroscedasticity

exists. Thus the analysis of the change in number of 1-dim holes as the size of the point

cloud changes needs to account for heteroscedasticity in order to capture the non-constant

variance behavior.

Therefore to estimate the number of 1-dim holes, we use weighted least squares as in [37].

If W ∈ RN×N is the weight matrix W = diag(a1, . . . , aN), then a weighted least-squares

regression can be found for b1 = b0γ + ε, where εi ∼ N(0, σ²ᵢ). The fitted values are then given by b̂1 = b0γ̂, with γ̂ = (b0ᵀWb0)⁻¹b0ᵀWb1. In turn, Proposition 3.10 provides

bounds from prediction intervals using weighted least squares for the dcp distance. Note that


the same analysis could be used for a transformation of the predictor b0, as we see later in

Figure 3.5.
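The weighted least squares fit and prediction interval used in Proposition 3.10 can be sketched as follows, with weights wᵢ = 1/b0ᵢ so that Var(εᵢ) ∝ b0ᵢ (the function name and the synthetic usage below are ours, for illustration only):

```python
import numpy as np
from scipy import stats

def wls_prediction_interval(b0, b1, b0_new, alpha=0.05):
    """Weighted least squares fit of b1 on [1, b0] with weights w_i = 1/b0_i
    (variance grows with point-cloud size), plus a (1 - alpha) prediction
    interval for a new diagram with b0_new connected components."""
    b0, b1 = np.asarray(b0, float), np.asarray(b1, float)
    N = len(b0)
    Xd = np.column_stack([np.ones(N), b0])
    W = np.diag(1.0 / b0)
    XtWX = Xd.T @ W @ Xd
    gamma = np.linalg.solve(XtWX, Xd.T @ W @ b1)
    resid = b1 - Xd @ gamma
    s2 = resid @ W @ resid / (N - 2)        # unbiased estimate of Var(eps)
    x_new = np.array([1.0, b0_new])
    # prediction variance: estimation uncertainty + new-observation noise
    var_pred = s2 * (x_new @ np.linalg.solve(XtWX, x_new) + b0_new)
    t = stats.t.ppf(1 - alpha / 2, N - 2)
    fit = x_new @ gamma
    return fit, (fit - t * np.sqrt(var_pred), fit + t * np.sqrt(var_pred))

# Synthetic heteroscedastic data mimicking Figs. 3.5a, 3.5b.
rng = np.random.default_rng(0)
b0s = rng.integers(10, 100, 200).astype(float)
b1s = 2 + 0.5 * b0s + rng.normal(0, np.sqrt(b0s))
fit, (lo, hi) = wls_prediction_interval(b0s, b1s, 50.0)
print(fit, lo, hi)
```

The half-length of this interval is exactly the quantity substituted for the cardinality penalty in the bound of Proposition 3.10.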

Proposition 3.10. Suppose N point clouds are generated from a process, and N correspond-

ing persistence diagrams are created. For each persistence diagram Xki , k ∈ {0, 1}, record the

cardinality of the 0-dim diagram bi0 and of the 1-dim diagram bi1. Let b0 ∈ RN×2 be the

predictor matrix whose rows are [1 bi0] and b1 ∈ RN be the vector of responses of bi1. Assume

the model b1 = b0γ + ε, where εi ∼ N (0, σ2i ) depends on the value of the input bi0. Let

X1 and Y 1 be persistence diagrams generated from the same process as b0 with |X0| = µ.

Considering the (1 − α) · 100%-level prediction interval for b1, the distance dcp(X1, Y 1) is

bounded above by

( min_{π∈Πm} ∑_{ℓ=1}^{n} min(c, ‖x1ℓ − y1π(ℓ)‖∞)^p + c^p · 2 t_{1−α/2, N−2} s √( [1 µ](b0ᵀWb0)⁻¹[1 µ]ᵀ + µ ) )^{1/p}.

Proof. Prediction intervals can be constructed for the cardinality of a 1-dim diagram for an instance of point cloud size b0∗ using standard results on weighted least squares. Specifically, for level (1 − α) · 100% a prediction interval for the new response b1∗ is sought. To calculate this interval from the mean predicted response b̂1∗ = [1 b0∗]γ̂, note that b1∗ − b̂1∗, standardized by its estimated standard error, follows a tN−2 distribution. Also, Var(b1∗ − b̂1∗) = Var(ε)[1 b0∗](b0ᵀWb0)⁻¹[1 b0∗]ᵀ + Var(ε)/w∗, where w∗ = 1/b0∗ is the weight corresponding to b0∗. Prediction intervals for b1∗ are thus b̂1∗ ± t_{1−α/2, N−2} s √( [1 b0∗](b0ᵀWb0)⁻¹[1 b0∗]ᵀ + b0∗ ), where s² = ε̂ᵀWε̂/(N − 2) is the unbiased estimator for Var(ε), using the residuals ε̂. Thus the cardinality difference term in the calculation of the dcp distance as in Equation (3.1) is bounded above by the length of the prediction interval with (1 − α) · 100%-level confidence. Substituting this length into Equation (3.1) gives the result.


(a) BCC cell (b) FCC cell

Figure 3.3: Example of body-centered cubic (BCC) (a) and face-centered cubic (FCC) (b) unit cells without additive noise or sparsity. Notice there is an essential topological difference between the two structures: the body-centered cubic structure has one atom at its center, whereas the face-centered cubic is hollow in its center but has one atom in the middle of each of its faces.

3.3 Distance-Based Machine Learning for Materials Data

Here we describe an application of TDA and the dcp distance to classification of crystal

structures of high entropy alloys (HEAs) using atom probe tomography (APT) experiments.

For HEAs, APT gives a snapshot of the local atomic

environment. APT has two main drawbacks: experimental noise and missing data.

Approximately 65% of the atoms in a sample are not registered in a typical experiment,

and those atoms that are captured have their spatial coordinates corrupted by experimental

noise. As noted by [61] and [79], APT has a spatial resolution of approximately the length of

the unit cell we consider, as seen in Figure 3.3. Hence the process is unable to see the finer

details of a material, making the determination of a lattice structure a challenging problem.

Existing algorithms for detecting the crystal structure ([25, 53, 57, 65, 83, 104]) are not able

to establish the crystal lattice of an APT dataset.


Figure 3.4: Example of persistence diagrams generated by (a) a BCC lattice and (b) an FCC lattice. The data has a noise standard deviation of τ = 0.75 and 67% of the atoms are missing. Note that the BCC diagram has two prominent (far from the diagonal) points representing 1-dim holes and fewer connected components and 1-dim holes than does the FCC diagram.

We consider such materials that are either body-centered cubic (BCC) or face-centered

cubic (FCC), as these lattice structures are the essential building blocks of HEAs ([113]) and

have fundamental differences that set them apart in the case of noise-free, complete materials

data. The BCC structure has a single atom in the center of the cube, while the FCC has a

void in its center but has atoms on the center of the cubes’ faces, see Figure 3.3. These two

crystal structures are distinct when viewed through the lens of TDA. Differentiating between

the holes and connectedness of these two lattice structures allows us to create an accurate

classification rule. This fundamental distinction between BCC and FCC point clouds is

captured well by persistent homology.

However, topology alone is insufficient to distinguish between noisy and sparse BCC

and FCC lattice structures accurately. If we count the number of atoms in a unit cell (see

Figs. 3.3a, 3.3b) one may see that a BCC unit cell has two atoms, one at the center and

1/8th of an atom at the unit cell’s corners, as it shares part of these corner atoms with its

neighboring cells. Similarly, an FCC unit cell has four atoms; the same 1/8th of the corner

atoms plus one-half of each of the six atoms on the cell’s faces. In both cases, the atoms on

the faces and lattice points are shared with the cell’s neighbors and are only counted as a

proportion contributing to the unit cell.


Another way to see this difference in cardinality is by plotting the number of connected

components against the number of holes for both BCC and FCC crystal structures. For these

comparisons we use 1000 atomic neighborhoods (500 FCC and 500 BCC) extracted from a

sample containing approximately 10,000 atoms with atoms removed to create sparsity and

Gaussian noise added to mirror the levels found in true APT experimental data. To create

these neighborhoods, we consider a fixed volume around each atom in the perturbed sample

and those atoms within the volume are recorded. Observe in Figure 3.4 that the number

of connected components and 1-dim holes are greater in the FCC diagrams than the BCC

diagrams. Also, Figs. 3.5c, 3.5d depict that FCC structures have larger point clouds and, consequently, a greater number of connected components. We must therefore account

for more than just 1-dim homological differences when considering persistence diagrams

derived from these atomic neighborhoods. Variability in the size of the underlying point

clouds must be considered, as verified in Proposition 3.10. Given the salient topological

and cardinality differences between these two crystal structures, we seek to classify their

associated persistence diagrams via these essential differences. To that end, we consider the

dcp distance given in Equation (3.1).

As demonstrated in Proposition 3.10, there is a relationship between the number of

connected components, b0, (number of atoms in this case) and the number of 1-dim

homological features, b1, in the persistence diagrams. Figs. 3.5a, 3.5b demonstrate this

relationship, as well as the presence of heteroscedasticity between b0 and b1, also verified by

the Breusch-Pagan test for heteroscedasticity ([11]) with a p-value of 9.3 × 10⁻⁵⁴ for FCC cells and a p-value of 2.01 × 10⁻⁴⁷ for BCC cells. Figs. 3.5a, 3.5b also provide 95% prediction

intervals for b1 based on the weighted least squares regression analysis of Proposition 3.10.

This fine balance between the number of atoms in a neighborhood and

the associated topology created by the positions of these atoms in the cubic cell is captured

by the dcp distance.

As shown in [74], a classifier using the dcp distance outperformed the same classifier

using the Wasserstein distance on the classification of this synthetic dataset of 1000 atomic

neighborhoods. Details of the performance are shown in Figure 3.6.


Figure 3.5: Top: Number of connected components (in this case atoms), b0, against the number of 1-dim homological features, b1, of the persistence diagrams. One can see the presence of heteroscedasticity since the variance of b1 increases as b0 increases. The diagrams were generated from the crystal structures with N(0, 0.75²) noise and 67% of atoms missing. Bottom: Same as in top but using a quadratic transformation of the predictor variable, along with the weighted least squares fit line and 95% prediction intervals provided by Proposition 3.10.


Figure 3.6: Accuracy scores for dcp (red; p = 2, c = 0.05), Wasserstein (blue; p = 2), and counting (green) classifiers, plotted against different standard deviations, τ, of the normally distributed noise of the atomic positions. In each instance, the sparsity has been fixed at 67% of the atoms missing, as in a true APT experiment.


3.4 Conclusion

This chapter has combined statistical learning and topology to analyze the crystal

structure of high entropy alloys using atom probe tomography (APT) experiments. These

APT experiments produce a noisy and sparse dataset, from which we extract atomic

neighborhoods, i.e., atoms within a fixed volume forming a point cloud, and apply the

machinery of TDA to these point clouds. Viewed through the lens of TDA, these point

clouds are a rich source of topological information. The analysis was based on features

derived from the new distance on persistence diagrams, denoted herein by $d_c^p$. This distance

is different from all other existing distances on persistence diagrams in that it explicitly

penalizes differences in cardinality between diagrams.

We have proved a stability result for the $d_c^p$ distance, demonstrating that small perturbations of the underlying point clouds result in small changes to the $d_c^p$ distance.

We also provided guidance for the choice of the c parameter by looking at confidence bounds

using a function of the cardinalities of the persistence diagrams. The results presented herein

could aid materials science researchers by providing a previously unavailable representation

of the local atomic environment of high entropy alloys from APT data.


Chapter 4

Cardinality-Driven Bayesian

Framework for Persistence Diagrams

This chapter proposes a generalized Bayesian framework for persistence diagrams that provides posterior spatial and cardinality distributions for the diagrams. The framework adopts an independent and identically distributed cluster point process characterization of persistence diagrams, which provides the flexibility to estimate the posterior cardinality of points in a persistence diagram

and their posterior spatial distribution simultaneously. We present a closed form of the

posterior spatial and cardinality distributions under the assumption of a conjugate family of

Gaussian mixtures for the spatial distribution. Based on this form, we apply a Bayes factor

classification algorithm on filament network data of plant cells. The content of this chapter

is in preparation for publication [86].

4.1 Background

4.1.1 Problem Description

The active transportation of various particles through intracellular movements is a vital process for the cells of living organisms [92]. Such transportation must be intricately

organized due to the tightly packed nature of the inside of the cell at the molecular level


[10]. The actin cytoskeleton, which consists of actin filaments cross-linked with myosin

motor proteins along with other pertinent binding proteins, is an important component

in plant cells that determines the structure of the cell and provides transport of cellular

components [42, 10]. Although researchers have investigated the molecular features of actin

cytoskeletons [99, 97, 42, 82], the underlying process that determines their structures and how

these structures are linked to intracellular transport remains undetermined [103, 69].

The inherent variation in the size, density, and positioning of actin filaments produces

local geometric signatures in the cytoskeleton’s network, e.g., a higher number of cross-linking proteins produces thicker actin cables [101]. In this chapter, we classify actin filament

networks to identify the effect of the number of cross-linking proteins on the network. For

example, Fig. 4.1(a) presents an electron micrograph of actin filaments and highlights several sections of the filament network to show the variation in the number of cross-linking proteins. The green section appears brighter than the rest, as it includes thicker actin cables. Due

to the challenges in investigating actin filaments in vivo, simulations of actin cytoskeleton

networks provide useful tools for studying such structures [91]. We focus on classifying

simulated networks with varying numbers of cross-linking proteins (see Fig. 4.1 (b-d)).

These classes have fundamental topological differences, e.g., loops in the networks in

Fig. 4.1 (c) and (d) are more prominent than those in Fig. 4.1 (b). This difference in

classes reflects the role of binding proteins in linking actin filaments together into bundles

and networks. With more cross-linking proteins available, the cell has networks with many

binding locations, which create thicker cables and larger loops within the whole structure

of the actin cytoskeleton. When viewed through the lens of TDA, which has already been

established as an efficient technique for studying cells [96], these networks are distinct due to

the presence and size of loops. The differences in empty space and connectedness across these networks allow us to distinguish them using topological methods. When combined

with Bayesian inference, a tool successfully used for studying structures within cells [77],

TDA gives an accurate classifier.

In this chapter, we quantify the variability of persistence diagrams through a novel

Bayesian framework by considering persistence diagrams as a collection of points distributed

on a pertinent domain space, where the distribution of the number of points is also an


Figure 4.1: (a) is an electron micrograph of an actin filament. (b), (c), and (d) are samples from three classes with number of cross-linking proteins $N = 825$, $1650$, and $3300$, respectively. Notice there is an essential topological difference between the networks in (b), (c), and (d): the loops in the networks in (c) and (d) are more prominent than those in (b).

important feature. This setting leads us to view a persistence diagram through the lens

of an independent and identically distributed (i.i.d.) cluster point process [31]. An i.i.d.

cluster point process consists of points that are i.i.d. according to a probability density

but have an arbitrary cardinality distribution. For example, an i.i.d. cluster point process

reduces to the classical Poisson point process if the cardinality of the points in a persistence diagram follows a Poisson distribution. Furthermore, the cardinality of the

points is implicitly given by integrating the intensity of the Poisson point process. This also

yields that the variance will be equal to the mean in the Poisson setting, and leads to an

estimation of cardinality with high variance whenever the number of points in a persistence

diagram is high. Modeling persistence diagrams as i.i.d. cluster point processes allows us to

estimate the spatial and the cardinality components of the distribution simultaneously. This

is critical, as the importance of cardinality in persistence diagrams has been underlined

in problems related to statistics and machine learning [41, 71, 74, 62].
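A short simulation makes the Poisson-versus-arbitrary-cardinality distinction concrete. The sketch below is illustrative only (the spatial law and all parameters are invented): it samples diagrams from an i.i.d. cluster point process by first drawing a cardinality and then drawing that many i.i.d. birth-persistence points, and it compares a binomial cardinality model with a Poisson one, for which the variance is pinned to the mean.

```python
import random

def sample_iid_cluster(card_sampler, point_sampler):
    """One realization of an i.i.d. cluster point process:
    draw a cardinality n, then n i.i.d. points from the spatial law."""
    n = card_sampler()
    return [point_sampler() for _ in range(n)]

# Hypothetical spatial law: uniform birth in [0, 1], exponential persistence.
def birth_pers_point():
    return (random.random(), random.expovariate(2.0))

random.seed(1)
# Binomial cardinality (as in the conjugate model of Section 4.2.3)...
binom = lambda: sum(random.random() < 0.3 for _ in range(20))
# ...versus Poisson cardinality, which forces variance == mean.
def poisson(lam=6.0):
    # Knuth's method for a single Poisson draw.
    l, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while p > l:
        k += 1
        p *= random.random()
    return k - 1

diagrams_b = [sample_iid_cluster(binom, birth_pers_point) for _ in range(2000)]
diagrams_p = [sample_iid_cluster(poisson, birth_pers_point) for _ in range(2000)]

def mean_var(ns):
    m = sum(ns) / len(ns)
    return m, sum((n - m) ** 2 for n in ns) / len(ns)

print(mean_var([len(d) for d in diagrams_b]))  # variance < mean (binomial)
print(mean_var([len(d) for d in diagrams_p]))  # variance close to mean (Poisson)
```

Both models here have mean cardinality 6, but only the binomial one is free to have a smaller variance; this is the extra flexibility the i.i.d. cluster characterization buys.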

Our Bayesian framework, relying on the theory of marked point processes, yields

a posterior distribution of persistence diagrams. In particular, we quantify the prior

uncertainty given spatial and cardinality distributions of an i.i.d. cluster point process.

In applications, one may select an informative prior based on expert opinion or alternatively

choose an uninformative prior. The likelihood in our Bayesian

model represents the level of belief that observed diagrams are representative of the entire


population and is defined through marked point processes. This framework estimates the

posterior cardinality and spatial distributions simultaneously, which provides a complete

knowledge of the posterior distribution. However, from a computational perspective, the

estimation of both of these components is complex because fewer assumptions are made about the probability distribution of the number of points in a persistence diagram. For example, the

authors of [76] ignored the cardinality component by considering a Poisson point process

instead of an i.i.d. cluster point process. That approach can reduce the computational burden, but it approximates the cardinality distribution from the intensity estimate.

Herein, we focus on developing a Bayesian framework that is capable of estimating the

cardinality distribution without relying on the intensity.

Another key contribution of this chapter is the derivation of a closed form of the

posterior spatial distribution, which relies on a conjugate family of Gaussian mixture priors,

and a closed form for the posterior cardinality, which uses binomial distributions. The

direct benefits of this closed form solution of the posterior distribution are two-fold: (i)

it demonstrates the computational tractability of the proposed Bayesian model and (ii) it

provides a means to develop a robust classification scheme through Bayes factors, which

provides a measure of confidence in a particular classification outcome. To showcase the

applicability of our Bayesian paradigm for persistence diagrams, we explore the classification

problem for filament networks in plant cells.

4.1.2 I.I.D Cluster Point Processes

This section contains basic definitions and fundamental theorems related to i.i.d. cluster

point processes. Detailed treatments of i.i.d. cluster point processes can be found in [31]

and references therein. Throughout this section, we let $X$ be a Polish space and $\mathcal{X}$ be its Borel $\sigma$-algebra.

Definition 4.1. A finite point process $(\{p_n\}, \{P_n(\bullet)\})$ consists of a cardinality distribution $p_n$ with $\sum_{n=0}^{\infty} p_n = 1$ and a symmetric probability measure $P_n$ on $\mathcal{X}^n$, where $\mathcal{X}^0$ is the trivial $\sigma$-algebra.


To sample from a point process, first one draws an integer $n$ from the cardinality distribution $p_n$. Then the $n$ points $(x_1, \ldots, x_n)$ are spatially distributed according to a draw from $P_n$. Since point processes model unordered collections of points, we need to ensure that $P_n$ assigns equal weights to all $n!$ permutations of $(x_1, \ldots, x_n)$. The requirement in Def. 4.1 that $P_n$ be symmetric ensures this. A natural way to work with a random collection of points is the Janossy measure, which combines the cardinality and spatial distributions while disregarding the order of the points.

Definition 4.2. For disjoint rectangles $A_1, \ldots, A_n$, the Janossy measure for a finite point process is given by $J_n(A_1 \times \cdots \times A_n) = n!\, p_n P_n(A_1 \times \cdots \times A_n)$.

Next we define the i.i.d. cluster point process that is considered in this chapter to model the uncertainty of persistence diagrams.

Definition 4.3. An i.i.d. cluster point process $\Psi$ is a finite point process whose points: (i) are located in $X = \mathbb{R}^d$, (ii) have a cardinality distribution $p_n$ with $\sum_{n=0}^{\infty} p_n = 1$, and (iii) are distributed according to some common probability measure $F(\cdot)$ on the Borel $\sigma$-algebra $\mathcal{X}$.

We consider Janossy measures $J_n^{\Psi}$ for the point process $\Psi$ that admit densities $j_n$ with respect to a reference measure on $X$, due to their intuitive interpretation. In particular, for an i.i.d. cluster point process $\Psi$, if $F(A) = \int_A f(x)\,dx$ for any $A \in \mathcal{X}$, then $j_n(x_1, \ldots, x_n) = p_n\, n!\, f(x_1) \cdots f(x_n)$ determines the probability density of finding the $n$ points at their respective locations according to $F$. The $n!$ term gives the number of ways the points could be at these positions. For a finite measure $\Lambda$ on $X$ that admits the density $\lambda$, we also have $f(x) = \lambda(x)/\Lambda(X)$. The intensity is the point process analog of the first-order moment of a random variable. In particular, the intensity density $\lambda$ is the density of the expected number of points per unit volume. Hereafter, we sufficiently characterize our i.i.d. cluster point processes with intensity and cardinality measures.
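As a concrete numerical illustration of the Janossy density factorization $j_n = p_n\, n!\, \prod_i f(x_i)$ (the cardinality pmf and spatial density below are invented for the example, not taken from the dissertation):

```python
from math import factorial

def janossy_density(points, p_card, f):
    """j_n(x_1,...,x_n) = p_n * n! * f(x_1) * ... * f(x_n)
    for an i.i.d. cluster point process with cardinality pmf p_card
    and common spatial density f."""
    n = len(points)
    val = p_card(n) * factorial(n)
    for x in points:
        val *= f(x)
    return val

# Toy example: cardinality supported on {0,...,4}, uniform density on [0, 1].
p_card = lambda n: [0.2, 0.3, 0.25, 0.15, 0.1][n] if n <= 4 else 0.0
f_unif = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0

print(janossy_density([0.2, 0.7], p_card, f_unif))  # 0.25 * 2! * 1 * 1 = 0.5
```

The $n!$ factor is exactly the count of orderings of the unordered configuration, and configurations whose cardinality is outside the support of $p_n$ get density zero.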

Next, we define the marked point process, which provides a formulation for the likelihood

model used in our Bayesian setting. Let $M$ be a Polish space that represents the mark space, and let its Borel $\sigma$-algebra be $\mathcal{M}$.

Definition 4.4. A marked i.i.d. cluster point process $\Psi_M$ is a finite point process on $X \times M$ such that: (i) $\Psi = (\{p_n\}, \{P_n(\bullet)\})$ is an i.i.d. cluster point process on $X$, and (ii) for a realization $(\mathbf{x}, \mathbf{m}) \in X \times M$, the marks $m_i$ of each $x_i \in \mathbf{x}$ are drawn independently from a given stochastic kernel $\ell(\bullet \mid x_i)$.

Remark 4.1.1. A marked point process $\Psi_M$ is a bivariate point process where one point process is parameterized by the other. Therefore, if the cardinalities of $\mathbf{x}$ and $\mathbf{m}$ are equal, then the conditional density for $\mathbf{m}$ is $\ell(\mathbf{m}|\mathbf{x}) = \frac{1}{n!} \sum_{\pi \in S_n} \prod_{i=1}^{n} \ell(m_i | x_{\pi(i)})$. Otherwise, the density can be taken as $0$.

To calculate the posterior distributions for the Bayesian analysis, the probability

generating functional (PGFL) is used. The PGFL is a point process analogue of the

probability generating function (PGF) of random variables. Intuitively, the point process

can be characterized by the functional derivatives of the PGFL [84].

Definition 4.5. Let $\Psi$ be a finite point process on $X$ and let $H$ be the Banach space of all bounded measurable complex-valued functions $\zeta$ on $X$. For a symmetric function, i.e., a function in which the order of the arguments is irrelevant, $\zeta(\mathbf{x}) = \zeta(x_1) \cdots \zeta(x_n)$ with $\mathbf{x} \in X^n$, the probability generating functional (PGFL) of $\Psi$ is given by

$$G[\zeta] = \sum_{n=0}^{\infty} \frac{1}{n!} \int_{X^n} \left( \prod_{j=1}^{n} \zeta(x_j) \right) J_n(dx_1 \cdots dx_n). \tag{4.1}$$

For a marked point process $(\Psi, \Psi_M)$ on $(X, M)$ the PGFL is given by

$$G_{(\Psi_M, \Psi)}[\eta, \zeta] = \sum_{n,k \ge 0} \frac{1}{k!\, n!} \int_{X^n} \int_{M^k} \left( \prod_{i=1}^{k} \eta(m_i) \right) \left( \prod_{j=1}^{n} \zeta(x_j) \right) J_k^{(\Psi_M|\Psi)}(d\mathbf{m})\, J_n^{\Psi}(d\mathbf{x}), \tag{4.2}$$

where $\eta$ is a symmetric measurable complex-valued function on $M$. For ease of notation we write $d\mathbf{m} = dm_1 \cdots dm_k$ and $d\mathbf{x} = dx_1 \cdots dx_n$. For an i.i.d. cluster process the PGFL has the form

$$G[\zeta] = g_N\!\left( \int_X \zeta(x) f(x)\, dx \right), \tag{4.3}$$

where $g_N$ is the PGF of the cardinality $N$ and $\zeta$ has the same form as in Def. 4.5.

The final set of tools we need includes the definition and pertinent properties of functional

derivatives that allow us to recover intensity and cardinality of the posterior i.i.d. cluster

point process.


Definition 4.6. For a probability generating functional $G$ as in Eq. (4.3), the gradient derivative of $G$ in the direction of $\gamma$ evaluated at $\zeta$ is given by

$$\delta G[\zeta; \gamma] = \lim_{\varepsilon \to 0} \frac{G[\zeta + \varepsilon \gamma] - G[\zeta]}{\varepsilon}.$$

For $\gamma = \delta_x$, the Dirac delta function centered at $x$, the gradient derivative is called the functional derivative in the direction of $x$, denoted $\delta G[\zeta; x]$.

Remark 4.1.2. The functional derivative satisfies the familiar product and chain rules [70]. For $\mathbf{x} = \{x_1, \cdots, x_m\}$ and a subset $\tilde{\mathbf{x}}$ of $\mathbf{x}$, the product rule for functional derivatives has the following form:

$$\delta (G_1 G_2)[\zeta; \mathbf{x}] = \sum_{\tilde{\mathbf{x}} \subseteq \mathbf{x}} \delta G_1[\zeta; \mathbf{x} \setminus \tilde{\mathbf{x}}]\; \delta G_2[\zeta; \tilde{\mathbf{x}}]. \tag{4.4}$$

Also, for a linear functional $f[\zeta] = \int_X \zeta(x) f(x)\,dx$, the functional derivative in the direction of $z$ is given by $\delta f[\zeta; z] = f(z)$. Using the chain rule, the functional derivative of the PGFL of an i.i.d. cluster point process is given by

$$\delta^{(m)} G[\zeta; \mathbf{x}] = g_N^{(m)}(f[\zeta])\, f(x_1) \cdots f(x_m). \tag{4.5}$$

The following theorem gives the form of the PGFL for a conditional point process and thus for a marked point process. The proof can be found in [100].

Theorem 4.7. Consider the PGFL $G_{(X,Y)}[\eta, \zeta]$ for a marked point process as in Eqn. (4.2) and a finite point process $Y = \{y_1, \cdots, y_m\} \in M$. Then the PGFL of the conditional point process $(X|Y)$ is given by

$$G_{(X|Y)}[\zeta] = \frac{\delta^{(m,0)} G_{(X,Y)}[0; y_1, \cdots, y_m,\ \zeta; \varnothing]}{\delta^{(m,0)} G_{(X,Y)}[0; y_1, \cdots, y_m,\ 1; \varnothing]}, \tag{4.6}$$

where $\delta^{(m,0)} G_{(X,Y)}$ represents the functional derivative of $G$ with respect to the first argument $\eta$ in the $m$ directions $\{y_1, \cdots, y_m\}$ and no derivative with respect to the second argument $\zeta$.

The following definition will be needed for the characterization of the posterior.

Definition 4.8. The elementary symmetric function $e_{K,k}$ is given by $e_{K,k}(\nu_1, \cdots, \nu_K) = \sum_{1 \le i_1 < \cdots < i_k \le K} \nu_{i_1} \cdots \nu_{i_k}$, with $e_{K,0} = 1$ by convention.
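Elementary symmetric functions of this kind reappear throughout the posterior formulas below, and they can be evaluated without enumerating all $\binom{K}{k}$ subsets via the standard one-term-at-a-time recursion. The sketch below illustrates that recursion (an aside, not code from the dissertation) and checks it against brute-force enumeration.

```python
from itertools import combinations
from math import prod

def elem_sym_all(nu):
    """Return [e_{K,0}, ..., e_{K,K}] for the values in nu, built by the
    recursion e_new[k] = e[k] + v * e[k-1] as each value v is absorbed."""
    K = len(nu)
    e = [1.0] + [0.0] * K          # e_{K,0} = 1 by convention
    for v in nu:
        for k in range(K, 0, -1):  # descend so each v is used once
            e[k] += v * e[k - 1]
    return e

def elem_sym_brute(nu, k):
    """Direct definition: sum of products over all k-subsets."""
    return sum(prod(c) for c in combinations(nu, k))

nu = [0.5, 2.0, 1.5, 3.0]
print(elem_sym_all(nu))  # e_{4,0}, ..., e_{4,4}
```

The recursion costs $O(K^2)$ for all orders at once, versus exponential cost for subset enumeration, which matters because in Theorem 4.9 the arguments are one ratio per observed point.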


4.2 Bayesian Inference

In this section, we present the Bayesian framework for persistence diagrams modeled as i.i.d.

cluster point processes. The goal is to provide formulas for the estimation of the posterior

intensity and cardinality using this framework. Necessary modeling assumptions are given

in Subsection 4.2.1, followed by the characterization of the posterior in Subsection 4.2.2.

4.2.1 Model Characterization

An effective characterization of posterior point process modeling of persistence diagrams

must account for the distribution of the points on the domain space, as well as the probability

distribution of the number of points. This is achieved through the i.i.d. cluster point process

model as it provides a comprehensive representation of the posterior (spatial and cardinality

distributions). According to Bayes’ theorem, the posterior density is proportional to the

product of a likelihood function and a prior.

Prior: We consider the prior uncertainty of a persistence diagram $D_X$ as generated by an i.i.d. cluster point process $\mathcal{D}_X$ with intensity $\lambda_{\mathcal{D}_X}$ and cardinality $p_{\mathcal{D}_X}$. However, a point $x \in D_X$ may not be observed in an actual persistence diagram due to the presence of noise, sparsity, or other unexpected scenarios. We address this instance by defining a probability function $\alpha(x)$. In particular, $(1 - \alpha(x))$ describes the probability of not observing $x$, and similarly $\alpha(x)$ is the probability of $x$ being observed.

Data/Likelihood Model: The theory of marked point processes provides a likelihood function for Bayesian inference. For a marked point process, the joint intensity probability density is computed using a stochastic kernel. More specifically, $(\mathbf{x}, \mathbf{m}) \in X \times X_M$ implies that the points $m(x_i) \in \mathbf{m}$ are marks of $x_i \in \mathbf{x}$ that are drawn independently from a stochastic kernel $\ell : X \times X_M \to \mathbb{R}^+ \cup \{0\}$. This kernel, in turn, provides the conditional density $\ell(\mathbf{m}|\mathbf{x})$, which is nonzero only if the cardinalities of $\mathbf{x}$ and $\mathbf{m}$ are equal, as discussed in Remark 4.1.1. In order to fully describe the structure of persistence diagrams, we must define one last point process that models the topological noise in the observed data. Intuitively, this point process consists of the points in the observed diagram that fail to associate with the prior. We define this as an i.i.d. cluster point process $\mathcal{D}_{Y_U}$ with intensity $\lambda_{\mathcal{D}_{Y_U}}$ and cardinality $p_{\mathcal{D}_{Y_U}}$.


Figure 4.2: (a) A sample $D_X$ (triangles) from the prior point process $\mathcal{D}_X$ and an observed persistence diagram $D_Y$ (dots) generated from $\mathcal{D}_Y$. (b-c) Color-coded representations of possible configurations for the observed and vanished features in the prior, along with the marked point process $(\mathcal{D}_{X_O}, \mathcal{D}_{Y_O})$ and the unexpected features of the data $\mathcal{D}_{Y_U}$. (d-e) The effect of $\mathcal{D}_{X_O}$ and $\mathcal{D}_{Y_U}$ on the cardinality distribution of the observation $\mathcal{D}_Y$.


In Figure 4.2 we give a visual representation to illustrate the contribution of the prior and of the observations for the Bayesian framework. For this, we superimpose two persistence diagrams: one is a sample from the prior and the other is an observed persistence diagram (Figure 4.2). Figure 4.2 (b) and (c) show different possible scenarios for the relationship of the prior to the data likelihood. As in the prior point process, any point $x$ in the persistence diagram is equipped with a probability of being observed and of not being observed. We present these two cases as triangles inside of blue and brown boxes, respectively, in Figure 4.2 (b) and (c), and denote them as $D_{X_O}$ (observed) and $D_{X_V}$ (vanished).

$D_Y$, consisting of solid dots, instantiates the actual data for the model. Presumably, any point $x \in D_{X_O}$ has an association with one feature $y$ in the observed persistence diagram $D_Y$, and consequently, all of these associations together constitute the marked point process, which admits a stochastic kernel $\ell(y|x)$. This indicates that the point $x$ may have any point $y \in D_Y$ as its mark, but intuitively some marks should be more likely than others. In Figure 4.2 (b) and (c) we give examples of these associations, which are indicated by pairs inside of the blue box. We denote the marked point process consisting of such pairs by $(D_{X_O}, D_{Y_O})$. However, some of the blue boxes include pairs with $y$ closer to $x$ (Figure 4.2 (b)) and some with $y$ farther from $x$ (Figure 4.2 (c)). The marks of the former kind are more likely, and this is captured by the stochastic kernel $\ell$. The remaining features in the observed persistence diagram $D_Y$ represent unexpected features generated from noise or unanticipated geometry in the prior $\mathcal{D}_X$. This is instantiated in Figure 4.2 (b) and (c) by $D_{Y_U}$, which is represented as dots inside of red boxes. Framing the model in this way allows for the quantification of multiple levels of uncertainty and generates a more accurate representation of the nature of persistence diagrams. It should be noted that persistence diagrams generated for different homological dimensions from the same source are considered pairwise independent of each other.

4.2.2 Posterior Estimation

Using the previous model characterization, we can now derive the posterior expressions which

explicitly give the Bayesian update of the intensity and the cardinality.


Theorem 4.9. Denote the prior intensity and cardinality by $\lambda_{\mathcal{D}_X}$ and $p_{\mathcal{D}_X}$, respectively. If $\alpha(x)$ is the probability of observing a prior feature and $\ell(y|x)$ is the stochastic kernel that links $\mathcal{D}_{Y_O}$ with $\mathcal{D}_{X_O}$ and $\mathcal{D}_{Y_U}$, as discussed in Section 4.2.1, then for a set of independent samples $D_{Y_{1:m}} = \{D_{Y_1}, \cdots, D_{Y_m}\}$ from $\mathcal{D}_Y$ with cardinalities $K_1, \cdots, K_m$, we have the following posterior intensity and cardinality:

$$\lambda_{\mathcal{D}_X|D_{Y_{1:m}}}(x) = \frac{1}{m} \sum_{i=1}^{m} \left[ (1-\alpha(x))\, \lambda_{\mathcal{D}_X}(x)\, G(\varnothing) + \sum_{y \in D_{Y_i}} \frac{\alpha(x)\, \ell(y|x)\, \lambda_{\mathcal{D}_X}(x)}{\lambda_{\mathcal{D}_{Y_U}}(y)}\, G(y) \right] \tag{4.7}$$

$$p_{\mathcal{D}_X|D_{Y_{1:m}}}(n) = \frac{1}{m} \sum_{i=1}^{m} \frac{p_{\mathcal{D}_X}(n)\, \Gamma^{0,0}_{D_{Y_i}}(n)}{\langle p_{\mathcal{D}_X},\, \Gamma^{0,0}_{D_{Y_i}} \rangle}, \tag{4.8}$$

where

$$\Gamma^{a,b}_{D_{Y_i}}(\tau) = \sum_{k=0}^{\min\{K_i-a,\, \tau\}} (K_i - k - a)!\; P^{\tau}_{k+b}\; p_{\mathcal{D}_{Y_U}}(K_i - k - a)\, \big(\lambda_{\mathcal{D}_X}[1-\alpha]\big)^{\tau-k-b}\, e_{K_i-a,\,k}(D_{Y_i}),$$

$$G(\varnothing) = \frac{\langle p_{\mathcal{D}_X},\, \Gamma^{0,1}_{D_{Y_i}} \rangle}{\langle p_{\mathcal{D}_X},\, \Gamma^{0,0}_{D_{Y_i}} \rangle}, \qquad G(y) = \frac{\langle p_{\mathcal{D}_X},\, \Gamma^{1,1}_{D_{Y_i} \setminus y} \rangle}{\langle p_{\mathcal{D}_X},\, \Gamma^{0,0}_{D_{Y_i}} \rangle}, \qquad e_{K_i,k}(D_{Y_i}) = \sum_{S_{Y_i} \subseteq D_{Y_i},\, |S_{Y_i}|=k}\; \prod_{y \in S_{Y_i}} \frac{\lambda_{\mathcal{D}_X}[\alpha\, \ell(y|x)]}{\lambda_{\mathcal{D}_{Y_U}}(y)},$$

$f[\zeta] = \int_X \zeta(x) f(x)\, dx$ is a linear functional, $P^n_i$ is the permutation coefficient, and the sum in $\Gamma^{0,0}_{D_{Y_i}}(n)$ of Equation (4.8) goes from $0$ to $K_i$.

Proof [86]. The theorem states that the persistence diagrams $D_{Y_{1:m}} = D_{Y_1}, \cdots, D_{Y_m}$ are independent samples from the point process $\mathcal{D}_Y$ with cardinalities $K_1, \cdots, K_m$, respectively. Now, for independent and identical copies $\mathcal{D}_X^i$ of the i.i.d. cluster point process $\mathcal{D}_X$, we have intensity $\lambda_{\mathcal{D}_X} = \frac{1}{m} \sum_{i=1}^{m} \lambda_{\mathcal{D}_X^i}$ and cardinality $p_{\mathcal{D}_X} = \frac{1}{m} \sum_{i=1}^{m} p_{\mathcal{D}_X^i}$. Hence, without loss of generality,

$$\lambda_{\mathcal{D}_X|D_{Y_{1:m}}} = \frac{1}{m} \sum_{i=1}^{m} \lambda_{\mathcal{D}_X^i|D_{Y_i}} \quad \text{and} \quad p_{\mathcal{D}_X|D_{Y_{1:m}}} = \frac{1}{m} \sum_{i=1}^{m} p_{\mathcal{D}_X^i|D_{Y_i}}. \tag{4.9}$$


So it is sufficient to compute $\lambda_{\mathcal{D}_X^i|D_{Y_i}}$ and $p_{\mathcal{D}_X^i|D_{Y_i}}$ for fixed $i$. From Eqn. (4.2) we have

$$\begin{aligned} G_{(\mathcal{D}_X^i,\, \mathcal{D}_{Y_i})}[\eta, \zeta] &= \sum_{K_i, n \ge 0} \frac{1}{K_i!\, n!} \int_{X^n} \int_{M^{K_i}} \left( \prod_{l=1}^{K_i} \eta(y_l) \right) \left( \prod_{j=1}^{n} \zeta(x_j) \right) J_{K_i}^{\mathcal{D}_{Y_i}|\mathcal{D}_X}(d\mathbf{y})\, J_n^{\mathcal{D}_X}(d\mathbf{x}) \\ &= \sum_{n \ge 0} \frac{1}{n!} \int_{X^n} \left( \prod_{j=1}^{n} \zeta(x_j) \right) G[\eta|\mathcal{D}_X]\, J_n^{\mathcal{D}_X}(d\mathbf{x}). \end{aligned} \tag{4.10}$$

The second expression is achieved by writing

$$G[\eta|\mathcal{D}_X] = \sum_{K_i \ge 0} \frac{1}{K_i!} \int_{M^{K_i}} \left( \prod_{l=1}^{K_i} \eta(y_l) \right) J_{K_i}^{\mathcal{D}_{Y_i}|\mathcal{D}_X}(d\mathbf{y}),$$

which is the PGFL with respect to $J_{K_i}^{\mathcal{D}_{Y_i}|\mathcal{D}_X}$. Now, to understand the conditional point process $\mathcal{D}_{Y_i}|\mathcal{D}_X$, we need to consider the augmented space $W' = W \cup \{\Delta\}$, where $\Delta$ is a dummy set that will be used for labeling points in $\mathcal{D}_{Y_U}$ [76]. Therefore, the random set $H = \{(x, y) \in (\mathcal{D}_{X_O}, \mathcal{D}_{Y_O})\} \cup \{(\Delta, y) \mid y \in \mathcal{D}_{Y_U}\}$ is a marked i.i.d. cluster point process on $W' \times W$. The independence condition in Def. 4.4 for marks in $W$ thus leads to $G[\eta|\mathcal{D}_X] = G[\eta|x_1] \cdots G[\eta|x_n]\, G[\eta|\Delta]$. As $\mathcal{D}_{Y_U}$ is an i.i.d. cluster point process and has no association with $\mathcal{D}_X$, from Eqn. (4.3) we get $G[\eta|\Delta] = S(f_{\mathcal{D}_{Y_U}}[\eta])$, where $S$ is the PGF of the cardinality distribution of $\mathcal{D}_{Y_U}$. To be consistent with the probability $\alpha(x)$ defined earlier, our Bayesian model deals with two scenarios: either a feature $x$ will not appear in $D_{Y_i}$, with probability $(1 - \alpha(x))$, or each $D_{Y_i}$ contains draws from $\ell(y|x)$ associated to a single sample $x$ of $\mathcal{D}_X$, with probability $\alpha(x)$. Also, by using the fact that the associated Janossy densities have $j_1(y) = \ell(y|x_j)$ and $j_n = 0$ for $n \ne 1$, and using the linearity of the integral, we get $G[\eta|x_j] = 1 - \alpha(x_j) + \alpha(x_j) \int_M \eta(y)\, \ell(y|x_j)\, dy$. Hence, Eqn. (4.10) leads to

$$G[\eta, \zeta] = S\big(f_{\mathcal{D}_{Y_U}}[\eta]\big)\; L\big(\lambda_{\mathcal{D}_X}[\zeta(1 - \alpha + \alpha \ell_g)]\big). \tag{4.11}$$

Here, we denote $\ell_g(x_j) = \int_M \eta(y)\, \ell(y|x_j)\, dy$ and $\lambda_{\mathcal{D}_X}[\zeta] = \int_X \zeta(x)\, \lambda_{\mathcal{D}_X}(x)\, dx$ for simplicity of notation. Also, $L$ is the PGF associated to the PGFL $G[\zeta(1 - \alpha + \alpha \ell_g)]$. Notice that we have the PGFL $G$ as a product of two PGFs. This format helps us to find the functional derivatives in an efficient way so that we obtain the PGFL of the conditional point process


$\mathcal{D}_X|D_{Y_i}$ as in Eqn. (4.6). Hence, by using the linearity of the integral and the chain rule for functional derivatives (Eqn. (4.4)), we obtain

$$G_{(\mathcal{D}_X|D_{Y_i})}[\zeta] = \frac{\displaystyle\sum_{k=0}^{K_i} S^{(K_i-k)}(0)\; L^{(k)}\big(\lambda_{\mathcal{D}_X}[\zeta(1-\alpha)]\big)\; e_{K_i,k}\!\left( \frac{\lambda_{\mathcal{D}_X}[\zeta \alpha \ell(y_1|x)]}{\lambda_{\mathcal{D}_{Y_U}}(y_1)}, \cdots, \frac{\lambda_{\mathcal{D}_X}[\zeta \alpha \ell(y_{K_i}|x)]}{\lambda_{\mathcal{D}_{Y_U}}(y_{K_i})} \right)}{\displaystyle\sum_{k=0}^{K_i} S^{(K_i-k)}(0)\; L^{(k)}\big(\lambda_{\mathcal{D}_X}[1-\alpha]\big)\; e_{K_i,k}\!\left( \frac{\lambda_{\mathcal{D}_X}[\alpha \ell(y_1|x)]}{\lambda_{\mathcal{D}_{Y_U}}(y_1)}, \cdots, \frac{\lambda_{\mathcal{D}_X}[\alpha \ell(y_{K_i}|x)]}{\lambda_{\mathcal{D}_{Y_U}}(y_{K_i})} \right)}. \tag{4.12}$$

To simplify notation, we denote the elementary symmetric function by

$$E_{K_i,k}(D_{Y_i}) = e_{K_i,k}\!\left( \frac{\lambda_{\mathcal{D}_X}[\alpha \ell(y_1|x)]}{\lambda_{\mathcal{D}_{Y_U}}(y_1)}, \cdots, \frac{\lambda_{\mathcal{D}_X}[\alpha \ell(y_{K_i}|x)]}{\lambda_{\mathcal{D}_{Y_U}}(y_{K_i})} \right).$$

If $\zeta \equiv z$ is a constant function, then we obtain the PGF of the posterior cardinality distribution as

$$G_{(\mathcal{D}_X|D_{Y_i})}(z) = \frac{\sum_{k=0}^{K_i} z^k\, S^{(K_i-k)}(0)\; L^{(k)}\big(z \lambda_{\mathcal{D}_X}[1-\alpha]\big)\; E_{K_i,k}(D_{Y_i})}{\sum_{k=0}^{K_i} S^{(K_i-k)}(0)\; L^{(k)}\big(\lambda_{\mathcal{D}_X}[1-\alpha]\big)\; E_{K_i,k}(D_{Y_i})}. \tag{4.13}$$

We derive the cardinality expression first by utilizing the well-known property of the PGF that the probability distribution can be recovered by means of derivatives, and by applying the product rule in (4.4) for acquiring the $n$-th derivative as

$$p_{\mathcal{D}_X^i|D_{Y_i}}(n) = \frac{\sum_{k=0}^{K_i} S^{(K_i-k)}(0)\; \frac{1}{(n-k)!}\, \big(L^{(k)}\big)^{(n-k)}(0)\; \big(\lambda_{\mathcal{D}_X}[1-\alpha]\big)^{n-k}\, E_{K_i,k}(D_{Y_i})}{\sum_{k=0}^{K_i} S^{(K_i-k)}(0)\; L^{(k)}\big(\lambda_{\mathcal{D}_X}[1-\alpha]\big)\; E_{K_i,k}(D_{Y_i})}. \tag{4.14}$$

As $S$ and $L$ are the PGFs of the number of points in $\mathcal{D}_{Y_U}$ and $\mathcal{D}_X$, respectively, by utilizing well-known properties of the PGF we can write

$$S^{(i)}(0) = i!\; p_{\mathcal{D}_{Y_U}}(i) \quad \text{and} \quad L^{(i)}(x) = \sum_{k=i}^{\infty} P^{k}_{i}\; p_{\mathcal{D}_X}(k)\, x^{k-i}, \tag{4.15}$$

where $P$ is the permutation coefficient. Elementary computation thus leads Eqn. (4.14) to the desired form of the posterior cardinality as in Eqn. (4.8).

As is proved in [84], the intensity measure $\Lambda$ of a point process can be obtained by differentiating the corresponding probability generating functional $G$, i.e., $\Lambda(A) = G'(1; 1_A)$, where $1_A$ is the indicator function for any $A \in \mathcal{X}$. Generally speaking, one obtains the intensity measure for a general point process through $\Lambda(A) = \lim_{h \to 1} G'(h; 1_A)$, but the preceding identity suffices for our purposes since we only consider point processes for which


Eqn. (4.12) is defined for all bounded $h$. Hence, we find the required derivative of Eqn. (4.12) as

$$\begin{aligned} \delta G_{(\mathcal{D}_X|D_{Y_{1:m}})}[1; x] = \sum_{i=1}^{m} \Bigg[ & \frac{\sum_{k=0}^{K_i} S^{(K_i-k)}(0)\; L^{(k+1)}\big(\lambda_{\mathcal{D}_X}[1-\alpha]\big)\; E_{K_i,k}(D_{Y_i})}{\sum_{k=0}^{K_i} S^{(K_i-k)}(0)\; L^{(k)}\big(\lambda_{\mathcal{D}_X}[1-\alpha]\big)\; E_{K_i,k}(D_{Y_i})}\, (1-\alpha(x))\, \lambda_{\mathcal{D}_X}(x) \\ & + \sum_{y \in D_{Y_i}} \frac{\alpha(x)\, \ell(y|x)\, \lambda_{\mathcal{D}_X}(x)}{\lambda_{\mathcal{D}_{Y_U}}(y)}\; \frac{\sum_{k=0}^{K_i-1} S^{(K_i-k-1)}(0)\; L^{(k+1)}\big(\lambda_{\mathcal{D}_X}[1-\alpha]\big)\; E_{K_i-1,k}(D_{Y_i} \setminus y)}{\sum_{k=0}^{K_i} S^{(K_i-k)}(0)\; L^{(k)}\big(\lambda_{\mathcal{D}_X}[1-\alpha]\big)\; E_{K_i,k}(D_{Y_i})} \Bigg]. \end{aligned}$$

Similarly, using Eqn. (4.15) gives the form of the posterior intensity $\lambda_{\mathcal{D}_X|D_{Y_{1:m}}}$, and this completes the proof.

In the posterior intensity expression, the two terms reflect the decomposition of the prior intensity. The first term is for the vanished features $D_{X_V}$, where the intensity is weighted by $(1 - \alpha(x))$. The second term corresponds to the observed part $D_{X_O}$, which is similarly weighted by $\alpha(x)$. Due to the arbitrary cardinality distribution assumption for i.i.d. cluster point processes, the first and second terms are also weighted by the two factors $G(\varnothing)$ and $G(y)$, respectively. The term $G(\varnothing)$ is encountered due to the fact that there is no $y \in D_{Y_i}$ to represent the vanished features $D_{X_V}$. On the other hand, $G(y)$ depends on the specific $y \in D_{Y_i}$ to account for the associations between the features in $D_{X_O}$ and those in $D_{Y_i}$. In particular, if $x \in \mathcal{D}_X$ is observed, it can be associated with any of the $y \in D_{Y_i}$, and the rest, $D_{Y_i} \setminus y$, are considered to either be observed from the rest of the features in $\mathcal{D}_X$ or to have originated as unexpected features from $\mathcal{D}_{Y_U}$. The posterior cardinality, given in Equation (4.8), takes the form of a Bayesian update; specifically, a product of the prior cardinality and a likelihood term is divided by a normalizing constant.
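Stripped of the point-process machinery, this Bayesian update is a normalized prior-times-likelihood product over the integer cardinalities. The schematic sketch below illustrates only that arithmetic shape, with a made-up likelihood vector standing in for the $\Gamma^{0,0}_{D_{Y_i}}$ term (nothing here is the dissertation's actual computation):

```python
def cardinality_update(prior, likelihood):
    """Posterior pmf over n: prior(n) * likelihood(n) / <prior, likelihood>.
    `prior` and `likelihood` are lists indexed by the cardinality n."""
    inner = sum(p * g for p, g in zip(prior, likelihood))  # normalizing constant
    return [p * g / inner for p, g in zip(prior, likelihood)]

# Hypothetical prior on n in {0,...,4} and a stand-in likelihood term.
prior = [0.1, 0.2, 0.4, 0.2, 0.1]
gamma = [0.05, 0.1, 0.3, 0.9, 0.6]  # placeholder for Gamma^{0,0}(n)
post = cardinality_update(prior, gamma)
print(post)
```

The inner product in the denominator is exactly the $\langle p_{\mathcal{D}_X}, \Gamma^{0,0}_{D_{Y_i}} \rangle$ normalizer of Equation (4.8); observing data whose likelihood favors larger cardinalities shifts posterior mass upward relative to the prior.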

4.2.3 Conjugate Family of Priors

In this section, we present an analytical implementation of the posterior intensity and

cardinality equation of Section 4.2.2 using a conjugate family of priors. A conjugate family

of priors are chosen for the intensity so that the prior-to-posterior updates yield posterior

distributions of the same family. We use Gaussian mixture densities for modeling the prior

intensity. Likewise, we adopt a binomial distribution for the cardinality, which is often used


for modeling the probability mass function (pmf) of the number of points that fall in a given

region.

The prior is considered as a random persistence diagram $\mathcal{D}_X$, and the observed diagrams $D_{Y_1}, \ldots, D_{Y_m}$ are considered as independent samples of the point process $\mathcal{D}_Y$. Below

we specify the necessary components of Thm. 4.9 so that we can derive the posterior intensity

and cardinality using the conjugate family of priors.

(M1) The expressions for the prior intensity $\lambda_{\mathcal{D}_X}$ and cardinality $p_{\mathcal{D}_X}$ are:

$$\lambda_{\mathcal{D}_X}(x) = \sum_{l=1}^{N} c_l^{\mathcal{D}_X}\, \mathcal{N}^*(x; \mu_l^{\mathcal{D}_X}, \sigma_l^{\mathcal{D}_X} I) \quad \text{and} \quad p_{\mathcal{D}_X}(n) = \binom{N_0}{n} p_x^n (1-p_x)^{N_0-n}, \tag{4.16}$$

where $\binom{N_0}{n}$ is the binomial coefficient, $N$ is the number of components, $\mu_l^{\mathcal{D}_X}$ is the mean, and $\sigma_l^{\mathcal{D}_X} I$ is the covariance matrix of the $l$-th component of the Gaussian mixture. Since persistence diagrams are modeled as point processes on the space $W$ and not on $\mathbb{R}^2$, the Gaussian densities are restricted to $W$ as $\mathcal{N}^*(z; \upsilon, \sigma I) := \mathcal{N}(z; \upsilon, \sigma I)\, \mathbb{1}_{W}(z)$, with mean $\upsilon$ and covariance matrix $\sigma I$, where $\mathbb{1}_{W}$ is the indicator function of $W$. $N_0 \in \mathbb{N}$ is the maximum number of points in the prior point process and $p_x \in [0,1]$ is the probability of one point falling in the wedge $W$. We note that other distributions could also be used for the prior cardinality, as will be demonstrated in Section 4.3.1.

(M2) The likelihood function $\ell(y|x)$, which is the stochastic kernel of the marked i.i.d. cluster point process $(\mathcal{D}_{X_O}, \mathcal{D}_{Y_O})$ as discussed in Section 4.2.1, takes the form

$$\ell(y|x) = \mathcal{N}^*(y; x, \sigma^{D_{Y_O}} I), \tag{4.17}$$

where $\sigma^{D_{Y_O}}$ is the covariance coefficient. This parameter quantifies the level of confidence in the observations.

(M3) The i.i.d. cluster point process DYU consisting of the unexpected features in the

observation has the intensity λDYUand cardinality pDYU

. To define the intensity λDYU

we use an exponential distribution as

44

Page 58: Persistence Diagram Cardinality and its Applications

λDYU(ybirth, ypers) = µ2

DYUe−(µDYU

ybirth+µDYUypers), (4.18)

where µDYUcontrols the rate of decay as p→∞. This distribution for λDYU

considers

points closer to the origin to be more likely to be unexpected features. Points close to

the origin in these 1-dimensional persistence diagrams are often created from spacing

between the point clouds due to sampling. Points with higher persistence or higher

birth represent significant 1-dimensional homological features, so these are counted as

less likely to be noise. The cardinality distribution is

p_{D_{Y_U}}(n) = C(M_0, n) p_y^n (1 − p_y)^{M_0 − n},    (4.19)

where M_0 ∈ N is the maximum number of points in the point process D_{Y_U} and p_y ∈ [0, 1] is the probability that a single point falls in the wedge W. We present examples of this model in Section 4.3. Note that Equation (4.18) simplifies the definition of D_{Y_U} by relying only on one parameter, µ_{D_{Y_U}}.
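The clutter model (M3) can likewise be evaluated numerically. The sketch below implements Equations (4.18) and (4.19) with a hypothetical decay rate µ_{D_{Y_U}} = 2 and the Case-I cardinality values M_0 = 3, p_y = 1/3, and checks that the intensity indeed downweights points far from the origin:

```python
from math import comb, exp

def clutter_intensity(y_birth, y_pers, mu):
    """Exponential clutter intensity of Equation (4.18); its mass decays
    away from the origin, so low-persistence points are the likeliest noise."""
    return mu**2 * exp(-mu * (y_birth + y_pers))

def clutter_cardinality(n, M0, py):
    """Binomial clutter cardinality of Equation (4.19)."""
    return comb(M0, n) * py**n * (1 - py)**(M0 - n)

mu = 2.0  # hypothetical decay rate
near = clutter_intensity(0.1, 0.1, mu)  # point near the origin
far = clutter_intensity(1.0, 2.0, mu)   # prominent feature
assert near > far  # noise is concentrated near the origin
card = [clutter_cardinality(n, 3, 1/3) for n in range(4)]  # Case-I values
```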

Proposition 4.2.1. Suppose that λ_{D_X}, p_{D_X}, ℓ(y|x), λ_{D_{Y_U}}, and p_{D_{Y_U}} take the forms of assumptions (M1)–(M3), and α is fixed. Then the posterior intensity and cardinality of Thm. 4.9 are given by:

λ_{D_X|D_{Y_{1:m}}}(x) = (1/m) ∑_{i=1}^{m} [ (1 − α) λ_{D_X}(x) G(∅) + ∑_{y∈D_{Y_i}} ∑_{l=1}^{N} h_l^{x|y} N*(x; µ_l^{x|y}, σ_l^{x|y} I) ],    (4.20)

p_{D_X|D_{Y_{1:m}}}(n) = (1/m) ∑_{i=1}^{m} p_{D_X}(n) Φ^{0,0}_{D_{Y_i}}(n) / ⟨p_{D_X}, Φ^{0,0}_{D_{Y_i}}⟩,    (4.21)

where


Φ^{a,b}_{D_{Y_i}}(τ) = ∑_{k=0}^{min{K_i−b, τ}} (K_i − k − a)! P^{τ}_{k+b} p_{D_{Y_U}}(K_i − k − a) (γ_{D_X})^{τ−k−b} e_{K_i−a, k}(D_{Y_i});

e_{K_i, k}(D_{Y_i}) = ∑_{S_{Y_i} ⊆ D_{Y_i}, |S_{Y_i}| = k} ∏_{y ∈ S_{Y_i}} α⟨c^{D_X}, q(y)⟩ / λ_{D_{Y_U}}(y);

q_l(y) = N(y; µ_l^{D_X}, (σ_{D_{Y_O}} + σ_l^{D_X}) I);    (4.22)

G(∅) = ⟨p_{D_X}, Φ^{0,1}_{D_{Y_i}}⟩ / ⟨p_{D_X}, Φ^{0,0}_{D_{Y_i}}⟩;

h_l^{x|y} = G(y) α c_l^{D_X} q_l(y) / λ_{D_{Y_U}}(y);

G(y) = ⟨p_{D_X}, Φ^{1,1}_{D_{Y_i}\y}⟩ / ⟨p_{D_X}, Φ^{0,0}_{D_{Y_i}}⟩;

γ_{D_X} = (1 − α) ∑_{l=1}^{N} c_l^{D_X} ∫_W N(x; µ_l^{D_X}, σ_l^{D_X} I) dx;

µ_l^{x|y} = (σ_l^{D_X} y + σ_{D_{Y_O}} µ_l^{D_X}) / (σ_l^{D_X} + σ_{D_{Y_O}});  and  σ_l^{x|y} = σ_{D_{Y_O}} σ_l^{D_X} / (σ_l^{D_X} + σ_{D_{Y_O}}).

The sum in Φ^{0,0}_{D_{Y_i}}(n) of Equation (4.21) goes from 0 to K_i.

The proof of Proposition 4.2.1 follows from Thm. 4.9 and standard results for multivariate

Gaussian mixture densities.

Lemma 4.10. Let H, R, P be p × p matrices, m and d be p × 1 vectors, and suppose that R and P are positive definite. Then ∫ N(y; Hx + d, R) N(x; m, P) dx = N(y; Hm + d, R + HPH^T).

Lemma 4.11. Let H, R, P be p × p matrices, m be a p × 1 vector, and suppose that R and P are positive definite. Then N(y; Hx, R) N(x; m, P) = q(y) N(x; m̃, P̃), where q(y) = N(y; Hm, R + HPH^T), m̃ = m + K(y − Hm), P̃ = (I − KH)P, and K = PH^T(HPH^T + R)^{−1}.
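Both Gaussian identities can be checked numerically in the scalar case (p = 1, H = 1, d = 0). The following sketch is illustrative only, with arbitrary values for R, P, m, and y:

```python
from math import exp, pi, sqrt

def npdf(y, mean, var):
    """Univariate Gaussian density N(y; mean, var)."""
    return exp(-(y - mean) ** 2 / (2 * var)) / sqrt(2 * pi * var)

# Arbitrary scalars: observation noise R, prior variance P, prior mean m.
R, P, m, y = 0.5, 2.0, 1.0, 0.3

# Lemma 4.10: the marginal integral of N(y; x, R) N(x; m, P) over x
# equals N(y; m, R + P); check by a fine Riemann sum on [-20, 20].
dx = 1e-3
grid = [-20 + i * dx for i in range(40001)]
lhs = sum(npdf(y, x, R) * npdf(x, m, P) for x in grid) * dx
assert abs(lhs - npdf(y, m, R + P)) < 1e-6

# Lemma 4.11: pointwise factorization with a Kalman-style update.
K = P / (P + R)                          # K = PH^T (HPH^T + R)^{-1}
m_t, P_t = m + K * (y - m), (1 - K) * P  # updated mean and variance
q = npdf(y, m, R + P)
for x in (-1.0, 0.0, 0.7, 2.5):
    assert abs(npdf(y, x, R) * npdf(x, m, P) - q * npdf(x, m_t, P_t)) < 1e-12
```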

Proof of Proposition 4.2.1. To obtain the intensity and cardinality formulas in the proposition, we apply Theorem 4.9 using the model given in Section 4.2.1. We start with the estimation of λ_{D_X}[αℓ(y_i|x)] = α ∫_W λ_{D_X}(x) ℓ(y_i|x) dx using Lemma 4.10. Observing that H = I, R = σ_{D_{Y_O}} I, m = µ_l^{D_X}, and P = σ_l^{D_X} I, we write

α ∫_W λ_{D_X}(x) ℓ(y_i|x) dx = α ∑_{l=1}^{N} c_l^{D_X} N(y_i; µ_l^{D_X}, (σ_{D_{Y_O}} + σ_l^{D_X}) I) = α⟨c^{D_X}, q(y_i)⟩.

The only portion of the formula that is not immediate is the form of ℓ(y|x)λ_{D_X}. Using Lemma 4.11 with H = I, R = σ_{D_{Y_O}} I, m = µ_l^{D_X}, and P = σ_l^{D_X} I, we have that ℓ(y|x)λ_{D_X}(x) =


∑_{l=1}^{N} c_l^{D_X} q_l(y) N*(x; µ_l^{x|y}, σ_l^{x|y} I), with µ_l^{x|y} = (σ_l^{D_X} y + σ_{D_{Y_O}} µ_l^{D_X}) / (σ_l^{D_X} + σ_{D_{Y_O}}) and σ_l^{x|y} = σ_{D_{Y_O}} σ_l^{D_X} / (σ_l^{D_X} + σ_{D_{Y_O}}), as required for C_l^{x|y}.

One can see that the intensity estimate in Equation (4.20) takes the form of a Gaussian mixture. A detailed example of this estimation is provided in Section 4.3. The cardinality distribution in Equation (4.21) is defined for infinitely many values of n, which cannot all be computed. Hence, for practical applications, we must truncate n at some N_max such that N_max is sufficiently larger than the number of points in the prior point process. Without loss of generality, we can choose N_max = N_0.
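The closed-form component update in Proposition 4.2.1 (the formulas for µ_l^{x|y} and σ_l^{x|y}) is a precision-weighted average of a prior mixture component and an observed point. A one-dimensional sketch, with hypothetical values for the variances and the observation:

```python
def posterior_component(y, mu_prior, var_prior, var_obs):
    """Posterior mean and variance of one Gaussian mixture component,
    following the formulas for mu_l^{x|y} and sigma_l^{x|y} in
    Proposition 4.2.1: a precision-weighted average of the prior
    component mean and the observed point y."""
    mu_post = (var_prior * y + var_obs * mu_prior) / (var_prior + var_obs)
    var_post = (var_obs * var_prior) / (var_prior + var_obs)
    return mu_post, var_post

# Hypothetical 1-d slice: prior component at 1.0, observed point at 1.4.
mu_post, var_post = posterior_component(y=1.4, mu_prior=1.0,
                                        var_prior=0.01, var_obs=0.05)
assert 1.0 < mu_post < 1.4         # lands between prior mean and observation
assert var_post < min(0.01, 0.05)  # posterior is more concentrated than either
```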

4.3 Sensitivity Analysis

To illustrate the estimation of the posterior using Equations (4.20) and (4.21), as well as

the effect of the choice of parameters on the posterior distributions, we give the following

numerical example. Example 1 discusses the effect of the parameters on the posterior

intensity and cardinality.

4.3.1 Example 1

This example exhibits the ability of our method to predict the posterior intensity and

cardinality distributions via the Bayesian framework by examining several parameters. To

reproduce these results, the interested reader may download our R-package BayesTDA.

We commence by defining an i.i.d. cluster point process with prior intensity modeled by a Gaussian mixture, as discussed in Section 4.2.3 (for an example, see Table 4.3). We consider point clouds generated from two circles joined at a point and focus on the 1-dimensional features of their persistence diagrams, as these are the prominent homological features of these shapes. This viewpoint is utilized to determine the informative prior, which has two prominent 1-dimensional features with higher persistence. We present the list of parameters used to define the prior point process in Tables 4.1 and 4.2. The prior intensity and cardinality for both types are shown in Table 4.3.


Table 4.1: List of parameters for (M1) used in Example 1. For Cases I and II, we take into account two types of prior intensities: (i) informative and (ii) unimodal uninformative.

Parameters for (M1)

  Prior                  | µ_i^{D_X}            | σ_i^{D_X}  | c_i^{D_X} | N_0
  Informative            | (1, 0.8), (1.6, 1.9) | 0.01, 0.01 | 1, 1      | 7
  Unimodal Uninformative | (1.5, 2)             | 1.5        | 1         | 7

Table 4.2: List of parameters for (M2) and (M3) used in Example 1. For Case-I, we consider the 1-dimensional persistence features obtained from the point clouds with Gaussian noise having variance 0.01I_2 (see Table 4.3), and for Case-II those obtained from point clouds with noise variance 0.1I_2 (see Table 4.4). To perform a sensitivity analysis, we implement different parameters for (M2) and (M3), which are listed here.

  Case    | σ_{D_{Y_O}} (M2) | µ_{D_{Y_U}} (M3) | M_0 | p_y | α
  Case-I  | 0.05             | 4                | 3   | 1/3 | 0.95
  Case-II | 0.5              | 2                | 3   | 2/3 | 0.95


The observed persistence diagrams are generated from point clouds sampled uniformly from the two circles with varying levels of Gaussian noise, with variances 0.01I_2 and 0.1I_2, as illustrated in the top left of Tables 4.3 and 4.4, respectively. These diagrams exhibit distinctive characteristics: two prominent features with high persistence and very few spurious features (Table 4.3), and two prominent features with medium persistence and several spurious features (Table 4.4).

Case-I: We consider informative and unimodal uninformative priors for the intensity and informative and uniform priors for the cardinality, as presented in row 1 and column 1 of Table 4.3. The observed persistence diagram considered here is shown in Table 4.3. The parameters defining the prior densities and those associated with the observed persistence diagram are listed in Tables 4.1 and 4.2. The obtained posteriors are presented in Table 4.3. For each observed persistence diagram, persistence features are presented as triangles on their corresponding posterior intensity plots. Since the observed persistence diagrams contain few spurious features and the variability of the observed persistence diagram is chosen to be low, all of the priors are able to estimate the prominent holes correctly, while ignoring the point that appeared due to noise.

Case-II: We consider both priors and the observed persistence diagram as in Case-I. Here we investigate the effect of the observed data by increasing σ_{D_{Y_O}}. With this change, we allow for greater variability in the observed data. We observe that the posterior intensities still correctly estimate the 1-dimensional features corresponding to the holes, and the cardinality estimates this probability accurately; see Table 4.4. Additionally, when given an informative cardinality (second row of Table 4.4), the model finds the correct cardinality as well.

4.4 Classification of Actin Filament Networks

4.4.1 Data

To demonstrate the effectiveness of our Bayesian method we classify actin filament networks

in plant cells. Such filaments are key in the study of intracellular transportation in plant

cells, as these bundles and networks make up the actin cytoskeleton, which determines the


Table 4.3: Posterior intensities and cardinalities for Example 1, Case I, obtained by using Proposition 4.2.1. We consider informative and unimodal uninformative priors for the intensity in columns 2 and 3, respectively, and informative and uniform priors for the cardinality in rows 2 and 3, respectively. The list of associated parameters is described in Example 1. Posteriors computed from all of these priors can estimate the 1-dimensional holes accurately for a list of varying parameters.


Table 4.4: Posterior intensities and cardinalities for Example 1, Case II, obtained by using Proposition 4.2.1. We consider informative and unimodal uninformative priors for the intensity in columns 2 and 3, respectively, and informative and uniform priors for the cardinality in rows 2 and 3, respectively. The list of associated parameters is described in Example 1.


structure of the cell and enables cellular motion. We examine the classification scheme using three classes of filament networks designated by their respective numbers of cross-linking proteins. Higher numbers of cross-linking proteins produce thicker actin cables [101], which in turn carry distinct local geometric signatures. However, the differences are not always notable due to the presence of noise and sparsity in the data. To bypass this, we explore these networks by means of their respective persistence diagrams, as they distill salient information about the network patterns with respect to connectedness and empty space (holes); i.e., we can differentiate between filament networks by examining their homological features. In particular, we focus on classifying simulated networks with numbers of cross-linking proteins N = 825, 1650, and 3300, which are denoted as C1, C2, and C3, respectively. The networks were

created using the AFINES stochastic simulation framework introduced in [43, 42], which

models the assembly of the actin cytoskeleton. From the viewpoint of topology, class C2

and class C3 contain more prominent holes than class C1. Also, their respective persistence

diagrams have different cardinalities. Hence, this topological aspect yields an important

contrast between these three classes. To capture these differences we employ the following

Bayes factor classification approach by relying on the closed form estimation of posterior

distributions discussed in Section 4.2.3.

4.4.2 Bayesian Classification Method

A PD D that needs to be classified is a sample from an i.i.d. cluster point process D with intensity λ_D and cardinality p_D, and its probability density has the form p_D(D) = p_D(|D|) ∏_{d∈D} λ_D(d). For a training set Q_{Y^k} := D_{Y^k_{1:n}}, for k = 1, ..., K, from K classes of random diagrams D_{Y^k}, we obtain the posterior intensities from the Bayesian framework outlined in Section 4.2. The posterior probability density of D given the training set Q_{Y^k} is given by

p_{D|D_{Y^k}}(D|Q_{Y^k}) = p_D(|D|) ∏_{d∈D} λ_{D|Q_{Y^k}}(d),    (4.23)

and consequently, the Bayes factor is obtained as the ratio

BF_{ij}(Q_{Y^i}, Q_{Y^j}) = p_{D|D_{Y^i}}(D|Q_{Y^i}) / p_{D|D_{Y^j}}(D|Q_{Y^j})

for classes i, j = 1, ..., K such that i ≠ j. For every pair (i, j), if BF_{ij}(Q_{Y^i}, Q_{Y^j}) > c, we



Figure 4.3: (a), (b), and (c) give an example of a persistence diagram generated from networks in classes 1, 2, and 3, respectively.

assign one vote to class i; otherwise, when BF_{ij}(Q_{Y^i}, Q_{Y^j}) < c, we assign one vote to class j. The final assignment of D to a class is obtained by a majority voting scheme.
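The pairwise Bayes factor voting described above can be sketched as follows. The per-class posterior densities are placeholders supplied by the caller (the numbers below are hypothetical), and the threshold is taken as c = 1:

```python
from itertools import combinations

def bayes_factor_vote(class_densities, c=1.0):
    """Majority vote over pairwise Bayes factors: for each pair (i, j),
    class i receives a vote if p_i / p_j > c, otherwise class j does.
    The class with the most votes wins."""
    votes = {k: 0 for k in class_densities}
    for i, j in combinations(class_densities, 2):
        if class_densities[i] / class_densities[j] > c:
            votes[i] += 1
        else:
            votes[j] += 1
    return max(votes, key=votes.get)

# Hypothetical posterior densities of one diagram under three classes.
densities = {"C1": 1e-4, "C2": 3e-3, "C3": 5e-4}
winner = bayes_factor_vote(densities)
assert winner == "C2"  # C2 beats both C1 and C3 pairwise
```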

4.4.3 Methods for Comparison

We compare this Bayesian method of classification with several other methods, which are detailed here. For all methods we use 10-fold cross validation to obtain accuracy scores: the data is partitioned into 10 sets, and each set serves as the test set exactly once while the other 9 serve as training. This provides a way to estimate the accuracy of the model and avoid overfitting.
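The fold bookkeeping amounts to the following sketch, using the dataset size n = 150 from Section 4.4.4; the stride-based assignment of indices to folds is one arbitrary choice among many:

```python
def k_fold_indices(n, k=10):
    """Partition indices 0..n-1 into k folds; fold f plays the test set
    exactly once while the remaining k-1 folds form the training set."""
    folds = [list(range(f, n, k)) for f in range(k)]
    splits = []
    for f in range(k):
        test = folds[f]
        train = [i for g, fold in enumerate(folds) if g != f for i in fold]
        splits.append((train, test))
    return splits

splits = k_fold_indices(150, 10)
# Every observation appears in exactly one test set across the 10 splits.
all_test = sorted(i for _, test in splits for i in test)
assert all_test == list(range(150))
```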

We compare with two other persistent homology methods, namely persistence images [1] and persistence landscapes [12], which are explained in Chapter 2. We use a 50 × 50 grid and σ² = 0.1 as in [1], resulting in vectors of length 2500. For calculating the landscapes, we used 2500 values and a degree of 3. Additionally, we compare our method with one that does not use any TDA: raster images. For this comparison, we convert each filament network into a raster image with 2500 pixels. Raster images, commonly used in geographic information data [20], represent data by using a grid with a value assigned to each pixel. For this application, we use the R package 'raster' [54] and a function that counts the number of actin points that appear within each pixel. The number of points determines the value at


that pixel. An example of a raster image generated from one of the filament networks in the

dataset is shown in Figure 4.4. The vectors created from these three processes are input into

three different classification algorithms: a random forest with 500 trees, a support vector machine with a radial basis function kernel, and a 2-layer neural network. We explain these three

methods below, and refer the reader to [44] for further details.

Tree classifiers work by partitioning the predictor space into different boxes; each box receives one predicted value, typically the mean (or, for classification, the majority class) of the training observations in that region. Random forests consist of bagged trees, i.e., trees fit on repeated uniform samples drawn with replacement, but instead of considering all possible splits when creating a partition, a random sample of √p predictors is used, where p is the total number of predictors.

Support vector machines (SVMs) use hyperplanes, typically in a higher dimensional space than the predictor space, to separate different classes of data. These classifiers use a soft margin, allowing some of the observations to be on the incorrect side of the hyperplane. To determine the classifier, inner products between observations must be calculated. To do this efficiently, a kernel is used, in this case a radial kernel, K(x, y) = e^{−γ ∑_{i=1}^{p} (x_i − y_i)²}.
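The radial kernel is straightforward to compute directly; γ = 0.5 below is an arbitrary illustrative value:

```python
from math import exp

def rbf_kernel(x, y, gamma=0.5):
    """Radial basis kernel K(x, y) = exp(-gamma * sum_i (x_i - y_i)^2),
    which acts as an inner product in an implicit higher-dimensional
    feature space."""
    return exp(-gamma * sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

a, b = (0.0, 1.0, 2.0), (0.5, 1.0, 1.5)
assert rbf_kernel(a, a) == 1.0               # maximal self-similarity
assert rbf_kernel(a, b) == rbf_kernel(b, a)  # symmetry
assert 0.0 < rbf_kernel(a, b) < 1.0          # decays with distance
```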

Neural networks derive features for classification using the sigmoid function 1/(1 + e^{−x}) of a linear combination of predictors. These features are again put through a linear transformation, and the output of this is used as input either to a function for the final class prediction (for a 1-layer neural network) or to another sequence of transformations (for a neural network with more than 1 layer). Consequently, neural networks give a very flexible, nonlinear method of classification.
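A single sigmoid-derived feature, as described above, can be sketched as follows; the weights and bias are arbitrary illustrative values:

```python
from math import exp

def sigmoid(x):
    """Logistic function 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + exp(-x))

def derived_feature(predictors, weights, bias):
    """One hidden unit: the sigmoid of a linear combination of the
    predictors. A 1-layer network feeds such features into a final
    linear map for class prediction."""
    z = sum(w * p for w, p in zip(weights, predictors)) + bias
    return sigmoid(z)

f = derived_feature([0.2, -1.0, 3.0], [0.5, 0.1, -0.2], bias=0.4)
assert 0.0 < f < 1.0  # sigmoid outputs always lie in (0, 1)
```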

4.4.4 Result

Bayes factor classification was applied to a dataset of 150 synthetic filament networks that

consist of the coordinates for the actin filaments. From these coordinates, persistence

diagrams with 1-dimensional features (see Figure 4.3 for an example of each class) were

created from Vietoris-Rips complexes, which were then used as input for the Bayes factor classification scheme of Section 4.4.2. The corresponding persistence diagrams of these networks show few consistent patterns, and consequently, we embraced a data-driven scheme for classification by means of an uninformative flat prior with intensity having the Gaussian



Figure 4.4: (a) An example filament network from class 1. (b) The network in (a) converted to a raster image.

mixture form N((1, 2), 6²) and cardinality having a binomial form with parameters N_0 = 25 and p_{D_X} = 1. The persistence diagrams of these network filaments produced a very bulky dataset that is computationally prohibitive. To alleviate this issue, we subsample the data. For all cases, we set σ_{D_{Y_O}} = 0.01, µ_{D_{Y_U}}(y) = 1, and p_{D_{Y_U}} = 2/25. The posterior is calculated using the training set for each fold and for each class. Then for each instance, we assign the class by using the majority voting scheme.

We compare our method with persistence landscapes, persistence images, and raster

images, as input for several machine learning methods. The results are listed in Table 4.5.

4.5 Conclusion

This chapter has proposed a Bayesian framework for persistence diagrams that incorporates

prior beliefs about the data for the computation of posterior distributions for persistence

diagrams. Our method perceives persistence diagrams as point processes and provides a

powerful probabilistic descriptor of the diagrams by simultaneously packaging the cardinality

and spatial distributions. As required for a Bayesian paradigm, we incorporate prior

uncertainty by viewing persistence diagrams as i.i.d. cluster point processes and persistence

diagrams of noisy observations as marked point processes to model the level of confidence

that observations are representative of the underlying topological state. This framework can

be thought of as a complement to the Bayesian framework in [76] and completes the full


Table 4.5: Comparison of accuracy results for the actin network classification.

  Method                 | Accuracy
  Bayesian Framework     | 90%
  Random Forest PI       | 80%
  SVM PI                 | 75%
  Neural Net PI          | 72%
  Random Forest PL       | 68%
  SVM PL                 | 55%
  Neural Net PL          | 63%
  Random Forest Raster   | 55%
  SVM Raster             | 65%
  Neural Net Raster      | 43%


Bayesian framework for persistent homology in the literature by estimating the cardinality

of the diagrams as well.

I.i.d. cluster point processes are a generalization of Poisson point processes and offer

more flexibility in estimating the cardinality distribution, which when combined with

the corresponding intensity distribution, provides complete knowledge of the posterior

distribution. Hence, a prior point process with predefined intensity and cardinality, along

with a pertinent likelihood function, supply a posterior point process with updated intensity

and cardinality by means of the developed Bayesian framework. It is noteworthy that our

Bayesian model directly employs persistence diagrams, which are topological summaries of

data, for defining a substitution likelihood rather than using the entire point cloud. This

deviates from a strict Bayesian model, as we consider the statistics of persistence diagrams

rather than the underlying datasets used to create them; however, our paradigm incorporates

prior knowledge and observed data summaries to create posterior probabilities, analogous to

the notion of substitution likelihood detailed in [59]. Indeed, the idea of utilizing topological

summaries of point clouds in place of the actual point clouds proves to be a powerful tool with

applications in wide-ranging fields. This process incorporates topological descriptors of point

clouds, which simultaneously decipher essential shape peculiarities and avoid unnecessarily

complex geometric features.

We derive a closed form of the posterior for realistic implementation, using Gaussian

mixtures for the prior intensity and binomials for the prior cardinality. Several detailed

examples showcase the posterior intensities and cardinalities for various interesting instances

created by varying parameters within the model. These examples exhibit our method’s

ability to recover the underlying persistence diagram, analogously to the standard Bayesian

paradigm for random sets. Thus, the Bayesian inference developed here opens up new

avenues for machine learning algorithms and data analysis techniques to be applied directly

on the space of persistence diagrams. Indeed, we derive a classification algorithm and

successfully apply it to filament network data, reinforcing the capability of our Bayesian

framework. By shedding light on the biophysical mechanisms that drive intracellular transport in plant cells, this classification of cytoskeletal filament networks substantially bolsters the efficacy of our method.


Bibliography

58

Page 72: Persistence Diagram Cardinality and its Applications

[1] Adams, H., Emerson, T., Kirby, M., Neville, R., Peterson, C., Shipman, P.,

Chepushtanova, S., Hanson, E., Motta, F., and Ziegelmeier, L. (2017). Persistence images:

A stable vector representation of persistent homology. The Journal of Machine Learning

Research, 18(1):218–252. 3, 13, 53

[2] Adams, H., Tausz, A., and Vejdemo-Johansson, M. (2014). Javaplex: A research software

package for persistent (co) homology. In International Congress on Mathematical Software,

pages 129–136. Springer. 2

[3] Adler, R. J., Bobrowski, O., Borman, M. S., Subag, E., Weinberger, S., et al. (2010).

Persistent homology for random fields and complexes. In Borrowing strength: theory

powering applications–a Festschrift for Lawrence D. Brown, pages 124–143. Institute of

Mathematical Statistics. 3

[4] Alexandroff, P. (1928). Uber den allgemeinen dimensionsbegriff und seine beziehungen

zur elementaren geometrischen anschauung. Mathematische Annalen, 98(1):617–635. 10

[5] Bauer, U., Kerber, M., Reininghaus, J., and Wagner, H. (2017). Phat–persistent

homology algorithms toolbox. Journal of symbolic computation, 78:76–90. 2

[6] Belkin, M. and Niyogi, P. (2003). Laplacian eigenmaps for dimensionality reduction and

data representation. Neural computation, 15(6):1373–1396. 1

[7] Bendich, P., Edelsbrunner, H., and Kerber, M. (2010). Computing robustness and

persistence for images. IEEE transactions on visualization and computer graphics,

16(6):1251–1260. 2

[8] Bendich, P., Marron, J. S., Miller, E., Pieloch, A., and Skwerer, S. (2016). Persistent

homology analysis of brain artery trees. The annals of applied statistics, 10(1):198. 2

[9] Bobrowski, O., Mukherjee, S., Taylor, J. E., et al. (2017). Topological consistency via

kernel estimation. Bernoulli, 23(1):288–328. 3

[10] Breuer, D., Nowak, J., Ivakov, A., Somssich, M., Persson, S., and Nikoloski, Z.

(2017). System-wide organization of actin cytoskeleton determines organelle transport in

59

Page 73: Persistence Diagram Cardinality and its Applications

hypocotyl plant cells. Proceedings of the National Academy of Sciences, 114(28):E5741–

E5749. 31

[11] Breusch, T. S. and Pagan, A. R. (1979). A simple test for heteroscedasticity and random

coefficient variation. Econometrica: Journal of the Econometric Society, pages 1287–1294.

26

[12] Bubenik, P. (2015). Statistical topological data analysis using persistence landscapes.

Journal of Machine Learning Research, 16:77–102. 3, 13, 53

[13] Bubenik, P. and D lotko, P. (2017). A persistence landscapes toolbox for topological

statistics. Journal of Symbolic Computation, 78:91–114. 3

[14] Cang, Z., Mu, L., Wu, K., Opron, K., Xia, K., and Wei, G.-W. (2015). A topological

approach for protein classification. Computational and Mathematical Biophysics, 3(1). 2

[15] Cang, Z. and Wei, G.-W. (2017). Analysis and prediction of protein folding

energy changes upon mutation by element specific persistent homology. Bioinformatics,

33(22):3549–3557. 2

[16] Carlsson, G. (2009). Topology and data. Bulletin of the American Mathematical Society,

46(2):255–308. 1

[17] Carlsson, G., Ishkhanov, T., De Silva, V., and Zomorodian, A. (2008). On the local

behavior of spaces of natural images. International journal of computer vision, 76(1):1–12.

2

[18] Carriere, M., Chazal, F., Datashape, I. S., Ike, Y., Lacombe, T., Royer, M., and Umeda,

Y. (2019). Perslay: A simple and versatile neural network layer for persistence diagrams.

stat, 1050:5. 4

[19] Carriere, M., Oudot, S. Y., and Ovsjanikov, M. (2015). Stable topological signatures for

points on 3d shapes. In Computer Graphics Forum, volume 34, pages 1–12. Wiley Online

Library. 3

60

Page 74: Persistence Diagram Cardinality and its Applications

[20] Chang, K.-T. (2008). Introduction to geographic information systems, volume 4.

McGraw-Hill Boston. 53

[21] Chazal, F., Cohen-Steiner, D., Glisse, M., Guibas, L. J., and Oudot, S. Y. (2009).

Proximity of persistence modules and their diagrams. In Proceedings of the twenty-fifth

annual symposium on Computational geometry, pages 237–246. 3

[22] Chazal, F., Cohen-Steiner, D., and Merigot, Q. (2011). Geometric inference for

probability measures. Foundations of Computational Mathematics, 11(6):733–751. 11

[23] Chazal, F., Glisse, M., Labruere, C., and Michel, B. (2015). Convergence rates for

persistence diagram estimation in topological data analysis. The Journal of Machine

Learning Research, 16(1):3603–3635. 3

[24] Cheng, Y. (1995). Mean shift, mode seeking, and clustering. IEEE transactions on

pattern analysis and machine intelligence, 17(8):790–799. 1

[25] Chisholm, J. A. and Motherwell, S. (2004). A new algorithm for performing

three-dimensional searches of the cambridge structural database. Journal of applied

crystallography, 37(2):331–334. 24

[26] Cohen-Steiner, D., Edelsbrunner, H., and Harer, J. (2007). Stability of persistence

diagrams. Discrete & Computational Geometry, 37(1):103–120. 3, 17

[27] Cohen-Steiner, D., Edelsbrunner, H., Harer, J., and Mileyko, Y. (2010). Lipschitz

functions have lp-stable persistence. Foundations of Computational Mathematics, 10(2).

17

[28] Cohen-Steiner, D., Edelsbrunner, H., and Morozov, D. (2006). Vines and vineyards by

updating persistence in linear time. In Proceedings of the twenty-second annual symposium

on Computational geometry, pages 119–126. 2

[29] Coifman, R. R. and Lafon, S. (2006). Diffusion maps. Applied and computational

harmonic analysis, 21(1):5–30. 1

61

Page 75: Persistence Diagram Cardinality and its Applications

[30] Dabaghian, Y., Memoli, F., Frank, L., and Carlsson, G. (2012). A topological paradigm

for hippocampal spatial map formation using persistent homology. PLoS computational

biology, 8(8):e1002581. 2

[31] Daley, D. J. and Vere-Jones, D. (1988). An introduction to the theory of point processes.

Springer-Verlag, New York. 32, 33

[32] De Silva, V. and Ghrist, R. (2007). Coverage in sensor networks via persistent homology.

Algebraic & Geometric Topology, 7(1):339–358. 2

[33] Donato, I., Gori, M., Pettini, M., Petri, G., De Nigris, S., Franzosi, R., and Vaccarino, F.

(2016). Persistent homology analysis of phase transitions. Physical Review E, 93(5):052138.

2

[34] Edelsbrunner, H. and Harer, J. (2008). Persistent homology-a survey. Contemporary

mathematics, 453:257–282. 6

[35] Edelsbrunner, H. and Harer, J. (2010). Computational topology: an introduction.

American Mathematical Soc. 6, 9, 10, 11

[36] Edelsbrunner, H., Letscher, D., and Zomorodian, A. (2000). Topological persistence and

simplification. In Proceedings 41st annual symposium on foundations of computer science,

pages 454–463. IEEE. 1, 2

[37] Efron, B. and Hastie, T. (2016). Computer age statistical inference, volume 5.

Cambridge University Press. 22

[38] Emmett, K., Rosenbloom, D., Camara, P., and Rabadan, R. (2014). Parametric

inference using persistence diagrams: A case study in population genetics. arXiv preprint

arXiv:1406.4582. 3

[39] Emrani, S., Gentimis, T., and Krim, H. (2014). Persistent homology of delay embeddings

and its application to wheeze detection. IEEE Signal Processing Letters, 21(4):459–463. 3

[40] Fasy, B. T., Kim, J., Lecci, F., and Maria, C. (2014). Introduction to the r package tda.

arXiv preprint arXiv:1411.1830. 2

62

Page 76: Persistence Diagram Cardinality and its Applications

[41] Fasy, B. T. and et al. (2014). Confidence sets for persistence diagrams. The Annals of

Statistics, 42(6):2301–2339. 3, 5, 11, 17, 18, 32

[42] Freedman, S. L., Banerjee, S., Hocky, G. M., and Dinner, A. R. (2017). A versatile

framework for simulating the dynamic mechanical structure of cytoskeletal networks.

Biophysical journal, 113(2):448–460. 31, 52

[43] Freedman, S. L., Hocky, G. M., Banerjee, S., and Dinner, A. R. (2018). Nonequilibrium

phase diagrams for actomyosin networks. Soft matter, 14(37):7740–7747. 52

[44] Friedman, J., Hastie, T., and Tibshirani, R. (2001). The elements of statistical learning,

volume 1. Springer series in statistics New York. 54

[45] Frosini, P. (1990). A distance for similarity classes of submanifolds of a euclidean space.

Bulletin of the Australian Mathematical Society, 42(3):407–415. 1

[46] Ghrist, R. (2008). Barcodes: the persistent topology of data. Bulletin of the American

Mathematical Society, 45(1):61–75. 6

[47] Gidea, M. and Katz, Y. (2018). Topological data analysis of financial time series:

Landscapes of crashes. Physica A: Statistical Mechanics and its Applications, 491:820–

834. 2

[48] Goff, M. (2011). Extremal betti numbers of vietoris–rips complexes. Discrete &

Computational Geometry, 46(1):132–155. 21

[49] Guillemard, M. and Iske, A. (2011). Signal filtering and persistent homology: an

illustrative example. Proc. Sampling Theory and Applications (SampTA11). 2

[50] Hartigan, J. A. (1975). Clustering algorithms. John Wiley & Sons, Inc. 1

[51] Hartigan, J. A. (1981). Consistency of single linkage for high-density clusters. Journal

of the American Statistical Association, 76(374):388–394. 1

[52] Hatcher, A. (2002). Algebraic Topology. Cambridge University Press. 10

63

Page 77: Persistence Diagram Cardinality and its Applications

[53] Hicks, D., Oses, C., Gossett, E., Gomez, G., Taylor, R. H., Toher, C., Mehl, M. J., Levy,

O., and Curtarolo, S. (2018). Aflow-sym: platform for the complete, automatic and self-

consistent symmetry analysis of crystals. Acta Crystallographica Section A: Foundations

and Advances, 74(3):184–203. 24

[54] Hijmans, R. J. (2019). raster: Geographic Data Analysis and Modeling. R package

version 3.0-2. 53

[55] Hiraoka, Y., Nakamura, T., Hirata, A., Escolar, E. G., Matsue, K., and Nishiura, Y.

(2016). Hierarchical structures of amorphous solids characterized by persistent homology.

Proceedings of the National Academy of Sciences, 113(26):7035–7040. 2

[56] Hiraoka, Y., Shirai, T., Trinh, K. D., et al. (2018). Limit theorems for persistence

diagrams. The Annals of Applied Probability, 28(5):2740–2780. 3

[57] Honeycutt, J. D. and Andersen, H. C. (1987). Molecular dynamics study of melting and

freezing of small lennard-jones clusters. Journal of Physical Chemistry, 91(19):4950–4963.

24

[58] Huo, X., Ni, X. S., Smith, A. K., Liao, T., and Triantaphyllou, E. (2007). A survey

of manifold-based learning methods. Recent advances in data mining of enterprise data,

pages 691–745. 1

[59] Jeffreys, H. (1961). Theory of Probability. Clarendon Press. 57

[60] Kasson, P. M., Zomorodian, A., Park, S., Singhal, N., Guibas, L. J., and Pande, V. S.

(2007). Persistent voids: a new structural metric for membrane fusion. Bioinformatics,

23(14):1753–1759. 2

[61] Kelly, T. F., Miller, M. K., Rajan, K., and Ringer, S. P. (2013). Atomic-scale

tomography: A 2020 vision. Microscopy and Microanalysis, 19(3):652–664. 24

[62] Kerber, M., Morozov, D., and Nigmetov, A. (2017). Geometry helps to compare

persistence diagrams. Journal of Experimental Algorithmics (JEA), 22:1–4. 2, 5, 32

64

Page 78: Persistence Diagram Cardinality and its Applications

[63] Kusano, G., Hiraoka, Y., and Fukumizu, K. (2016). Persistence weighted Gaussian kernel for topological data analysis. In International Conference on Machine Learning, pages 2004–2013. 3

[64] Lacombe, T., Cuturi, M., and Oudot, S. (2018). Large scale computation of means and clusters for persistence diagrams using optimal transport. In Advances in Neural Information Processing Systems, pages 9770–9780. 4

[65] Larsen, P. M., Schmidt, S., and Schiøtz, J. (2016). Robust structural identification via polyhedral template matching. Modelling and Simulation in Materials Science and Engineering, 24(5):055007. 24

[66] Lee, H., Kang, H., Chung, M. K., Kim, B.-N., and Lee, D. S. (2012). Persistent brain network homology from the perspective of dendrogram. IEEE Transactions on Medical Imaging, 31(12):2267–2277. 2

[67] Lee, Y. et al. (2017). Quantifying similarity of pore-geometry in nanoporous materials. Nature Communications, 8:15396. 2

[68] Lin, T. and Zha, H. (2008). Riemannian manifold learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(5):796–809. 1

[69] Madison, S. L. and Nebenfuhr, A. (2013). Understanding myosin functions in plants: are we there yet? Current Opinion in Plant Biology, 16(6):710–717. 31

[70] Mahler, R. (2007). Statistical Multisource-Multitarget Information Fusion. Artech House, Boston. 36

[71] Marchese, A. and Maroulas, V. (2018). Signal classification with a point process distance on the space of persistence diagrams. Advances in Data Analysis and Classification, 12(3):657–682. 2, 3, 4, 5, 16, 32

[72] Marchese, A., Maroulas, V., and Mike, J. (2017). k-means clustering on the space of persistence diagrams. In Wavelets and Sparsity XVII, volume 10394, page 103940W. International Society for Optics and Photonics. 4

[73] Maria, C., Boissonnat, J.-D., Glisse, M., and Yvinec, M. (2014). The GUDHI library: Simplicial complexes and persistent homology. In International Congress on Mathematical Software, pages 167–174. Springer. 2

[74] Maroulas, V., Micucci, C. P., and Spannaus, A. (2019a). A stable cardinality distance for topological classification. Advances in Data Analysis and Classification, pages 1–18. 5, 15, 26, 32

[75] Maroulas, V., Mike, J. L., and Oballe, C. (2019b). Nonparametric estimation of probability density functions of random persistence diagrams. Journal of Machine Learning Research, 20(151):1–49. 3

[76] Maroulas, V., Nasrin, F., and Oballe, C. (2020). A Bayesian framework for persistent homology. SIAM Journal on Mathematics of Data Science, 2(1):48–74. 4, 33, 41, 55

[77] Maroulas, V. and Nebenfuhr, A. (2015). Tracking rapid intracellular movements: a Bayesian random set approach. The Annals of Applied Statistics, 9(2):926–949. 31

[78] Mileyko, Y., Mukherjee, S., and Harer, J. (2011). Probability measures on the space of persistence diagrams. Inverse Problems, 27(12):124007. 3

[79] Miller, M. K., Kelly, T. F., Rajan, K., and Ringer, S. P. (2012). The future of atom probe tomography. Materials Today, 15(4):158–165. 24

[80] Milosavljevic, N., Morozov, D., and Skraba, P. (2011). Zigzag persistent homology in matrix multiplication time. In Proceedings of the Twenty-Seventh Annual Symposium on Computational Geometry, pages 216–225. 2

[81] Mischaikow, K. and Nanda, V. (2013). Morse theory for filtrations and efficient computation of persistent homology. Discrete & Computational Geometry, 50(2):330–353. 2

[82] Mlynarczyk, P. J. and Abel, S. M. (2019). First passage of molecular motors on networks of cytoskeletal filaments. Physical Review E, 99:022406. 31

[83] Moody, M. P., Gault, B., Stephenson, L. T., Marceau, R. K., Powles, R. C., Ceguerra, A. V., Breen, A. J., and Ringer, S. P. (2011). Lattice rectification in atom probe tomography: Toward true three-dimensional atomic microscopy. Microscopy and Microanalysis, 17(2):226–239. 24

[84] Moyal, J. E. (1962). The general theory of stochastic population processes. Acta Mathematica, 108(1):1–31. 35, 42

[85] Munkres, J. (2014). Topology. Pearson Education. 6

[86] Nasrin, F., Micucci, C., and Maroulas, V. (2020). Bayesian topological learning for determining the structure of actin cytoskeleton. In preparation. 30, 40

[87] Perea, J. A. and Harer, J. (2015). Sliding windows and persistence: An application of topological methods to signal analysis. Foundations of Computational Mathematics, 15(3):799–838. 2

[88] Pereira, C. M. and de Mello, R. F. (2015). Persistent homology for time series and spatial data clustering. Expert Systems with Applications, 42(15-16):6026–6038. 3

[89] Pfender, F. and Ziegler, G. M. (2004). Kissing numbers, sphere packings, and some unexpected proofs. Notices of the American Mathematical Society, 51:873–883. 21

[90] Pless, R. and Souvenir, R. (2009). A survey of manifold learning for images. IPSJ Transactions on Computer Vision and Applications, 1:83–94. 1

[91] Popov, K., Komianos, J., and Papoian, G. A. (2016). MEDYAN: Mechanochemical simulations of contraction and polarity alignment in actomyosin networks. PLOS Computational Biology, 12(4):1–35. 31

[92] Porter, K. and Day, B. (2016). From filaments to function: the role of the plant actin cytoskeleton in pathogen perception, signaling and immunity. Journal of Integrative Plant Biology, 58(4):299–311. 30

[93] Reininghaus, J., Huber, S., Bauer, U., and Kwitt, R. (2015). A stable multi-scale kernel for topological machine learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4741–4748. 3

[94] Robins, V. (1999). Towards computing homology from finite approximations. In Topology Proceedings, volume 24, pages 503–532. 1

[95] Robinson, A. and Turner, K. (2017). Hypothesis testing for topological data analysis. Journal of Applied and Computational Topology, 1(2):241–261. 4

[96] Sgouralis, I., Nebenfuhr, A., and Maroulas, V. (2017). A Bayesian topological framework for the identification and reconstruction of subcellular motion. SIAM Journal on Imaging Sciences, 10(2):871–899. 2, 31

[97] Shimmen, T. and Yokota, E. (2004). Cytoplasmic streaming in plants. Current Opinion in Cell Biology, 16(1):68–72. 31

[98] Sousbie, T. (2011). The persistent cosmic web and its filamentary structure – I. Theory and implementation. Monthly Notices of the Royal Astronomical Society, 414(1):350–383. 2

[99] Staiger, C., Baluska, F., Volkmann, D., and Barlow, P. (2000). Actin: A Dynamic Framework for Multiple Plant Cell Functions. Springer, Amsterdam, 1st edition. 31

[100] Streit, R. (2013). The probability generating functional for finite point processes, and its application to the comparison of PHD and intensity filters. Journal of Advances in Information Fusion, 8(2):119–132. 36

[101] Tang, H., Laporte, D., and Vavylonis, D. (2014). Actin cable distribution and dynamics arising from cross-linking, motor pulling, and filament turnover. Molecular Biology of the Cell, 25(19):3006–3016. 31, 52

[102] Tenenbaum, J. B., De Silva, V., and Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323. 1

[103] Thomas, C., Tholl, S., Moes, D., Dieterle, M., Papuga, J., Moreau, F., and Steinmetz, A. (2009). Actin bundling in plants. Cell Motility and the Cytoskeleton, 66(11):940–957. 31

[104] Togo, A. and Tanaka, I. (2018). Spglib: a software library for crystal symmetry search. arXiv preprint arXiv:1808.01590. 24

[105] Townsend, J., Micucci, C. P., Hymel, J. H., Maroulas, V., and Vogiatzis, K. (2019). Representation of molecular structures with persistent homology leads to the discovery of molecular groups with enhanced CO2 binding. ChemRxiv. 2

[106] Tralie, C., Saul, N., and Bar-On, R. (2018). Ripser.py: A lean persistent homology library for Python. Journal of Open Source Software, 3(29):925. 2

[107] Tralie, C. J. and Perea, J. A. (2018). (Quasi)periodicity quantification in video data, using topology. SIAM Journal on Imaging Sciences, 11(2):1049–1077. 2

[108] Turner, K., Mileyko, Y., Mukherjee, S., and Harer, J. (2014). Fréchet means for distributions of persistence diagrams. Discrete & Computational Geometry, 52(1):44–70. 3

[109] Venkataraman, V., Ramamurthy, K. N., and Turaga, P. (2016). Persistent homology of attractors for action recognition. In 2016 IEEE International Conference on Image Processing (ICIP), pages 4150–4154. IEEE. 2

[110] Wagner, H., Chen, C., and Vucini, E. (2012). Efficient computation of persistent homology for cubical data. In Topological Methods in Data Analysis and Visualization II, pages 91–106. Springer. 2

[111] Wasserman, L. (2018). Topological data analysis. Annual Review of Statistics and Its Application, 5:501–532. 6

[112] Xia, K. and Wei, G.-W. (2014). Persistent homology analysis of protein structure, flexibility, and folding. International Journal for Numerical Methods in Biomedical Engineering, 30(8):814–844. 2

[113] Zhang, Y., Zuo, T. T., Tang, Z., Gao, M. C., Dahmen, K. A., Liaw, P. K., and Lu, Z. P. (2014). Microstructures and properties of high-entropy alloys. Progress in Materials Science, 61. 25

[114] Zomorodian, A. and Carlsson, G. (2005). Computing persistent homology. Discrete & Computational Geometry, 33(2):249–274. 1

Vita

Cassie Micucci is from the small town of Fayetteville, TN. She graduated from Tennessee Technological University in 2012 with a B.S. in Mathematics and a minor in music. Cassie began the mathematics Ph.D. program at the University of Tennessee in 2015, eventually working in topological data analysis with applications in chemistry, biochemistry, and materials under research advisor Professor Vasileios Maroulas. She also studied machine learning and data science as part of an M.S. degree in Statistics, also from the University of Tennessee.