Welcome to the Jungle
This Matrix Factorization Jungle page ventures beyond classical decompositions into factorizations that impose structural assumptions on the unknowns: sparsity, low rank, non-negativity, subspace membership, and more. As sparse recovery solvers matured, a rich ecosystem emerged. Sparse Coding & Dictionary Learning (Olshausen & Field) and NMF (Lee & Seung) were the first such decompositions to be widely used with constraints on the factor matrices. Yet, even as practitioners adopted these factorizations, there was a sense that they "weren't rigorous enough", since little theoretical understanding accompanied the techniques. The discovery of compressed sensing (Candès, Romberg, Tao, Donoho) and its extension to matrices, i.e. the realization that nuclear norms can serve as convex proxies for rank, opened the floodgates to powerful new decomposition techniques.
What distinguishes these methods is that they come with phase transitions: sharp boundaries in parameter space where algorithms succeed or fail. These phase transitions are generally a sign that the algorithm, or the factorization itself, has hit a hard limit, often rooted in the computational complexity of the problem being solved. This document maps that landscape, from video surveillance to recommendation systems, blending randomized linear algebra, nonconvex optimization, and probabilistic modeling.
Traditional Matrix Decompositions: There is a stellar and comprehensive treatment of classical matrix factorization techniques (LU, Cholesky, QR, SVD, Polar, Eigendecomposition, Jordan, Schur, CUR/Skeleton, Hessenberg, Tridiagonal, and Bidiagonal decompositions) in Jun Lu's monographs "Matrix Decomposition and Applications" and "Numerical Matrix Decomposition and its Modern Applications". Most of these traditional techniques put no constraints on the matrix/data to be factorized. In scientific computing, however, operators are sometimes approximated with decompositions and factorizations to permit the design of better inverse problem solvers (see the recent survey "Fast Direct Solvers" by Per-Gunnar Martinsson and Michael O'Neil).
The New Frontier: Additional information about LoRA adapters used with LLMs comes from the survey by Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, and Sai Qian Zhang.
"The Sparsest of Them All" — Evolution of phase transitions from Gauss to Krzakala, Mézard, Sausset, Sun, Zdeborová
References
- B.A. Olshausen & D.J. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images", Nature 381, 1996.
- D.D. Lee & H.S. Seung, "Learning the parts of objects by non-negative matrix factorization", Nature 401, 1999.
- J. Mairal, F. Bach, J. Ponce, G. Sapiro, "Sparse Coding and Dictionary Learning for Image Analysis", ICCV 2009 Tutorial.
- E.J. Candès, J. Romberg & T. Tao, "Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information", IEEE Trans. Information Theory, 2006.
- D.L. Donoho, "Compressed Sensing", IEEE Trans. Information Theory, 2006.
- B. Recht, M. Fazel & P.A. Parrilo, "Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization", SIAM Review, 2010.
- J. Lu, "Matrix Decomposition and Applications", arXiv:2201.00145, 2022.
- J. Lu, "Numerical Matrix Decomposition and its Modern Applications", arXiv:2107.02579, 2021.
- P.-G. Martinsson & M. O'Neil, "Fast Direct Solvers", arXiv:2511.07773, 2025.
- "Complexity Zoo" — Algorithmic Complexity Reference.
- "Complexity Zoo Petting Zoo" — Introduction to Complexity Classes.
This page was first announced on Nuit Blanche in August 2011.
Quick Reference for Advanced Matrix Factorization: Structural Assumptions & Unknowns
FACTORIZATIONS (A = DX, UVT)

| Problem | Form | Loss D(A;DX) | Known | Unknown | Key Constraints | Phase Trans. |
|---|---|---|---|---|---|---|
| SVD | A = DX | ||A-DX||F2 | A | D, X | DTD = I, XXT = Λ (diagonal) | — |
| pLSI | A = DX | KL(A;DX) | A | D, X | 1TD1 = 1, dij ≥ 0, 1TX = 1, xij ≥ 0 | — |
| NMF | A = DX | ||A-DX||F2 or KL(A;DX) | A | D, X | D ≥ 0, X ≥ 0 | — |
| Dictionary Learning | A = DX | ||A-DX||F2 | A | D, X | ||dj||2 = 1, X sparse | Yes |
| Sparse PCA | A = DX | ||A-DX||F2 | A | D, X | ||d||2 = 1, D sparse | Yes |
| K-Means | A = DX | ||A-DX||F2 | A | D, X | XXT = I, Xij ∈ {0,1} | Yes |
| K-Medians | A = DX | ||A-DX||1 | A | D, X | XXT = I, Xij ∈ {0,1} | Yes |
| Subspace Clustering | A = AX | ||A-AX||F2 | A | X | diag(X) = 0, sparse/low-rank | — |
| MMV / Comp. Sensing | Y = AX | ||Y-AX||F2 | Y, A | X | X joint row-sparse | Yes |
| BSS / ICA | Y = AX | ||Y-AX||F2 | Y | A, X | rows of X independent | — |
| Recommender Systems | R = UVT | ||R-UVT||F2 | RΩ | U, V | low-rank | — |
| LoRA | W' = W + BA | L(W+BA) | W | B, A | B∈Rd×r, A∈Rr×k, r ≪ d,k | — |
| Autoencoders | X = D(E(X)) | ||X-D(E(X))||F2 | X | E, D | bottleneck dim k; linear → PCA | — |
| Non-Convex Opt. | M = UVT | ||M-UVT||F2 | M | U, V | rank(UVT) ≤ r | Yes |
| Tensor Decomp. | T = ∑ a⊗b⊗c | ||T-∑a⊗b⊗c||F2 | T | factors | CP/Tucker/TT structure | — |
| Graph Matching | A = XBXT | ||A-XBXT||F2 | A | X, B | X permutation matrix | Yes |
| Generalized MF | W⊙L = W⊙UVT | ||W⊙(L-UVT)||F2 | W (mask) | U, V, L | L lowest rank | — |
| Archetypal Analysis | A = DX, D = AB | ||A-DX||F2 | A | D, X, B | X, B ≥ 0, convex hull | — |
| Matrix Comp. Sensing | A(L) = b | ||A(L)-b||22 | A, b | L | L rank-r | Yes |
| Kernel Factorizations | K = ΦΦT | ||K-ΦΦT||F2 | K (kernel) | Φ (features) | K ≥ 0, PSD | — |
DECOMPOSITIONS (A = L + S)

| Problem | Form | Loss D(A;L,S) | Known | Unknown | Key Constraints | Phase Trans. |
|---|---|---|---|---|---|---|
| Robust PCA | A = L + S | ||L||*+λ||S||1 | A | L, S | L low-rank, S sparse | Yes |
| Matrix Completion | PΩ(M) = PΩ(L) | ||L||* | MΩ | L | L low-rank, incoherent | Yes |
| SPCP / Noisy RPCA | A = L + S + N | ||L||*+λ||S||1+||N||F2 | A | L, S, N | L low-rank, S sparse, N noise | Yes |
Jun Lu's Taxonomy of Classical Matrix Decompositions
Figure 1: Matrix Decomposition World Map — relationships between classical decompositions
Figure 2: Matrix Decomposition World Map Under Conditions — taxonomy by matrix structure
From Jun Lu, "Matrix Decomposition and Applications" (arXiv:2201.00145)
Spectral Clustering & K-Means
$\underbrace{A}_{\text{known}} = \underbrace{D}_{\text{unknown}} \underbrace{X}_{\text{unknown}}$ s.t. $XX^T = I$, $X_{ij} \in \{0,1\}$ (Phase Transition)
Matrix factorization view: Given data A, find unknown centroids D and assignment matrix X. The constraint XXT = I enforces that each data point belongs to exactly one cluster, while Xij ∈ {0,1} makes assignments binary. K-Median uses L1 loss instead of L2. Spectral methods relax the binary constraint and partition via graph Laplacian eigenvectors.
Operators: Random walk, Adjacency, Laplacian, Modularity, Non-Backtracking
K-means/K-median: Empirical probability of integrality for SDP/LP relaxations. Figure from: Awasthi, Bandeira, Charikar, Krishnaswamy, Villar, Ward — "Relax, no need to round: integrality of clustering formulations" (2015)
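To make the factorization view concrete, here is a minimal NumPy sketch (not from the original page; function and variable names are mine) of Lloyd's algorithm written as alternating minimization of ||A - DX||F2 over centroids D and a binary assignment matrix X:

```python
import numpy as np

def kmeans_factorization(A, r, n_iter=50, seed=0):
    """Alternating minimization of ||A - D X||_F^2 with binary assignments X.

    A : (m, n) data matrix, one data point per column.
    D : (m, r) centroids (unknown); X : (r, n) one-hot assignments (unknown).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    D = A[:, rng.choice(n, size=r, replace=False)]        # initialize centroids from the data
    for _ in range(n_iter):
        # Assignment step: each column of A picks its nearest centroid.
        d2 = ((A[:, None, :] - D[:, :, None]) ** 2).sum(axis=0)   # (r, n) squared distances
        labels = d2.argmin(axis=0)
        X = np.zeros((r, n))
        X[labels, np.arange(n)] = 1.0                      # binary, exactly one 1 per column
        # Update step: D = A X^T (X X^T)^{-1}, i.e. the cluster means.
        counts = X.sum(axis=1)
        nonempty = counts > 0
        D[:, nonempty] = (A @ X.T)[:, nonempty] / counts[nonempty]
    return D, X

A = np.random.default_rng(1).normal(size=(2, 300))
D, X = kmeans_factorization(A, r=3)
print("reconstruction error ||A - DX||_F:", np.linalg.norm(A - D @ X))
```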
Subspace Clustering
$\underbrace{A}_{\text{known}} = \underbrace{A}_{\text{known}} \underbrace{X}_{\text{unknown}}$ s.t. $\text{diag}(X) = 0$, $X$ is sparse and/or low-rank
Matrix factorization view: Given data matrix A, find unknown coefficient matrix X where each column xi expresses data point ai as linear combination of other points. Constraint diag(X) = 0 prevents trivial self-representation. Additional regularization: ||X||1 (SSC), ||X||* (LRR), or ||X||F (LSR).
Subspace clustering criteria satisfying EBD conditions. Figure from: Elhamifar, Vidal — "Sparse Subspace Clustering: Algorithm, Theory, and Applications" (2013)
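As an illustration of the self-representation step, the following sketch solves the SSC-style lasso problem column by column with a hand-rolled ISTA loop; the regularization weight lam and the toy data are arbitrary choices of mine, not prescriptions from the papers cited above:

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ssc_coefficients(A, lam=0.1, n_iter=500):
    """Sparse self-representation A ≈ A X with diag(X) = 0, solved column by column
    with ISTA on 0.5 ||a_i - A x||_2^2 + lam ||x||_1."""
    m, n = A.shape
    X = np.zeros((n, n))
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the smooth part
    for i in range(n):
        x = np.zeros(n)
        for _ in range(n_iter):
            grad = A.T @ (A @ x - A[:, i])
            x = soft(x - grad / L, lam / L)
            x[i] = 0.0                            # enforce diag(X) = 0
        X[:, i] = x
    return X

rng = np.random.default_rng(0)
# Two 1-D subspaces (lines) in R^3, 20 points on each.
A = np.hstack([np.outer(rng.normal(size=3), rng.normal(size=20)),
               np.outer(rng.normal(size=3), rng.normal(size=20))])
X = ssc_coefficients(A)
print("fraction of representation weight from the same subspace:",
      np.abs(X[:20, :20]).sum() / np.abs(X[:, :20]).sum())
```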
Non-Negative Matrix Factorization (NMF)
$\min_{D,X} \|\underbrace{A}_{\text{known}} - \underbrace{D}_{\text{unknown}} \underbrace{X}_{\text{unknown}}\|_F^2$ s.t. $D \in \mathbb{R}^{m \times r}_+$, $X \in \mathbb{R}^{r \times n}_+$, i.e. $D, X \geq 0$
Matrix factorization view: Given non-negative data matrix A, find unknown dictionary D and coefficients X, both constrained to be element-wise non-negative. The rank r controls the number of parts/topics. Non-negativity yields interpretable, parts-based representations.
Deep NMF (2018+): Multiple layers of factorization discover hierarchical structure. Neural NMF uses backpropagation for end-to-end training.
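A minimal sketch of the classical Lee-Seung multiplicative updates for the squared Frobenius loss; the problem size, rank, and iteration count below are illustrative choices, not recommendations:

```python
import numpy as np

def nmf_multiplicative(A, r, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for min ||A - D X||_F^2 with D, X >= 0."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    D = rng.random((m, r)) + eps
    X = rng.random((r, n)) + eps
    for _ in range(n_iter):
        X *= (D.T @ A) / (D.T @ D @ X + eps)      # multiplicative update keeps X >= 0
        D *= (A @ X.T) / (D @ X @ X.T + eps)      # multiplicative update keeps D >= 0
    return D, X

A = np.abs(np.random.default_rng(1).normal(size=(40, 60)))
D, X = nmf_multiplicative(A, r=5)
print("relative error:", np.linalg.norm(A - D @ X) / np.linalg.norm(A))
```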
Matrix Completion
$\min_L \text{rank}(\underbrace{L}_{\text{unknown}})$ s.t. $P_{\underbrace{\Omega}_{\text{known}}}(L) = P_\Omega(\underbrace{M}_{\text{known}})$, $L$ is low-rank (Phase Transition)
Matrix factorization view: Given partial observations MΩ at index set Ω, recover unknown low-rank matrix L. Convex relaxation: min ||L||* (nuclear norm). Phase transition at m ≈ O(nr log n) samples for rank-r recovery. Made famous by the Netflix Prize.
Graph-Based (2019+): GNN-based inductive methods generalize to unseen users/items and even transfer across datasets.
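Below is a SoftImpute-style sketch of the nuclear-norm relaxation: iteratively impute the missing entries with the current estimate and soft-threshold the singular values. The threshold tau, sampling rate, and sizes are illustrative, not tuned values:

```python
import numpy as np

def soft_impute(M_obs, mask, tau=1.0, n_iter=200):
    """Proximal-gradient sketch for nuclear-norm matrix completion (SoftImpute-style):
    fill the missing entries with the current estimate, then soft-threshold the
    singular values by tau."""
    L = np.zeros_like(M_obs)
    for _ in range(n_iter):
        filled = mask * M_obs + (1 - mask) * L     # keep observed entries, impute the rest
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - tau, 0.0)               # singular value soft-thresholding
        L = (U * s) @ Vt
    return L

rng = np.random.default_rng(0)
n, r = 60, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))    # ground-truth low-rank matrix
mask = (rng.random((n, n)) < 0.4).astype(float)          # observe ~40% of the entries
L = soft_impute(mask * M, mask)
print("relative recovery error:", np.linalg.norm(L - M) / np.linalg.norm(M))
```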
Robust PCA & Stable PCA
$\underbrace{A}_{\text{known}} = \underbrace{L}_{\text{unknown}} + \underbrace{S}_{\text{unknown}}$, solve $\min_{L,S} \|L\|_* + \lambda\|S\|_1$, $L$ is low-rank, $S$ is sparse (Phase Transition)
Matrix factorization view: Given observed matrix A, decompose into unknown low-rank component L (rank(L) ≤ r) and unknown sparse component S (||S||0 ≤ s). The nuclear norm ||L||* is a convex proxy for rank; the L1 norm is the proxy for sparsity. Phase transition: recovery succeeds when rank(L) ≤ ρn and ||S||0 ≤ αn².
Streaming (2015+): Nearly-linear time and single-pass algorithms now available. GRASTA tracks non-stationary subspaces online.
RPCA success rates comparison. Figure from: Parker, Schniter, Cevher — "Bilinear Generalized Approximate Message Passing" (2014)
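For reference, a compact inexact-ALM / ADMM sketch of Principal Component Pursuit with the usual default λ = 1/√max(m,n); the μ heuristic follows common practice and is an assumption, not the only valid choice:

```python
import numpy as np

def shrink(V, t):
    """Elementwise soft-thresholding."""
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def svt(M, t):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def rpca_pcp(A, lam=None, mu=None, n_iter=300):
    """Inexact-ALM / ADMM sketch for Principal Component Pursuit:
    min ||L||_* + lam ||S||_1  s.t.  A = L + S."""
    m, n = A.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else (m * n) / (4.0 * np.abs(A).sum())
    L, S, Y = np.zeros_like(A), np.zeros_like(A), np.zeros_like(A)
    for _ in range(n_iter):
        L = svt(A - S + Y / mu, 1.0 / mu)
        S = shrink(A - L + Y / mu, lam / mu)
        Y = Y + mu * (A - L - S)                   # dual ascent on the constraint A = L + S
    return L, S

rng = np.random.default_rng(0)
L0 = rng.normal(size=(80, 5)) @ rng.normal(size=(5, 80))                   # low-rank part
S0 = (rng.random((80, 80)) < 0.05) * rng.normal(scale=10, size=(80, 80))   # sparse outliers
L, S = rpca_pcp(L0 + S0)
print("low-rank recovery error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```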
Sparse PCA
$\max_v \underbrace{v}_{\text{unknown}}^T \underbrace{A}_{\text{known}}^T A v$ s.t. $\|v\|_2 = 1$, $\|v\|_0 \leq k$, $v$ is sparse (Phase Transition)
Matrix factorization view: Given data matrix A, find principal components v with at most k non-zero entries. Sparsity constraint ||v||0 ≤ k yields interpretable loadings. NP-hard; convex relaxation via SDP or L1 penalty. Computational-statistical gap: info-theoretic recovery possible below algorithmic threshold.
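One practical heuristic can be sketched in a few lines: the truncated power method, i.e. power iteration on ATA with hard truncation to the k largest-magnitude entries. The planted-spike test problem below is synthetic and illustrative only:

```python
import numpy as np

def truncated_power_iteration(A, k, n_iter=100, seed=0):
    """Truncated power method sketch for sparse PCA: power iteration on A^T A,
    keeping only the k largest-magnitude entries at every step."""
    rng = np.random.default_rng(seed)
    C = A.T @ A                                  # covariance-like matrix
    v = rng.normal(size=C.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = C @ v
        small = np.argsort(np.abs(v))[:-k]       # indices of all but the k largest entries
        v[small] = 0.0                           # enforce ||v||_0 <= k
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(1)
truth = np.zeros(50)
truth[:5] = 1 / np.sqrt(5)                                        # planted 5-sparse direction
A = rng.normal(size=(200, 50)) + 3.0 * rng.normal(size=(200, 1)) * truth
v = truncated_power_iteration(A, k=5)
print("overlap with the planted direction:", abs(v @ truth))
```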
Dictionary Learning
$\min_{D,X} \|\underbrace{A}_{\text{known}} - \underbrace{D}_{\text{unknown}} \underbrace{X}_{\text{unknown}}\|_F^2 + \lambda\|X\|_1$ s.t. $\|d_j\|_2 = 1$, $X$ is sparse (Phase Transition)
Matrix factorization view: Given data matrix A, learn unknown dictionary D (columns normalized ||dj||2 = 1) and unknown sparse codes X (||xi||0 ≤ k). Dictionary can be overcomplete (more atoms than dimensions). Alternates between sparse coding (fix D, solve for X) and dictionary update.
Dictionary learning support recovery phase transition. Figure from: Sulam, Papyan, Romano, Elad — "Learning Simple Thresholded Features with Sparse Support Recovery" (2018)
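The alternating structure can be sketched directly in NumPy: ISTA for the sparse codes, then a MOD-style least-squares dictionary update with column renormalization. All hyperparameters here are arbitrary illustrative choices:

```python
import numpy as np

def soft(V, t):
    """Elementwise soft-thresholding."""
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def dictionary_learning(A, r, lam=0.1, n_outer=30, n_ista=50, seed=0):
    """Alternating sketch: ISTA for the sparse codes X, then a MOD-style
    least-squares dictionary update followed by column renormalization."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    D = rng.normal(size=(m, r))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((r, n))
    for _ in range(n_outer):
        # Sparse coding step: min_X 0.5 ||A - D X||_F^2 + lam ||X||_1 via ISTA.
        L = np.linalg.norm(D, 2) ** 2
        for _ in range(n_ista):
            X = soft(X - D.T @ (D @ X - A) / L, lam / L)
        # Dictionary update step (MOD): D = A X^T (X X^T + eps I)^{-1}, then normalize columns.
        D = A @ X.T @ np.linalg.inv(X @ X.T + 1e-6 * np.eye(r))
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, X

rng = np.random.default_rng(1)
D_true = rng.normal(size=(20, 30))
D_true /= np.linalg.norm(D_true, axis=0)
X_true = (rng.random((30, 500)) < 0.1) * rng.normal(size=(30, 500))   # ~10%-dense sparse codes
A = D_true @ X_true
D, X = dictionary_learning(A, r=30)
print("relative reconstruction error:", np.linalg.norm(A - D @ X) / np.linalg.norm(A))
```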
MMV & Compressive Sensing
$\underbrace{Y}_{\text{known}} = \underbrace{A}_{\text{known}} \underbrace{X}_{\text{unknown}}$ s.t. $\|X\|_{\text{row},0} \leq k$ (joint row-sparsity) Phase TransitionMatrix factorization view: Given measurements Y and known sensing matrix A, recover unknown signal matrix X with at most k non-zero rows (joint sparsity across L measurement vectors). Donoho-Tanner phase transition: sharp boundary in (m/n, k/n) plane. Multiple measurements improve recovery over single-vector case (SMV).
Probabilistic reconstruction. From: Krzakala, Mézard, Sausset, Sun, Zdeborová (2012)
SCoSaMP. From: Blanchard, Tanner (2012)
CGIHT/NIHT. From: Blanchard, Tanner, Wei (2015)
Permutation recovery. From: Pananjady, Wainwright, Courtade (2016)
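A minimal sketch of one classic greedy MMV solver, Simultaneous Orthogonal Matching Pursuit (SOMP): pick the atom most correlated with the current residual across all measurement vectors, then refit by least squares. The toy dimensions and sparsity level below are illustrative:

```python
import numpy as np

def somp(Y, A, k):
    """Simultaneous Orthogonal Matching Pursuit sketch: greedily pick the atom of A
    most correlated (in aggregate) with the residual, then refit by least squares."""
    support, R = [], Y.copy()
    for _ in range(k):
        scores = np.linalg.norm(A.T @ R, axis=1)      # aggregate correlation per atom
        scores[support] = -np.inf                     # never pick the same atom twice
        support.append(int(np.argmax(scores)))
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_s                   # new residual on the current support
    X = np.zeros((A.shape[1], Y.shape[1]))
    X[support] = X_s
    return X

rng = np.random.default_rng(0)
m, n, k, L = 40, 100, 8, 5                            # measurements, ambient dim, row-sparsity, #vectors
A = rng.normal(size=(m, n)) / np.sqrt(m)
X_true = np.zeros((n, L))
X_true[rng.choice(n, k, replace=False)] = rng.normal(size=(k, L))
Y = A @ X_true
X = somp(Y, A, k)
print("support recovered:",
      set(np.flatnonzero(np.linalg.norm(X, axis=1))) ==
      set(np.flatnonzero(np.linalg.norm(X_true, axis=1))))
```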
Blind Source Separation / ICA
$\underbrace{Y}_{\text{known}} = \underbrace{A}_{\text{unknown}} \underbrace{X}_{\text{unknown}}$ s.t. rows of $X$ are independent
Matrix factorization view: Given mixed observations Y, recover unknown mixing matrix A and unknown source signals X. Key constraint: rows of X are statistically independent (ICA) or sparse (SCA). Non-Gaussianity of sources enables identification up to permutation and scaling.
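For illustration, a symmetric FastICA sketch with the tanh contrast applied to two toy sources; the whitening step and fixed-point update follow the standard textbook recipe, while the test signals and all variable names are my own choices:

```python
import numpy as np

def fastica(Y, n_components, n_iter=200, seed=0):
    """Symmetric FastICA sketch with the tanh nonlinearity: whiten Y, then iterate the
    fixed-point update and re-orthogonalize the unmixing matrix."""
    rng = np.random.default_rng(seed)
    Y = Y - Y.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Y))              # whitening: Yw has identity covariance
    Yw = np.diag(d ** -0.5) @ E.T @ Y
    n = Yw.shape[1]
    W = rng.normal(size=(n_components, n_components))
    for _ in range(n_iter):
        G = np.tanh(W @ Yw)
        W_new = G @ Yw.T / n - np.diag((1 - G ** 2).mean(axis=1)) @ W   # fixed-point update
        U, _, Vt = np.linalg.svd(W_new)
        W = U @ Vt                                 # symmetric decorrelation
    return W @ Yw                                  # sources, up to permutation and scale

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)
X_true = np.vstack([np.sign(np.sin(3 * t)), np.sin(5 * t)])   # two non-Gaussian sources
A_mix = rng.normal(size=(2, 2))                               # unknown mixing matrix
S = fastica(A_mix @ X_true, n_components=2)
C = np.corrcoef(np.vstack([S, X_true]))[:2, 2:]
print("max |correlation| of each estimated source with a true source:", np.abs(C).max(axis=1))
```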
Linear & Non-Linear Autoencoders
$\underbrace{X}_{\text{known}} \approx \underbrace{D}_{\text{unknown}}(\underbrace{E}_{\text{unknown}}(X))$, linear case $\equiv$ PCA
Matrix factorization view: Given data X, learn encoder E: Rd→Rk and decoder D: Rk→Rd minimizing ||X - D(E(X))||2. Linear case: when E, D are linear maps (matrices), optimal solution recovers PCA—the encoder projects onto top-k principal components. Non-linear: E = f(VX), D = g(WZ) with nonlinearities f, g learn curved manifolds.
Key insight (Baldi & Hornik 1989): Linear autoencoders with bottleneck dimension k learn the subspace spanned by top-k principal components. Deep linear networks still recover PCA but with different learning dynamics.
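The PCA connection can be checked numerically. The sketch below trains a linear autoencoder by plain gradient descent (the data, learning rate, and iteration count are ad hoc assumptions of mine) and compares the learned decoder subspace with the top-k principal subspace via principal angles:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 20, 3
# Data with 3 dominant directions (the first 3 coordinates carry most of the variance).
X = rng.normal(size=(n, d)) @ np.diag(np.r_[10 * np.ones(k), np.ones(d - k)])
X = X - X.mean(axis=0)

# Linear autoencoder x -> W_dec (W_enc x), trained by gradient descent on the
# mean squared reconstruction error.
W_enc = 0.01 * rng.normal(size=(k, d))
W_dec = 0.01 * rng.normal(size=(d, k))
lr = 1e-4
for _ in range(5000):
    Z = X @ W_enc.T                       # encode: (n, k)
    R = Z @ W_dec.T - X                   # reconstruction residual: (n, d)
    g_dec = R.T @ Z / n                   # gradient w.r.t. W_dec
    g_enc = W_dec.T @ R.T @ X / n         # gradient w.r.t. W_enc
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# Compare the learned decoder subspace with the top-k PCA subspace via principal angles.
V_pca = np.linalg.svd(X, full_matrices=False)[2][:k].T        # (d, k) principal directions
Q, _ = np.linalg.qr(W_dec)
print("cosines of principal angles (close to 1 => same subspace):",
      np.linalg.svd(V_pca.T @ Q, compute_uv=False))
```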
LoRA & Low-Rank Adaptation (LLMs)
$W' = \underbrace{W}_{\text{frozen}} + \underbrace{B}_{\text{unknown}} \underbrace{A}_{\text{unknown}}$ s.t. $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, $r \ll \min(d,k)$
Matrix factorization view: Given frozen pretrained weight W, learn unknown low-rank update ΔW = BA where B∈Rd×r and A∈Rr×k are trainable. Rank r controls parameter count: O(r(d+k)) vs O(dk). Typical r = 4–64 for LLMs with d, k ~ 10^4.
Key insight: Pretrained weight updates during fine-tuning have low intrinsic rank. LoRA exploits this structure to achieve full fine-tuning quality with <1% trainable parameters.
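A minimal NumPy sketch of the adapted forward pass; the zero initialization of B and the alpha/r scaling follow the LoRA paper, while the dimensions and variable names below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 1024, 1024, 8

W = rng.normal(size=(d, k)) / np.sqrt(k)      # frozen pretrained weight
B = np.zeros((d, r))                          # trainable, zero-initialized so that ΔW = 0 at start
A = rng.normal(size=(r, k)) / np.sqrt(k)      # trainable

def lora_forward(x, alpha=16.0):
    """Adapted layer: y = W x + (alpha / r) * B (A x); only B and A would receive gradients."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=k)
y = lora_forward(x)
print("full-weight params:", W.size,
      "| LoRA params:", B.size + A.size,
      "| ratio:", (B.size + A.size) / W.size)
```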
Tensor Decomposition & Tensor Networks
CP: $\underbrace{\mathcal{T}}_{\text{known}} = \sum_r \underbrace{a_r \otimes b_r \otimes c_r}_{\text{unknown}}$; Tucker: $\mathcal{T} = \underbrace{G}_{\text{unk.}} \times_1 \underbrace{U}_{\text{unk.}} \times_2 \underbrace{V}_{\text{unk.}} \times_3 \underbrace{W}_{\text{unk.}}$
Tensor factorization view: Given observed tensor T, decompose into unknown factors. CP: sum of R rank-1 tensors (outer products). Tucker: core tensor G and unknown factor matrices U, V, W. Tensor Train: chain of 3-way cores Gk, storage O(dnr²) vs O(n^d) for the full d-way tensor. All factors unknown.
Tensor Networks (2011+): Tensor Train (TT) achieves O(dnr²) storage. Tensor Ring (TR, 2016) adds circular structure for permutation invariance. Tensor Wheel (2024) balances TR and fully-connected networks.
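A compact CP-ALS sketch for a 3-way tensor, using explicit unfoldings and the Khatri-Rao product; it makes no attempt at the safeguards (normalization, line search, random restarts) a production implementation would use, and the toy sizes are arbitrary:

```python
import numpy as np

def khatri_rao(U, V):
    """Columnwise Kronecker product: column r is kron(U[:, r], V[:, r])."""
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def cp_als(T, R, n_iter=100, seed=0):
    """Alternating least squares sketch for a rank-R CP decomposition of a 3-way tensor T."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.normal(size=(I, R)); B = rng.normal(size=(J, R)); C = rng.normal(size=(K, R))
    X1 = T.transpose(0, 2, 1).reshape(I, -1)   # mode-1 unfolding, columns indexed by (k, j)
    X2 = T.transpose(1, 2, 0).reshape(J, -1)   # mode-2 unfolding, columns indexed by (k, i)
    X3 = T.transpose(2, 1, 0).reshape(K, -1)   # mode-3 unfolding, columns indexed by (j, i)
    for _ in range(n_iter):
        A = X1 @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        B = X2 @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        C = X3 @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
    return A, B, C

rng = np.random.default_rng(1)
R = 3
A0, B0, C0 = rng.normal(size=(8, R)), rng.normal(size=(9, R)), rng.normal(size=(10, R))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)      # ground-truth rank-3 tensor
A, B, C = cp_als(T, R)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error:", np.linalg.norm(T_hat - T) / np.linalg.norm(T))
```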
Recommender Systems & Neural MF
$\min_{U,V} \sum_{(i,j) \in \Omega} (\underbrace{R_{ij}}_{\text{known}} - \underbrace{u_i}_{\text{unk.}}^T \underbrace{v_j}_{\text{unk.}})^2 + \lambda(\|U\|^2 + \|V\|^2)$
Matrix factorization view: Given sparse rating matrix R (observed at Ω), learn unknown user embeddings U∈Rm×k and item embeddings V∈Rn×k. Predict Rij ≈ uiTvj. Regularization prevents overfitting. Implicit feedback: weighted by confidence cij.
NCF Debate (2020): Properly tuned dot product often beats learned MLP similarities. Simple baselines remain competitive.
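An alternating least squares sketch for the objective above, looping over users and items and solving small ridge-regression systems; the synthetic rating matrix and hyperparameters are placeholders of my own:

```python
import numpy as np

def als_recommender(R, mask, k=5, lam=0.1, n_iter=20, seed=0):
    """Alternating least squares sketch for
    min sum_{(i,j) in Omega} (R_ij - u_i^T v_j)^2 + lam (||U||^2 + ||V||^2)."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = 0.1 * rng.normal(size=(m, k))
    V = 0.1 * rng.normal(size=(n, k))
    for _ in range(n_iter):
        for i in range(m):                           # each user factor is a small ridge regression
            idx = np.flatnonzero(mask[i])
            Vi = V[idx]
            U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(k), Vi.T @ R[i, idx])
        for j in range(n):                           # each item factor likewise
            idx = np.flatnonzero(mask[:, j])
            Uj = U[idx]
            V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(k), Uj.T @ R[idx, j])
    return U, V

rng = np.random.default_rng(0)
m, n, k = 100, 80, 5
R_full = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))    # dense ground-truth ratings
mask = rng.random((m, n)) < 0.3                               # ~30% of entries observed
U, V = als_recommender(R_full * mask, mask, k=k)
err = np.linalg.norm((U @ V.T - R_full)[~mask]) / np.linalg.norm(R_full[~mask])
print("relative error on held-out entries:", err)
```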
Graph Matching
$\underbrace{A}_{\text{known}} = \underbrace{X}_{\text{unknown}} \underbrace{B}_{\text{unknown}} X^T$ s.t. $X$ is a permutation matrix (Phase Transition)
Matrix factorization view: Given adjacency matrix A of a graph, find unknown permutation matrix X and unknown structure B such that A = XBXT. Key constraint: X is a permutation matrix (binary, doubly stochastic). Used for matching nodes across two graphs or recovering hidden structure.
Generalized Matrix Factorization
$\underbrace{W}_{\text{known}} \odot \underbrace{L}_{\text{unknown}} = W \odot \underbrace{U}_{\text{unk.}} \underbrace{V}_{\text{unk.}}^T$, minimize $\text{rank}(L)$
Matrix factorization view: Given mask W indicating observed entries, find low-rank matrix L = UVT matching observations. Generalizes matrix completion (binary W) to weighted observations. The Hadamard product W⊙L selects which entries matter.
Archetypal Analysis
$\underbrace{A}_{\text{known}} = \underbrace{D}_{\text{unknown}} \underbrace{X}_{\text{unknown}}$, $D = A\underbrace{B}_{\text{unknown}}$ s.t. $X \geq 0$, $B \geq 0$
Matrix factorization view: Given data A, find archetypes D that lie on the convex hull of the data (D = AB with B ≥ 0, columns sum to 1), and coefficients X representing each data point as convex combination of archetypes. Unlike NMF, archetypes are extreme points of the data itself.
Binary Matrix Factorization
$\underbrace{D}_{\text{known}} = \underbrace{T}_{\text{unknown}} \underbrace{A}_{\text{unknown}}$ s.t. $T \in \{0,1\}^{m \times r}$
Matrix factorization view: Given data matrix D∈Rm×n, find unknown binary factor T∈{0,1}m×r and coefficient matrix A∈Rr×n such that D = TA. Optional constraint: AT·1r = 1n (columns sum to 1 for convex combinations). Unlike Boolean Matrix Factorization, A is real-valued.
Key insight (Slawski et al. 2013): Despite the combinatorial constraint T∈{0,1}m×r, the problem is tractable when r is small. The algorithm finds all binary vertices of aff(D) using the Littlewood-Offord lemma, achieving O(m·2r-1) complexity.
Applications: DNA methylation unmixing (T = methylation profiles per cell type, A = mixing proportions), binary classification ensembles, topic modeling with hard assignments.
Matrix Compressive Sensing (MCS)
$\underbrace{\mathcal{A}}_{\text{known}}(\underbrace{L}_{\text{unknown}}) = \underbrace{b}_{\text{known}}$ s.t. $L$ is rank-$r$ (Phase Transition)
Matrix factorization view: Given linear measurements b = A(L) of unknown low-rank matrix L, recover L. Extends compressive sensing from vectors to matrices. Phase transition: O(r(m+n)) measurements suffice for rank-r recovery. RIP for matrices analogous to vector case.
SPCP / Noisy Robust PCA
$\underbrace{A}_{\text{known}} = \underbrace{L}_{\text{unknown}} + \underbrace{S}_{\text{unknown}} + \underbrace{N}_{\text{unknown}}$ s.t. $L$ is low-rank, $S$ is sparse, $N$ is noise (Phase Transition)
Matrix factorization view: Given noisy observations A, decompose into unknown low-rank L, unknown sparse S, and unknown dense noise N. Extends Robust PCA to handle small dense perturbations. Stable Principal Component Pursuit (SPCP) solves: min ||L||* + λ||S||1 s.t. ||A - L - S||F ≤ ε.
Kernel Factorizations
$\underbrace{K}_{\text{known}} = \underbrace{\Phi}_{\text{unknown}} \Phi^T$ s.t. $K$ is PSD
Matrix factorization view: Given kernel matrix K (positive semi-definite), find feature representation Φ such that Kij = φ(xi)Tφ(xj). Kernel PCA, Nyström approximation, and random features all exploit this factorization. Low-rank kernel approximations enable scalable kernel methods.
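A Nyström sketch of K ≈ ΦΦT built from m randomly chosen landmark points; the RBF kernel, bandwidth, and landmark count are arbitrary choices for illustration:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.2):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, m, gamma=0.2, seed=0):
    """Nyström sketch: K ≈ Phi Phi^T with Phi = K_{n,m} C^{-1/2},
    where C is the kernel on m randomly chosen landmark points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X[idx], X[idx], gamma)              # m x m landmark block
    Knm = rbf_kernel(X, X[idx], gamma)                 # n x m cross block
    w, Q = np.linalg.eigh(C + 1e-10 * np.eye(m))       # C^{-1/2} via eigendecomposition
    return Knm @ Q @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ Q.T

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
K = rbf_kernel(X, X)
Phi = nystrom_features(X, m=100)
print("relative approximation error:", np.linalg.norm(K - Phi @ Phi.T) / np.linalg.norm(K))
```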
Non-Convex Optimization & Implicit Regularization
$\min_{U,V} \|\underbrace{U}_{\text{unknown}} \underbrace{V}_{\text{unknown}}^T - \underbrace{M}_{\text{known}}\|_F^2$ s.t. $U \in \mathbb{R}^{m \times r}$, $V \in \mathbb{R}^{n \times r}$ (Phase Transition)
Matrix factorization view: Given observed matrix M, find unknown factors U, V such that UVT approximates M. The factorization implicitly constrains rank(UVT) ≤ r. Despite non-convexity, all local minima are global when r is large enough.
Key result (Gunasekar et al. 2017): GD on matrix factorization with small init implicitly minimizes nuclear norm. Deep factorization amplifies this bias.
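A rough NumPy reproduction of that style of experiment, under assumptions of my own choosing (Gaussian rank-2 ground truth, roughly 30% observed entries, small initialization, plain gradient descent); it is a sketch, not the referenced paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r_true, r = 50, 2, 50                      # fully overparameterized factors: r = n
M = rng.normal(size=(n, r_true)) @ rng.normal(size=(r_true, n))
mask = rng.random((n, n)) < 0.3               # observe ~30% of the entries

# Gradient descent on 0.5 ||mask * (U V^T - M)||_F^2 from a *small* random initialization.
U = 1e-4 * rng.normal(size=(n, r))
V = 1e-4 * rng.normal(size=(n, r))
lr = 0.005
for _ in range(20000):
    R = mask * (U @ V.T - M)                  # residual on the observed entries only
    U, V = U - lr * R @ V, V - lr * R.T @ U
s = np.linalg.svd(U @ V.T, compute_uv=False)
print("effective rank (singular values > 1% of the largest):", int((s > 1e-2 * s[0]).sum()))
print("relative error on unobserved entries:",
      np.linalg.norm((U @ V.T - M)[~mask]) / np.linalg.norm(M[~mask]))
```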
Randomized Algorithms & Sketching (RandNLA)
$\tilde{A} = \Pi A$ where $\Pi \in \mathbb{R}^{k \times m}$, $k \ll m$
Matrix factorization view: Randomized numerical linear algebra uses random projections and sketching to compute approximate matrix factorizations efficiently. Key techniques include randomized SVD, Johnson-Lindenstrauss transforms, CountSketch, and streaming algorithms like Frequent Directions. These methods trade exact computation for massive speedups while providing provable approximation guarantees.
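A minimal randomized SVD sketch in the Halko-Martinsson-Tropp style, with oversampling and a couple of power iterations; the parameters and the test matrix are illustrative defaults:

```python
import numpy as np

def randomized_svd(A, k, p=10, q=2, seed=0):
    """Randomized SVD sketch: random range finder with oversampling p and q power
    iterations, followed by an exact SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(A.shape[1], k + p))      # random test matrix
    Y = A @ Omega
    for _ in range(q):                                # power iterations sharpen the range estimate
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                            # orthonormal basis for the sampled range
    B = Q.T @ A                                       # small (k+p) x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(1)
A = rng.normal(size=(2000, 50)) @ rng.normal(size=(50, 1500))   # numerically rank-50 matrix
U, s, Vt = randomized_svd(A, k=50)
print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```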
Other & Online SVD
Boolean factorization, alignment (RASL/TILT), streaming decompositions, and other specialized methods.
Modern Themes (2015–2024)
Contemporary research directions that cut across classical categories, reflecting the field's evolution toward neural network integration, implicit regularization, and computational-statistical trade-offs.
Computational-Statistical Gaps
(Phase Transition)
The "hard phase" regime where recovery is information-theoretically possible but no polynomial-time algorithm is known. Understanding these gaps is central to modern high-dimensional statistics.
Key example: In sparse PCA, the spiked covariance model exhibits a gap: spectral methods require signal strength Θ(n^(1/4)) but information-theoretic limit is Θ(1/√n).
Approximate Message Passing (AMP)
(Phase Transition)
Iterative algorithms with exact asymptotic characterization via state evolution. Predict sharp phase transitions for low-rank recovery, compressed sensing, and community detection.
State Evolution: In high-dimensional limit, AMP error follows deterministic recursion. Correctly predicts success/failure boundary.
AMP phase diagram: density vs noise. Figure from: Donoho, Maleki, Montanari — "Message passing algorithms for compressed sensing" (2009)
Randomized Algorithms & Sketching (RandNLA)
Use random projections to reduce massive matrices into tractable smaller problems while preserving essential structure. Sketching enables single-pass streaming algorithms.
Frameworks & Platforms
Comprehensive software packages integrating multiple factorization methods. GPU acceleration now mainstream via RAPIDS and PyTorch backends.
Preconditioners & Fast Multipole Methods
(Direct solvers / FMM)
Fast multipole methods (FMM) can be viewed as hierarchical matrix factorizations. Exploiting low-rank structure of off-diagonal blocks enables O(N) or O(N log N) complexity for dense matrix-vector products and direct solvers. H-matrices, H²-matrices, and butterfly factorizations provide fast approximate inverses and preconditioners.
Matrix Factorization View: The multipole expansion A ≈ B̃C decomposes dense kernel matrices into structured low-rank products. Quadtree/octree decompositions yield hierarchical factorizations with sparse + low-rank blocks.
H²-matrix decomposition: A = S + UKVT (sparse + low-rank). Quadtree structure generalizes to higher dimensions.
Multipole expansion as matrix factorization: A ≈ B̃C. Figure from: Martinsson — FMM Tutorial
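The low-rank structure of well-separated interactions is easy to verify numerically. The sketch below builds the off-diagonal block of a 1/|x−y| kernel between two separated point clusters and reports its numerical rank; the geometry and tolerance are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 400))      # source points in [0, 1]
y = np.sort(rng.uniform(3.0, 4.0, 400))      # target points in [3, 4], well separated from the sources

# Off-diagonal block of the kernel matrix K_ij = 1 / |y_i - x_j|.
K = 1.0 / np.abs(y[:, None] - x[None, :])

# Numerical rank of the block: number of singular values above a relative tolerance.
s = np.linalg.svd(K, compute_uv=False)
rank = int((s > 1e-10 * s[0]).sum())
print("block size:", K.shape, "| numerical rank at tol 1e-10:", rank)

# A truncated SVD of that rank already reproduces the block to near the tolerance.
U, s, Vt = np.linalg.svd(K)
K_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]
print("relative error of the rank-%d factorization:" % rank,
      np.linalg.norm(K - K_r) / np.linalg.norm(K))
```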
Essential References
- Matrix Computations
- Convex Optimization
- Topics in Matrix Analysis
- NMF & Tensor Factorizations
- Matrix Manifolds
- Convex Opt. in SP
- Optimization for ML
- LAPACK Guide
Phase Transition Atlas (2020s Framing)
Modern papers separate information-theoretic limits from algorithmic limits, and often explain gaps with spectral thresholds or state-evolution analysis.