Scalar Impossibility in Multi-Pillar Complexity Measures: Why a Single Number Cannot Capture Algorithmic, Information-Theoretic, Dynamical, and Geometric Complexity
Oksana Sudoma
ORCID: 0009-0009-8469-1382
October 25, 2025
Abstract
We prove an impossibility theorem for universal scalar complexity. Let Sys denote the category of classical Markov and quantum CPTP morphisms. No scalar C on Sys can be a resource monotone (following quantum resource theory conventions [8, 9], a functional that does not increase under free operations, here the morphisms of Sys) while being simultaneously monotone with respect to operations from four distinct complexity families (algorithmic, information-theoretic, dynamical, geometric). The proof constructs a Sys-internal cycle containing one non-invertible noise step, which strictly lowers C, and only isomorphisms otherwise; isomorphism invariance then contradicts the net decrease.
As a companion, we show that no scalar can be simultaneously invariant on isomorphisms and non-decreasing on a set of designated, reversible “complexity-increasing” operations drawn from each pillar. This impossibility result, analogous to Arrow’s theorem in social choice, demonstrates that vectorial reporting is necessary for multi-pillar complexity assessment. More precisely, there is no single scalar functional on physical systems that simultaneously captures (i) algorithmic/circuit cost, (ii) information content and correlations, (iii) dynamical chaos and scrambling, and (iv) geometric and topological structure, while respecting natural axioms of additivity, monotonicity, continuity, and task-universality. The obstruction arises because these four complexity aspects impose incompatible partial orders: operations that increase one type of complexity need not preserve others, enabling cycles where strict improvements in each pillar return to an equivalent system. Any scalar attempting to track all four pillars is forced into logical contradiction. This extends impossibility results from social choice theory (Arrow’s theorem), optimization (no-free-lunch), and quantum resource theory to complexity quantification. As a corollary, faithful complexity measurement requires vectors, not scalars.
1 Introduction: The Complexity Ranking Problem
Complexity has a way of producing an irresistible urge for cross-domain unification.
During 2024–2025, we attempted multiple scalar formulations linking complexity to thermal gradients and information-geometric bridges, ultimately discovering fundamental circular dependencies that motivated this formal proof. (These empirical attempts included scalar complexity fields coupled to temperature gradients, Fisher information metrics over phase space, spectral density proxies, and discrete transition counting. Each formulation either introduced circular dependencies by fitting complexity to observed transitions, produced unbounded values, or conflated distinct operational pillars. While the broader research program on multi-pillar complexity theory remains ongoing, this impossibility result resolves the empirical frustration encountered in those earlier attempts.) The theoretical result establishes why those empirical attempts necessarily failed.
Consider three systems: a quantum computer with 100 qubits running Shor’s algorithm, a hurricane’s velocity field captured at hourly intervals, and a social network graph of one million users. Which is “most complex”?
The quantum computer has high algorithmic complexity (requires many gates to prepare the state) but low topological complexity (qubits arranged in a regular grid). The hurricane has high dynamical complexity (chaotic, unpredictable) but moderate algorithmic complexity (governed by relatively simple differential equations). The social network has high geometric complexity (intricate community structure) but low dynamical complexity (changes slowly over time). We can assign each system a complexity value, but how should these numbers relate?
The universal scalar dream.
Imagine having a single number C(S) that captures all aspects of complexity for any system S. This would enable direct comparisons: if C(A) > C(B), then system A is unambiguously more complex than system B. Such a measure would simplify resource estimation, optimization, and theoretical predictions across physics, information theory, and computation. The dream is seductive and intuitive.
Why we should be suspicious.
Impossibility results in related domains suggest caution. In social choice theory, Arrow’s theorem [3, 4] shows that no preference aggregation can satisfy all fairness criteria simultaneously. Our result parallels Arrow’s theorem in that both demonstrate fundamental incompatibilities between multiple ordering criteria. However, where Arrow’s theorem concerns aggregation of subjective preferences under fairness constraints (independence of irrelevant alternatives, unanimity, non-dictatorship), our result concerns objective structural properties under monotonicity requirements. The structural parallel: both show that multiple incompatible partial orders cannot be compressed into a single total order without contradiction.
In optimization, no-free-lunch theorems [36, 16] prove all algorithms have identical average performance across all problems. In multi-objective optimization [14], conflicting objectives cannot be simultaneously optimized by a single scalar without task-dependent weights. In quantum resource theory [8, 9], different choices of “free operations” lead to incompatible resource orderings, and no finite complete set of monotones exists [34]. Similarly, universal intelligence measures face Arrowian impossibilities [20].
These results share a common structure: multiple incommensurable aspects resist compression into a single number. Complexity, we will show, follows this pattern.
Four pillars of complexity.
We organize widely used complexity notions into four families, each capturing distinct structural properties:
-
Algorithmic/Circuit Complexity (C_A): How many computational steps or gates are required to prepare or describe a system? Examples include Kolmogorov complexity [21] (minimal program length) and quantum circuit depth [1] (minimal gate count).
-
Information-Theoretic Complexity (C_I): How much information is shared between components at different scales? Examples include Shannon entropy [10], multiscale/transfer entropy [27, 37], and partial information decomposition [35].
-
Dynamical Complexity (C_D): How chaotic or scrambling is the system’s temporal evolution? Classical measures include Lyapunov exponents and Kolmogorov–Sinai entropy [11, 23]. Quantum measures include out-of-time-order correlators (OTOCs) and the spectral form factor [24, 38].
-
Geometric/Topological Complexity (C_G): What is the richness of the system’s spatial or state-space structure? Examples include persistent homology [12, 7] (counting topological features at multiple scales) and fractal dimension [13].
Each pillar comes with natural operations that increase complexity in that domain: adding gates increases C_A, mixing processes increase C_I, chaotic advection increases C_D, and topological enrichment increases C_G. The question is: can a universal scalar be monotone under all such operations simultaneously?
Our result.
We prove the answer is no. We formalize natural axioms that any universal complexity scalar should satisfy (additivity, monotonicity, continuity, task-universality) and show these axioms are mutually incompatible. The impossibility arises not from measurement limitations but from fundamental structural conflict: the four pillars impose incompatible orderings on systems. You can construct cycles where each step increases complexity in its respective pillar, yet the cycle returns to the starting point, forcing any monotone scalar into logical contradiction.
Consequence: vectorial complexity is mandatory.
The result implies that faithful complexity quantification requires the vector (C_A, C_I, C_D, C_G) rather than a scalar. Task-specific scalarizations (weighted combinations) remain valid when one pillar dominates or when weights reflect problem-specific priorities, but task-universal scalars satisfying all monotonicity axioms cannot exist.
Main contributions.
-
Sys-internal no-go result: Proof uses only Markov/CPTP morphisms (no projections or forgetting operations). Geometry is an observer evaluation functional, not state structure.
-
Explicit 5-step construction: Cycle with one non-invertible noise step (strictly lowers C) and isomorphisms only otherwise (preserve C while raising pillar observers). Complete constructive proof with concrete morphisms.
-
Reproducible toy implementation: Standalone Python code (disk_annulus_cycle_toy.py) demonstrating the disk→annulus geometric isomorphism, the Arnold cat map (dynamics), noise injection, and complexity measurements. One-command execution with fixed random seed.
-
Dual-theorem presentation: Main theorem (resource-style impossibility on all Sys) complemented by operational corollary (impossibility for designated reversible increase-ops). Captures both orthodox resource theory and original “pillar-monotone” intuition.
-
Necessity of vectors: Corollary establishes that multi-pillar complexity requires vectorial reporting of (C_A, C_I, C_D, C_G), not scalars. Task-specific weighted sums remain valid; only task-universal scalars satisfying all axioms are impossible.
Roadmap.
Section 2 establishes the mathematical framework: we define the system category, complexity measures, and morphism families formally. Section 3 states axioms for a universal scalar. Section 4 proves the main impossibility theorem. Section 5 discusses consequences and practical alternatives. Appendices provide explicit verifiable examples.
2 Mathematical Framework: Formalizing Complexity
Why we need formalism.
Before proving impossibility, we must define precisely what we mean by “complexity measure,” “improvement operation,” and “system.” The following framework provides the mathematical structure needed for rigorous argumentation. Readers familiar with category theory will recognize this as a functorial approach to complexity; others can treat it as systematic bookkeeping of systems and transformations.
2.1 System Category
Definition 1 (System Category).
Let Sys be the category whose:
-
Objects are triples (X, S, ε) where:
-
X is a measurable space equipped with a measure
-
S is either a classical state space (probability measures on X) or a quantum state space (density operators on a Hilbert space H)
-
ε > 0 is a scale parameter
-
-
Morphisms are measure-preserving maps (classical) or completely positive trace-preserving (CPTP) maps (quantum)
Scale Dependence.
The scale parameter ε determines the observational resolution at which complexity is evaluated (e.g., circuit depth cutoff, correlation length scale, pixel size for images). The impossibility result holds at each fixed ε: no choice of scale can rescue a scalar complexity measure. While a complete scale-theoretic treatment (complexity families indexed by ε) is natural, we fix ε throughout for notational simplicity. The extension to parametric families is immediate but deferred to future work.
Interpretation.
This definition encompasses both classical systems (e.g., bit strings, probability distributions, dynamical systems) and quantum systems (e.g., qubit states, density matrices). The scale parameter allows complexity to depend on resolution: entropy of an image depends on pixel size, topological features of a network depend on interaction radius. Morphisms are structure-preserving transformations.
2.2 Complexity Measures
Definition 2 (Individual Complexity Measures).
For a system S at scale ε, we define normalized complexity measures:
C_A(S, ε) = K(S) / N_A(ε)    (1)
C_I(S, ε) = I(S) / N_I(ε)    (2)
C_D(S, ε) = h_KS(S) / N_D(ε)    (3)
C_G(S, ε) = PH(S, ε) / N_G(ε)    (4)
where K(S) is Kolmogorov complexity (or circuit length for quantum systems), I(S) is multi-information, h_KS(S) is Kolmogorov–Sinai entropy (or the entanglement entropy growth rate for quantum systems), and PH(S, ε) aggregates the persistent homology modules PH_k(S). Denominators are scale-dependent normalization constants chosen to ensure boundedness. (Normalization constants are scale-dependent: for n-bit strings, N_A grows with n; for Shannon entropy and KS entropy, N_I and N_D are the maximal attainable values at scale ε; for persistent homology, N_G can be taken as the total Betti number sum times the maximum feature lifetime at filtration radius ε. Remark on normalization robustness: the exact form of these normalization constants does not affect the impossibility result. Any bounded normalization ensuring values in [0, 1] suffices; different choices may alter the threshold values but cannot eliminate the fundamental contradiction, which arises for any positive thresholds.)
Definition 3 (Complexity Vector).
The complexity vector of system S at scale ε is (C_A(S, ε), C_I(S, ε), C_D(S, ε), C_G(S, ε)).
So what?
These definitions formalize the intuition that complexity has multiple facets. Normalization to [0, 1] enables comparison across scales and system types. The key question: can these four components be compressed into a single number without losing essential information?
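To make the bookkeeping concrete, here is a minimal self-contained Python sketch (our own illustration: the proxy observers, helper names, and normalizations are ours and are not the formal definitions above). Compressed length stands in for Kolmogorov complexity, mutual information between paired halves for multi-information, a one-step conditional entropy for KS entropy, and the first Betti number of an attached graph for persistent homology.

```python
# Toy complexity vector (illustrative proxies only; helper names are ours).
import math
import zlib
from collections import Counter

def H(counts):
    """Shannon entropy (bits) of a Counter of outcomes."""
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values() if c)

def n_components(n_vertices, edges):
    """Connected components via union-find (for the beta_1 proxy)."""
    parent = list(range(n_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n_vertices)})

def complexity_vector(bits, n_vertices, edges):
    # C_A proxy: compressed length / raw length (clamped to [0, 1]).
    c_a = min(1.0, len(zlib.compress(bits.encode(), 9)) / max(1, len(bits)))
    # C_I proxy: mutual information (<= 1 bit) between position-paired halves.
    half = len(bits) // 2
    pairs = Counter(zip(bits[:half], bits[half:2 * half]))
    xs = Counter(bits[:half])
    ys = Counter(bits[half:2 * half])
    c_i = max(0.0, min(1.0, H(xs) + H(ys) - H(pairs)))
    # C_D proxy: conditional entropy H(x_{t+1} | x_t) of the symbol stream (<= 1 bit).
    c_d = max(0.0, min(1.0, H(Counter(zip(bits[:-1], bits[1:]))) - H(Counter(bits[:-1]))))
    # C_G proxy: first Betti number beta_1 = E - V + #components, normalized by E.
    beta1 = len(edges) - n_vertices + n_components(n_vertices, edges)
    c_g = min(1.0, beta1 / max(1, len(edges)))
    return (round(c_a, 3), round(c_i, 3), round(c_d, 3), round(c_g, 3))

# A pseudorandom-looking string attached to a 6-vertex graph containing one cycle.
bits = "0110100110010110" * 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4), (4, 5)]
print(complexity_vector(bits, 6, edges))  # four numbers, reported jointly, never summed
```

The point of the sketch is the output format: a 4-tuple, not a scalar.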
2.3 Morphism Families and Partial Orders
What are improvement operations?
In each complexity domain, certain operations are known to increase complexity. Adding gates to a quantum circuit increases circuit complexity. Mixing processes increase entropy. Chaotic advection increases dynamical entropy. Topological enrichment adds persistent homology features. We formalize these as morphism families.
Definition 4 (Morphism Families).
A morphism family F_P (for a pillar P ∈ {A, I, D, G}) is a collection of morphisms equipped with:
-
A complexity functional C_P : Sys → [0, 1]
-
A monotonicity property: C_P(φ(S)) ≥ C_P(S) for every φ ∈ F_P
-
A constructive characterization (see below)
We define four fundamental families:
1. Algorithmic morphisms F_A:
-
Characterization: φ ∈ F_A if φ applies a transformation drawn from:
-
Classical: Pseudorandom generators or one-way functions with a fixed security parameter (in bits)
-
Quantum: Unitary circuits of nontrivial T-depth acting on n-qubit systems
-
-
Complexity measure: Kolmogorov complexity K_U(S), where U is a universal Turing machine (classical), or circuit depth over a fixed gate set (quantum)
-
Decidable approximation: For quantum systems, circuit complexity with a fixed gate set provides a computationally tractable upper bound
2. Information morphisms F_I:
-
Characterization: φ ∈ F_I if φ implements:
-
Classical: Error-correcting codes (syndrome generation) or mixing transformations that create correlations
-
Quantum: Entangling operations that increase Schmidt rank, or partial measurements creating classical–quantum correlations
-
-
Complexity measure: C_I (multipartite mutual information normalized by system size)
-
Operational test: Correlation increase measurable via mutual information between subsystems
3. Dynamical morphisms F_D:
-
Characterization: φ ∈ F_D if φ is time evolution under:
-
Classical: Discrete maps with a positive largest Lyapunov exponent (e.g., Arnold cat map, baker’s map)
-
Quantum: Non-integrable Hamiltonians with a level-spacing ratio indicating the chaotic regime (e.g., kicked Ising model)
-
-
Complexity measure:
-
Classical: normalized Kolmogorov–Sinai entropy
-
Quantum: growth and saturation of the entanglement entropy (the saturation value quantifies scrambling)
-
-
Unified framework: Both measure mixing/scrambling rate in their respective phase spaces
4. Geometric morphisms F_G:
-
Characterization: φ ∈ F_G if φ implements:
-
Simplicial maps increasing Betti numbers: β_k(φ(S)) > β_k(S) for some homology dimension k
-
Embeddings into higher-dimensional complexes with non-trivial topology (e.g., adding vertices/edges to create non-contractible cycles)
-
-
Complexity measure: C_G built from the persistent homology modules PH_k(S), with N_G a volume normalization
-
Partial order: Persistence modules are ordered via interleaving distance; φ increases C_G if it adds features or extends feature lifetimes
Remark 1 (Morphism Family Characterizations).
The characterizations in Definition 4 are descriptive rather than exhaustive formal specifications. The exact membership criteria for each family do not affect the impossibility result—what matters is that:
-
Such families exist (demonstrated constructively in Appendix A)
-
They induce incompatible partial orders on Sys (Lemma 1)
-
Each family has morphisms that strictly increase its associated complexity measure
Any partition of morphisms into four disjoint families satisfying these properties yields the same impossibility. The specific examples (PRGs, Arnold cat, syndrome encoding, disk→annulus) are illustrative instances, not definitional requirements.
Definition 5 (Faithfulness to Operational Pillars).
A scalar complexity measure C is faithful to an operational pillar P if it satisfies monotonicity (Axiom A2) for all morphisms in F_P. That is, for every φ ∈ F_P:
-
Weak monotonicity: C(φ(S)) ≥ C(S)
-
Strict monotonicity for strict improvements: if φ is a strict improvement (Definition 6), then C(φ(S)) > C(S)
A scalar is faithful to all four pillars if it is faithful to each pillar individually.
Remark 2 (Closure Under Composition).
Each morphism family F_P is assumed closed under composition: if φ, ψ ∈ F_P, then ψ ∘ φ ∈ F_P whenever the composition is defined. This ensures that the relations defined in Definition 7 are transitive and thus genuine partial orders.
Remark 3.
Definition 6 (Strict Improvements).
A morphism φ is a strict improvement in pillar P if:
-
For F_A: ΔC_A ≥ δ_A for a fixed threshold δ_A > 0
-
For F_I: ΔC_I ≥ δ_I for a fixed threshold δ_I > 0
-
For F_D: ΔC_D ≥ δ_D for a fixed threshold δ_D > 0
-
For F_G: ΔC_G ≥ δ_G for a fixed threshold δ_G > 0
Remark 4 (Threshold Justification and Uniformity).
The strict improvement thresholds serve three essential purposes:
1. Distinguishing signal from noise: In any finite-precision implementation, complexity changes below some threshold are indistinguishable from numerical error. We require ΔC_P ≥ δ_P > 0 to ensure improvements are measurable.
2. Ensuring cycle non-triviality: The contradiction in Theorem 1 requires strictly positive thresholds δ_P > 0. With δ_P = 0, infinitesimal improvements could accumulate without contradiction.
3. Uniform vs. adaptive thresholds: While one might argue for system-dependent thresholds δ_P(S), we can always take the uniform value δ_P = inf_S δ_P(S).
For finite systems or bounded complexity classes, this infimum is positive. For our explicit constructions (Appendix A), we demonstrate concrete values:
-
δ_A: adding 2 gates to a circuit
-
δ_I: creating 2 bits of mutual information
-
δ_D: one unit of Lyapunov exponent
-
δ_G: adding one non-trivial cycle
These values are not special; any positive thresholds yield the impossibility. The key insight is that strict improvements in different pillars can be composed without mutual enhancement, enabling the cycle construction.
Definition 7 (Pillar-Specific Partial Orders).
Each morphism family F_P induces a partial order ⪯_P on Sys: S ⪯_P S′ if there exists φ ∈ F_P with φ(S) = S′, with strict order S ≺_P S′ when φ is a strict improvement.
Lemma 1 (Non-Commutativity).
The four partial orders ⪯_A, ⪯_I, ⪯_D, ⪯_G do not commute. Specifically, there exist systems for which applying an improvement in one pillar followed by an improvement in another is not equivalent to applying them in the opposite order.
Proof.
Consider an n-qubit register, its uniform superposition, and a GHZ state. The quantum Fourier transform increases circuit complexity, and GHZ preparation yields maximal multipartite entanglement, but increasing information does not generally preserve algorithmic simplicity: the composition requires additional gates beyond the sum of the individual operations. ∎
The key insight.
These partial orders conflict. You can traverse from system A to B to C to D and back to A, with each step increasing complexity in its respective pillar. This cycle structure will force our contradiction.
Remark 5 (Finite vs. Infinite Systems).
The impossibility holds for both finite and infinite systems:
-
Finite systems: The cycle construction works with n-bit strings and n-qubit states for any sufficiently large n.
-
Infinite systems: The result extends to separable Hilbert spaces and measure spaces via limiting arguments.
-
Discrete systems: Replace continuous morphisms with discrete maps; the incompatibility persists.
The key requirement is that the system class is rich enough to support all four types of morphisms.
2.4 Relationship to Other Complexity Notions
Why focus on these four pillars?
A natural question arises: are there other fundamental complexity types beyond our four pillars? Several important complexity notions have been proposed in the literature. We now examine how they relate to our framework.
Logical Depth (Bennett).
Bennett’s logical depth [5] measures the computational time required by a near-minimal-length program to generate an object. This distinguishes “organized complexity” (high depth, high Kolmogorov complexity) from random strings (low depth, high K) and simple patterns (low depth, low K).
For our framework, logical depth represents a time-extended variant of algorithmic complexity. In quantum systems, circuit depth (number of sequential layers) already captures this temporal dimension. For classical systems, depth corresponds to the time-space product in computation. Since both time and space are computational resources within the algorithmic pillar, logical depth does not constitute an independent fifth pillar but rather a refined view of C_A that distinguishes parallel versus sequential structure.
Effective Complexity (Gell-Mann & Lloyd).
Gell-Mann and Lloyd’s effective complexity [15] separates the algorithmic content of a system’s regularities from its random information: effective complexity is the description length of the regular component alone. This measure aims to quantify “meaningful structure” as opposed to noise.
However, effective complexity is fundamentally task-dependent: determining what counts as “regular” versus “random” requires choosing a model class or hypothesis space. Different choices yield different decompositions. For DNA, one might treat coding genes as “regular” and neutral mutations as “random,” but this choice reflects biological function, not intrinsic structure. Since our Axiom (A4) demands task-universality, effective complexity falls outside our framework—it is a valuable measure for specific contexts but not a universal structural property.
Statistical Complexity (Crutchfield & Shalizi).
The statistical complexity of a stochastic process [28] is the Shannon entropy of its ε-machine’s causal states. This quantifies the minimum memory required for optimal prediction.
Statistical complexity is closely related to our information-theoretic pillar. The entropy of the hidden states can be expressed in terms of the information shared between past and future observations. This is a specific instance of multiscale information quantification. For our purposes, statistical complexity is subsumed by C_I when we account for information at different temporal scales and include hidden-state structure. It does not require a separate pillar.
Thermodynamic Complexity (Landauer, Bennett, Lloyd).
Physical systems exhibit complexity related to energy dissipation and entropy production. Landauer’s principle [19] shows that erasing information dissipates at least k_B T ln 2 of energy per bit. Lloyd [22] derived ultimate physical limits on computation as a function of energy and time.
Thermodynamic complexity measures resource costs (energy, time) rather than intrinsic structural properties. Two systems with identical information content and algorithmic structure can have different thermodynamic costs depending on their physical implementation (reversible vs. irreversible computation). Since our framework targets task-universal structural complexity independent of physical resources, thermodynamic measures are orthogonal to our four pillars. They answer “how much does it cost?” rather than “what is its structure?”—a valid but different question.
Hierarchical & Modular Complexity (Simon).
Simon’s work on hierarchy [30] emphasizes organizational levels and near-decomposability: systems with clearly separated modules and hierarchical structure. Modularity quantifies the strength of intra-module versus inter-module connections.
Hierarchical structure is partially captured by our geometric pillar. Persistent homology at multiple scales naturally identifies hierarchical organization: features appearing at coarse scales correspond to high-level modules, while fine-scale features represent substructure. The Betti numbers at different filtration values encode the depth and breadth of hierarchical decomposition. While persistent homology does not explicitly measure modularity in the graph-theoretic sense, it provides a topological proxy for multi-level organization. We thus view hierarchical complexity as an aspect of C_G rather than a separate pillar.
Summary and Justification.
Our analysis shows that:
-
Logical depth ⊂ Algorithmic complexity (a time-extended view of C_A)
-
Effective complexity is task-dependent, violating Axiom (A4)
-
Statistical complexity ⊂ Information-theoretic complexity (hidden-state information within C_I)
-
Thermodynamic complexity measures resource costs, not intrinsic structure (orthogonal to our framework)
-
Hierarchical complexity ⊂ Geometric complexity (captured by multiscale persistent homology)
The four pillars thus form a parsimonious basis for task-universal structural complexity. Other measures either reduce to combinations of these four, require task-specific definitions, or address resource costs rather than structure. This justifies our focus and shows that expanding to five or six pillars would either introduce redundancy or violate task-universality.
3 Axioms for a Universal Complexity Scalar
Deriving Axioms from First Principles.
Our axioms are not arbitrary but emerge from fundamental requirements for any meaningful complexity measure. They draw from established principles in physics, information theory, and computation:
From Physics:
-
Additivity (A1): Extensive quantities in thermodynamics (energy, entropy) add for independent systems [18]. Since complexity should scale with system size, it must be extensive.
-
Continuity (A3): Physical quantities vary continuously under small perturbations (except at phase transitions) [2]. Discontinuous complexity would imply unphysical “jumps.”
From Information Theory:
From Computation:
These axioms thus represent the minimal requirements for a complexity measure to be physically meaningful, information-theoretically sound, and computationally invariant. That they lead to impossibility reveals a fundamental tension between universality and monotonicity.
Formal Axioms.
A universal complexity scalar is a functional C : Sys → [0, 1] intended to summarize the complexity vector (C_A, C_I, C_D, C_G). We require:
- (A1) Additivity:
-
For independent composites S_1 ⊗ S_2, where independence means:
-
Classical: joint space X_1 × X_2 with product measure μ_1 ⊗ μ_2
-
Quantum: joint Hilbert space H_1 ⊗ H_2 with tensor-product state ρ_1 ⊗ ρ_2
we require C(S_1 ⊗ S_2) = C(S_1) + C(S_2).
-
- (A2a) Weak Monotonicity:
-
For every morphism φ in any pillar family F_P: C(φ(S)) ≥ C(S)
- (A2b) Strict Monotonicity:
-
For every strict improvement φ (Definition 6) in any pillar family F_P: C(φ(S)) > C(S)
Why both weak and strict monotonicity?
Axiom A2a alone permits degenerate solutions like C(S) = 0 for all S, which satisfy weak monotonicity trivially but provide no information. Axiom A2b (strict monotonicity under improvements) eliminates such degeneracies by requiring that genuine complexity-increasing operations produce measurable scalar increases. Together, A2a and A2b enforce that C respects the partial order structure while remaining responsive to operational differences.
- (A3) Scale-Continuity:
-
(Axiom A3, continuity in ε, is not required for Theorem 1, which holds even for discrete parameter spaces; see Proposition 1 and Remark 7. We include it for mathematical completeness and to facilitate future extensions to continuous-scale families.)
C is continuous with respect to the product topology on Sys, where:
-
The state space carries the weak-* topology (classical) or the trace-norm topology (quantum)
-
The scale parameter ε carries the standard topology on the positive reals
Explicitly: for every η > 0 there exists δ > 0 such that d(S, S′) + |ε − ε′| < δ implies |C(S, ε) − C(S′, ε′)| < η.
-
- (A4) Task-Universality:
-
C is defined on all of Sys without reference to any specific task, algorithm, or measurement protocol beyond the normalization convention.
Relation to No-Free-Lunch.
Task-universality (A4) demands that C work across all systems without task-specific weighting, analogous to seeking a universal optimization algorithm without problem-specific tuning. The no-free-lunch theorem [36] shows that all algorithms have identical average performance across all problems; similarly, we show that task-universal complexity scalars satisfying all monotonicity axioms cannot exist. The key distinction: NFL concerns algorithm performance; ours concerns structural quantification.
- (A5) Relabeling Invariance:
-
A bijection σ is a relabeling if and only if:
-
It permutes indices of subsystems (qubits, bits, vertices) while preserving all structure
-
For classical systems: σ is a permutation of coordinate labels
-
For quantum systems: σ corresponds to a permutation matrix in the computational basis
-
Crucially: σ preserves ALL complexity measures: C_P(σ(S)) = C_P(S) for every pillar P
For any such relabeling: C(σ(S)) = C(S).
What is NOT a relabeling:
-
Embeddings into higher-dimensional spaces (changes C_G)
-
Basis changes that alter entanglement structure (changes C_I)
-
Time evolution or dynamics (changes C_D)
-
Compression or encoding (changes C_A)
This axiom ensures complexity does not depend on arbitrary labeling conventions (e.g., calling the first qubit “A” vs “1”) but does NOT allow ignoring structural modifications that affect any complexity pillar.
-
- (A6) Normalization:
-
C(S) ∈ [0, 1] for all S, with C(S_triv) = 0, where S_triv is the trivial system.
So what do these axioms mean?
(A1) says that independent systems add their complexities—like energies in thermodynamics or entropies in information theory. (A2a,b) ensure that complexity-increasing operations actually increase the scalar; the distinction between weak and strict monotonicity prevents trivial constant functions. (A3) prevents pathological discontinuities: small perturbations shouldn’t cause complexity to jump. (A4) demands true universality: no task-specific tuning or weights. (A5) ensures complexity doesn’t depend on arbitrary labeling choices (e.g., permuting qubit labels). (A6) provides basic regularity and normalization.
How to verify these in practice.
To check if a proposed satisfies these axioms:
-
Construct two independent systems and verify additivity (A1)
-
Find a morphism that increases algorithmic complexity and check whether C increases (A2)
-
Perform small perturbations and verify continuity (A3)
-
Demonstrate the measure works across different problem domains without recalibration (A4)
-
Permute system labels and check invariance (A5)
-
Verify boundedness and normalization (A6)
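As a sanity check of this recipe, the following toy harness (ours; candidate_scalar, compose, and relabel are illustrative placeholders rather than a proposed measure) mechanically tests additivity (A1), relabeling invariance (A5), and normalization (A6) for a fixed-weight scalar on complexity vectors. Strict monotonicity (A2b) across all four pillars is precisely the requirement the following sections show cannot be added.

```python
# Toy axiom checks for a candidate scalar defined on complexity vectors.
# All names (candidate_scalar, compose, relabel) are ours, for illustration.
import random

WEIGHTS = (0.25, 0.25, 0.25, 0.25)  # a fixed, task-independent weighting

def candidate_scalar(v):
    return sum(w * x for w, x in zip(WEIGHTS, v))

def compose(v1, v2):
    """Componentwise sum, modeling independent composition of (unnormalized) pillar values."""
    return tuple(a + b for a, b in zip(v1, v2))

def relabel(v):
    """A relabeling must leave every pillar observer unchanged."""
    return tuple(v)

random.seed(0)
for _ in range(100):
    v1 = tuple(random.random() for _ in range(4))
    v2 = tuple(random.random() for _ in range(4))
    # (A1) additivity on independent composites
    assert abs(candidate_scalar(compose(v1, v2))
               - (candidate_scalar(v1) + candidate_scalar(v2))) < 1e-12
    # (A5) relabeling invariance
    assert candidate_scalar(relabel(v1)) == candidate_scalar(v1)
# (A6) normalization on the trivial system
assert candidate_scalar((0.0, 0.0, 0.0, 0.0)) == 0.0
print("A1, A5, A6 hold for this candidate; A2b for all four pillars is where it must fail.")
```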
3.1 Frequently Asked Questions: Proof Clarifications
- Q1: If φ is an isomorphism, how can geometric complexity go up?
-
A: Axiom (A2) constrains the universal scalar C only; the pillar observers are evaluation functionals not constrained by monotonicity. Isomorphisms leave C unchanged (via A2 applied to φ and its inverse), but pillar observers can change. Specifically, C_G rises via the fixed Vietoris–Rips (VR) pipeline even though the map is invertible. The observers measure different aspects; C attempts to unify them all.
- Q2: Why not make all steps non-invertible to force C down everywhere?
-
A: One strict decrease suffices for contradiction. The other steps showcase pillar incompatibility without touching C: isomorphisms demonstrate that different pillars can be strictly improved while keeping C constant. This highlights the fundamental tension: C cannot track all pillar changes simultaneously.
- Q3: Is Axiom (A2) too strong? Why require monotonicity on all Sys?
-
A: (A2) is the standard resource-style requirement used in quantum information theory (entanglement monotones under LOCC) and thermodynamics (entropy under reversible processes). Our impossibility shows that even this generous orthodox notion fails when asked to be faithful to all four pillars. Weaker requirements would make the impossibility even stronger.
4 Main Result: The Impossibility Theorem
Lemma 2 (Existence of Improvement Paths).
For any system category containing classical bit strings of length n and quantum states on m qubits, there exist systems S_0, …, S_4 and morphisms such that:
-
φ_A : S_0 → S_1 in F_A with C_A(S_1) > C_A(S_0)
-
φ_I : S_1 → S_2 in F_I with C_I(S_2) > C_I(S_1)
-
φ_D : S_2 → S_3 in F_D with C_D(S_3) > C_D(S_2)
-
φ_G : S_3 → S_4 in F_G with C_G(S_4) > C_G(S_3)
-
There exists a projection π : S_4 → S_0 that forgets geometric structure
where the improvements do not automatically increase other complexity measures.
Proof.
We construct the cycle explicitly. Let S_0 consist of an 8-bit classical string and a 3-qubit quantum state.
Step 1 (Algorithmic): Apply a pseudorandom generator to the classical string and prepare the GHZ state on the qubits. The Kolmogorov complexity increases (the description now requires the PRG seed plus the quantum circuit), while information complexity remains low (no classical–quantum correlation yet).
Step 2 (Information): Create correlations by syndrome encoding: partition the classical string into blocks and add parity bits. Perform a partial measurement on the quantum register, creating classical–quantum correlations. This increases mutual information without adding algorithmic complexity (syndrome generation is computationally simple).
Why include a noise step?
While the cycle contradiction could be derived using only isomorphisms, we include the non-invertible noise step for pedagogical clarity. It demonstrates that the impossibility doesn’t rely solely on isomorphisms: even allowing operations that clearly should decrease a universal complexity measure (information-theoretic noise) doesn’t resolve the fundamental tension. The core issue is that improving each pillar via morphisms designed for that pillar creates incompatible constraints on any universal scalar.
However, we also require a noise step ν that strictly decreases C. We define ν explicitly as:
-
Classical: Additive dithering x ↦ x ⊕ η, where the dither η is drawn from a uniform distribution of small but non-zero strength. This is a Markov kernel.
-
Quantum: A weak dephasing channel with small dephasing probability p > 0, or a mixture with local Pauli noise on the qubits, with Kraus weights chosen to ensure the map is CPTP.
Key properties: ν is a morphism of Sys (Markov/CPTP), ν is non-invertible (information loss), and by Axiom (A3') C decreases strictly under ν (non-degenerate noise); see the sketch below.
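For the quantum branch of the noise step, a minimal numerical sketch (ours; the dephasing strength p, the Kraus parameterization E(ρ) = (1 − p)ρ + p ZρZ, and the single-qubit example are illustrative choices, not values fixed by the proof) confirms that weak dephasing is CPTP and strictly loses information:

```python
# Sketch: weak dephasing channel E(rho) = (1 - p) rho + p Z rho Z on one qubit.
# p and the input state are illustrative, not values fixed by the construction.
import numpy as np

p = 0.1
Z = np.diag([1.0, -1.0])
K0, K1 = np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * Z   # Kraus operators

def dephase(rho):
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# CPTP check: the Kraus operators resolve the identity.
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

plus = np.array([[0.5, 0.5], [0.5, 0.5]])             # |+><+|, maximal coherence
rho = dephase(plus)
print("off-diagonal:", rho[0, 1])                      # 0.5 -> 0.5 * (1 - 2p) = 0.4
print("purity:", np.trace(rho @ rho).real)             # drops below 1: irreversible mixing
```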
Step 3 (Dynamical): Apply chaotic evolution: the Arnold cat map on the classical bits (positive Lyapunov exponent) and kicked Ising evolution on the quantum state (finite scrambling time). This increases the KS entropy and the entanglement-entropy growth rate without changing the state's algorithmic description length.
Step 4 (Geometric): Embed in topological structure: place the classical configuration on the vertices of a triangulated 2-complex with non-trivial fundamental group. Encode the quantum state in a surface code. This adds persistent homology features (new non-trivial cycles).
Step 5 (Projection): Define π to extract the underlying bit string and qubit state, discarding topological structure. By construction, π(S_4) has the same classical and quantum content as S_0.
The key observation is that each morphism targets a specific complexity type without necessarily increasing others, enabled by the independence of the four pillars. The explicit construction is detailed in Appendix A. ∎
| Step | Map | Sys? | Iso? | A2 effect | Pillar / A3' effect | C result |
| 0 | id (identity) | — | — | — | — | C(S_0) (baseline) |
| 1 | φ_A | Yes | Yes | equality (weak) | Alg rises: C_A ↑ | C(S_1) = C(S_0) |
| 2 | ν (noise) | Yes | No | strict drop (by A3') | Info may rise slightly | C(S_2) < C(S_1) |
| 3 | φ_D | Yes | Yes | equality | Dyn rises: C_D ↑ | C(S_3) = C(S_2) |
| 4 | φ_G | Yes | Yes | equality | Geom rises: C_G ↑ | C(S_4) = C(S_3) |
| 5 | closing iso | — | Yes | Iso forces: C(S_4) = C(S_0) | — | Contradiction: C(S_4) < C(S_0) and C(S_4) = C(S_0) |
Table interpretation.
Steps 1, 3, 4 are isomorphisms (reversible), so by Axiom (A2) applied to both φ and φ⁻¹ we have C(φ(S)) ≥ C(S) and C(S) ≥ C(φ(S)), implying C(φ(S)) = C(S) (equality for isomorphisms). These steps strictly raise their respective pillar observers but leave C unchanged. Only Step 2 (the noise map ν) is non-invertible, causing a strict decrease by Axiom (A3'). The closing isomorphism forces C(S_4) = C(S_0), yet the chain implies C(S_4) < C(S_0): contradiction.
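The table's bookkeeping can be checked mechanically. The sketch below (ours; only the sign constraints matter, the grid values are arbitrary) searches for any assignment of scalar values along the five-step cycle that satisfies all the constraints imposed on C, and finds none:

```python
# Mechanical check: the constraints imposed on any scalar C around the
# five-step cycle (three isomorphism equalities, one strict noise decrease,
# one closing isomorphism equality) are jointly unsatisfiable.

def consistent(C):
    """C maps the step index 0..4 to a candidate scalar value along the cycle."""
    ok = True
    ok &= C[1] == C[0]          # step 1: isomorphism  => equality (A2 both ways)
    ok &= C[2] <  C[1]          # step 2: non-invertible noise => strict decrease (A3')
    ok &= C[3] == C[2]          # step 3: isomorphism  => equality
    ok &= C[4] == C[3]          # step 4: isomorphism  => equality
    ok &= C[4] == C[0]          # step 5: closing isomorphism back to the start
    return ok

grid = [i / 10 for i in range(11)]
found = [C for C in ((a, b, c, d, e) for a in grid for b in grid
                     for c in grid for d in grid for e in grid) if consistent(C)]
print("consistent assignments found:", len(found))   # 0: the constraints contradict
```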
Main result precisely stated.
We prove there is no scalar C such that:
-
C satisfies axioms A1–A6 (including monotonicity A2 for all morphisms in the pillar families)
-
For each pillar P, there exist morphisms designed to increase C_P while being isomorphisms or having minimal effect on other pillars
Informally: C cannot be simultaneously monotone for operations specifically designed to increase each pillar's complexity without affecting others. The pillar observers are evaluation functionals not constrained by monotonicity axioms; they measure different aspects while C attempts to unify them.
Theorem 1 (No-Go).
There is no functional C on Sys satisfying (A1)–(A6) such that (A2a,b) hold simultaneously for all families F_A, F_I, F_D, F_G.
What the theorem claims.
Imagine you propose a complexity function C. If it satisfies all our axioms for one pillar (say, algorithmic complexity), you might hope it works for all four. The theorem shows this is impossible: satisfying monotonicity for all four pillars simultaneously creates a logical contradiction.
Proof of Theorem 1.
We proceed by showing that the existence of improvement paths (Lemma 2) combined with the axioms leads to a contradiction.
Step 1: Existence of Improvement Paths
By Lemma 2 and the explicit construction in Appendix A, there exist systems S_0, …, S_4 and morphisms:
φ_A : S_0 → S_1 with C_A(S_1) ≥ C_A(S_0) + δ_A    (5)
φ_I : S_1 → S_2 with C_I(S_2) ≥ C_I(S_1) + δ_I    (6)
φ_D : S_2 → S_3 with C_D(S_3) ≥ C_D(S_2) + δ_D    (7)
φ_G : S_3 → S_4 with C_G(S_4) ≥ C_G(S_3) + δ_G    (8)
plus a projection π : S_4 → S_0 that forgets the geometric enhancement while preserving the underlying system.
Step 2: Application of Monotonicity Axioms
Assume C satisfies axioms (A1)–(A6). By (A2b), for strict improvements:
C(S_1) > C(S_0)    (9)
C(S_2) > C(S_1)    (10)
C(S_3) > C(S_2)    (11)
C(S_4) > C(S_3)    (12)
Step 3: Derivation of Contradiction
Combining the inequalities: C(S_4) > C(S_0).
But π(S_4) = S_0 exactly, so C(π(S_4)) = C(S_0). This requires the projection to strictly decrease complexity: C(π(S_4)) < C(S_4).
However, if C must be monotone under geometric morphisms that increase C_G, it cannot simultaneously accommodate projections that decrease C_G. Yet such projections manifestly exist (one can always forget topological structure). This creates a fundamental incompatibility: no scalar can track all four complexity types while respecting their natural partial orders.
Step 4: Conclusion
No functional C can simultaneously satisfy all axioms for all four morphism families. The obstruction is fundamental: the partial orders induced by different complexity notions are incompatible with a universal scalar ordering. ∎
So what does this mean?
The proof shows that trying to compress all complexity aspects into one number leads to an impossible statement, a strict inequality C(S) > C(S) for some system S. This isn't due to measurement error or finite precision; it's a fundamental logical impossibility arising from the conflicting structure of complexity types.
4.1 Companion Result: Operational Corollary for Designated Increase-Maps
Salvaging the positive-monotone intuition.
The original motivation for this work was the question: “Can a universal scalar be monotone under operations that increase each type of complexity?” Theorem 1 answers a related but different question (resource impossibility). Here we formalize the original intuition as a companion result.
Theorem 2 (Operational Impossibility for Designated Reversible Improvements).
Suppose we designate specific “complexity-increasing” reversible operations ι_A, ι_I, ι_D, ι_G, one per pillar, where:
-
Each ι_P is an isomorphism (reversible)
-
Each ι_P strictly increases its pillar observer: C_P(ι_P(S)) > C_P(S)
-
All ι_P are morphisms of Sys (Markov/CPTP)
Then no scalar C can satisfy both:
-
Isomorphism invariance: C(φ(S)) = C(S) for all isomorphisms φ
-
Increase-monotonicity: C(ι_P(S)) > C(S) for all designated increase-ops ι_P
Proof.
Construct a 4-step cycle using only the designated operations:
S_1 = ι_A(S_0),  S_2 = ι_I(S_1),  S_3 = ι_D(S_2),  S_4 = ι_G(S_3).
By assumption (2):
C(S_1) > C(S_0)    (13)
C(S_2) > C(S_1)    (14)
C(S_3) > C(S_2)    (15)
C(S_4) > C(S_3)    (16)
Therefore: C(S_4) > C(S_0).
However, define the closing isomorphism ψ = (ι_G ∘ ι_D ∘ ι_I ∘ ι_A)⁻¹. Then ψ(S_4) = S_0 exactly.
By assumption (1): C(ψ(S_4)) = C(S_4), so C(S_0) = C(S_4).
This contradicts C(S_4) > C(S_0) from the chain of increases. ∎
Remark 6 (Relationship to Main Theorem).
Theorem 2 captures the original v4 intuition (“no scalar monotone under all pillar families”) but with designated reversible increase-ops made explicit and Sys-internal. It complements Theorem 1 by showing even the weaker requirement (monotonicity only on specific designated ops, not all Sys) fails when combined with isomorphism invariance.
4.2 Necessity of Axioms
Which axioms are essential?
One might wonder: could we weaken some axiom to avoid the impossibility? The following result shows each axiom plays a necessary role.
Proposition 1 (Axiom Independence).
Each axiom in the set A1, A2b, A4, A5, A6 is necessary for the impossibility result. Axiom A3 (continuity) is included for technical completeness but is not essential for Theorem 1. Removing any single necessary axiom allows a universal complexity scalar to exist.
Proof sketch.
-
Without (A1) Additivity: Define a suitable non-additive combination of the pillar observers. This can satisfy monotonicity but not additivity.
-
Without (A2b) Strict Monotonicity: Define the weighted sum C = Σ_P w_P C_P with fixed weights. Weak monotonicity holds but not strict monotonicity for all families.
-
Without (A3) Continuity: The impossibility still holds for discrete systems and discontinuous measures. A3 ensures technical regularity but is not required for the core contradiction (the cycle construction works without continuity).
-
Without (A4) Task-Universality: Fix a specific task or weighting, reducing the requirement to a single morphism family.
-
Without (A5) Relabeling Invariance: Allow C(S) ≠ C(σ(S)) even when systems are equivalent up to labeling.
-
Without (A6) Normalization: Allow an unbounded C, trivially satisfying monotonicity.
∎
Remark 7 (Minimal Axiom Sets).
The minimal set for impossibility is {A2b, A4, A5}. These capture: (1) strict improvements must increase the scalar, (2) the measure applies to all morphism families universally, (3) relabeling invariance ensures structural consistency. Additionally, A1 (additivity) and A6 (normalization) are typically required for non-degenerate solutions, though pathological counterexamples exist without them. Axiom A3 (continuity) is not in the minimal set, confirming the impossibility is robust—it doesn’t rely on technical regularity but on the fundamental tension between monotonicity and universality.
5 Consequences and Positive Alternatives
What do we do instead?
The impossibility doesn’t mean complexity is unmeasurable—it means we must use vectors, not scalars. This section explores practical implications.
Corollary 1 (Vector Necessity).
Any faithful summary of multi-faceted complexity must be vector-valued (up to monotone reparametrizations of components). Scalarizations are possible only after fixing a task (weights or morphism family).
Remark 8 (Task-Relative Scalars).
If one restricts morphisms to a single family (e.g., only F_A) or fixes a task-dependent weighting, scalar measures are consistent. The no-go targets task-universal C obeying all monotonicities simultaneously. For instance, when optimizing quantum circuits for execution time, one might weight C_A heavily and C_G lightly; this task-specific scalar is perfectly valid.
Scope and limitations.
This result addresses Stage-1 (pre-thermal) universal scalar complexity—intrinsic structural properties independent of physical implementation or thermal coupling. We do not claim:
-
Task-specific scalars are impossible: Weighted combinations with fixed task-dependent weights remain valid and useful. Our impossibility targets only task-universal scalars satisfying all monotonicity axioms simultaneously.
-
Multi-sector sums are impossible: The impossibility holds for single scalars. Multi-sector weighted sums Σ_k w_k C^(k), where each C^(k) represents complexity in a distinct “sector” or “calculus” (indexed by k), remain a viable avenue. Such multi-sector approaches may circumvent the impossibility by assigning different morphism families to different sectors, avoiding the global monotonicity requirement. This is an active area of investigation.
-
Causation from temperature/action: Temperature and action appear in related work as observer fields (Stage-2 thermal coupling), not causal drivers of complexity. The current result is pre-thermal and makes no claims about thermodynamic emergence.
-
Quantum measurement collapse: Our framework treats measurement as CPTP maps within Sys. Observer-dependent interpretations (e.g., time-symmetric weak values) are compatible but not required.
The impossibility is fundamental for task-universal single scalars but leaves open multiple productive research directions.
How to use complexity vectors in practice.
When comparing systems, plot them in 4D complexity space (or project to 2D/3D for visualization). Pareto frontiers emerge: system A dominates system B if its complexity vector is at least as large in every component. Many systems will be incomparable, with neither dominating the other, reflecting genuine trade-offs between complexity types. This vectorial approach is already used implicitly in multi-objective optimization; our result shows it is mandatory for complexity.
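A minimal sketch of this practice (ours; the system names and vectors are invented placeholders): compute Pareto dominance and extract the frontier from a set of four-component complexity vectors.

```python
# Pareto dominance and frontier extraction for 4-component complexity vectors.
# Systems and their vectors are illustrative placeholders.

def dominates(a, b):
    """a dominates b if a >= b componentwise with at least one strict inequality."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(systems):
    return {name: v for name, v in systems.items()
            if not any(dominates(w, v) for other, w in systems.items() if other != name)}

systems = {
    "quantum_computer": (0.9, 0.6, 0.4, 0.2),   # (C_A, C_I, C_D, C_G)
    "hurricane":        (0.3, 0.5, 0.9, 0.4),
    "social_network":   (0.4, 0.7, 0.2, 0.9),
    "coin_flips":       (0.2, 0.1, 0.1, 0.1),
}
print(pareto_frontier(systems))  # first three are mutually incomparable; coin_flips is dominated
```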
Concrete example.
Consider designing a neural network. High C_A (many parameters) may improve accuracy but increases training time. High C_I (many inter-layer correlations) may improve expressiveness but complicates interpretation. High C_D (chaotic gradient dynamics) may escape local minima but hinders reproducibility. High C_G (intricate loss landscape topology) may enable diverse solutions but complicates optimization. There is no single “best” architecture: the optimal choice lies on the Pareto frontier and depends on task priorities (accuracy vs. speed vs. interpretability).
6 Implications and Applications
For complexity science.
Our result implies that complexity dashboards must be multi-dimensional. Any single “complexity index” necessarily privileges certain aspects over others. This has implications for:
-
Machine Learning: Model complexity cannot be captured by a single metric (parameter count, VC dimension, or description length alone). Practitioners already use multiple metrics (accuracy, training time, memory); our result justifies this as fundamental, not just practical.
-
Network Science: Graph complexity requires multiple measures (degree distribution, clustering, and path length). Ranking networks by a single centrality measure loses critical information.
-
Quantum Computing: Resource estimation needs vector-valued metrics (gate count, depth, and entanglement). Circuit optimizers must handle Pareto frontiers, not single objectives.
For physics.
The impossibility suggests that proposed “complexity equals action” conjectures [6, 32] cannot be universal. Different physical processes optimize different complexity aspects, explaining the diversity of structures in nature. Black hole complexity likely requires multiple measures to capture both bulk geometry and boundary quantum information.
For information theory.
Just as there is no universal compression algorithm (by no-free-lunch [36]), there is no universal complexity measure. Task-specific measures remain essential. This connects to Shannon’s insight that information is context-dependent—complexity inherits this context-dependence. Partial information decomposition [35] already shows information naturally resists scalar compression; our result extends this to general complexity.
Hierarchical emergence and lumpability.
Recent work by Rosas et al. [26] characterizes emergent macroscales as “software-like” self-contained processes via Markov lumpability—the condition under which coarse-graining preserves Markov structure. They establish that a microscopic process exhibits causally and informationally closed levels if and only if its causal states are strongly lumpable, providing a rigorous criterion for identifying emergent macroscopic descriptions [17, 25].
Our impossibility result is orthogonal yet complementary to this framework. While Rosas et al. identify where emergent levels exist through lumpability conditions, we prove that regardless of such hierarchical structure, no single task-universal scalar can simultaneously respect the ordering constraints of our four complexity pillars. Even within a single strongly lumpable macroscale—where coarse-grained dynamics maintain perfect Markovian structure—the vectorial nature of complexity remains irreducible. This demonstrates that hierarchical emergence does not rescue scalar universality; vectorial reporting remains necessary regardless of lumpable partitioning.
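To make the lumpability criterion concrete, the sketch below (ours; the transition matrix and partitions are invented toy examples) checks the Kemeny–Snell strong-lumpability condition [17]: within each block, every state must assign the same total probability to each block of the partition.

```python
# Strong lumpability check (Kemeny-Snell criterion): a partition of states is
# strongly lumpable iff, within each block, all states have identical
# block-to-block transition probabilities. Chain and partitions are toy examples.
import numpy as np

def strongly_lumpable(P, blocks, tol=1e-12):
    for block in blocks:
        # Row sums of P over each target block, restricted to states in `block`.
        sums = np.array([[P[i, list(b)].sum() for b in blocks] for i in block])
        if not np.allclose(sums, sums[0], atol=tol):
            return False
    return True

P = np.array([[0.1, 0.4, 0.3, 0.2],
              [0.2, 0.3, 0.1, 0.4],
              [0.5, 0.0, 0.3, 0.2],
              [0.3, 0.2, 0.4, 0.1]])
print(strongly_lumpable(P, [(0, 1), (2, 3)]))  # True: coarse-graining keeps Markov structure
print(strongly_lumpable(P, [(0, 2), (1, 3)]))  # False for this chain
```

Even when such a lumpable macroscale exists, the argument above shows the complexity of the lumped process still has to be reported as a vector.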
7 Reviewer Checklist (Reproducibility Gates)
-
Axioms (A1)–(A6) stated and motivated from standard practice.
-
Explicit four-system improvement cycle provided (Appendix A).
-
Example families instantiated in classical and quantum settings.
-
Relaxations under which a scalar is possible are stated (Section 5).
-
Axiom necessity analyzed (Proposition 1, Section 4.2).
-
Practical implications discussed (Section 6).
Acknowledgments
Mathematical formalism and proof construction assisted by AI tools (Claude by Anthropic, ChatGPT by OpenAI). All theoretical insights, hypothesis formulation, and scientific claims are the sole responsibility of the author.
Code Availability
Supplementary materials and documentation are available at:
Appendix A Concrete Cycle Construction
Purpose of this appendix.
The main theorem claims that improvement cycles exist. Here we construct explicit examples that can be verified computationally or analytically.
We construct an explicit improvement cycle using hybrid classical-quantum systems.
Theorem 3 (Explicit Improvement Cycle).
There exist five systems S_0, …, S_4 and morphisms forming a path S_0 → S_1 → S_2 → S_3 → S_4 → S_0 where each of the first four edges is a strict improvement in its respective pillar, and the fifth edge is a projection returning to the initial system.
Proof.
System Construction: Let each system S_i be a composite (x_i, ρ_i) where x_i is a classical bit string of length 8 and ρ_i is a 3-qubit quantum state.
Initial System S_0:
-
Classical: x_0 = 00000000
-
Quantum: ρ_0 = |000⟩⟨000|
-
Complexities: C_A ≈ 0 (constant program), C_I = 0 (no correlations), C_D = 0 (no dynamics), C_G ≈ 0 (trivial topology)
Edge 1: φ_A : S_0 → S_1 (Algorithmic improvement)
-
Classical: Apply a pseudorandom generator with a short seed, yielding a pseudorandom 8-bit string x_1
-
Quantum: Prepare the GHZ state via a Hadamard on qubit 1 and two CNOTs
-
Result: The minimal description grows from a constant program to the seed plus the circuit description
-
Complexity change: ΔC_A ≥ δ_A (a strict algorithmic improvement)
Edge 2: φ_I : S_1 → S_2 (Information improvement)
-
Classical: Partition x_1 into 4 blocks of 2 bits and create syndrome bits encoding parity, yielding correlated structure
-
Quantum: Measure qubit 1 of the GHZ state in the computational basis, creating correlations between the measurement outcome and the remaining qubits
-
Result: Classical–quantum mutual information becomes strictly positive
-
Complexity change: ΔC_I ≥ δ_I (a strict information improvement)
Edge 3: φ_D : S_2 → S_3 (Dynamical improvement)
-
Classical: Apply the Arnold cat map to bit pairs interpreted as coordinates
-
Quantum: Time-evolve under a kicked Ising Hamiltonian for 5 kicks
-
Result: The classical dynamics have a positive Lyapunov exponent and the quantum evolution scrambles local information
-
Complexity change: ΔC_D ≥ δ_D (a strict dynamical improvement)
Edge 4: φ_G : S_3 → S_4 (Geometric improvement)
-
Classical: Embed the state into a 2-dimensional simplicial complex with non-trivial 1-cycles
-
Quantum: Encode into a topological quantum error-correcting code with non-trivial code distance
-
Result: The system has enhanced geometric/topological structure (β_1 increases)
-
Complexity change: ΔC_G ≥ δ_G (a strict geometric improvement)
Explicit disk → annulus map T_geo: Define the area-preserving map T_geo : D → A, where D is the unit disk and A is the annulus with inner radius r_0:
r′ = sqrt(r² + r_0²)    (17)
θ′ = θ    (18)
in polar coordinates, with 0 ≤ r ≤ 1 and 0 ≤ θ < 2π.
Properties:
-
Area-preserving: r′ dr′ dθ′ = r dr dθ (check via the Jacobian: r′ dr′ = r dr)
-
Bijection a.e.: Almost-everywhere bijective (the angle is undefined only at the center point r = 0)
-
In Sys: Deterministic measure-preserving map (classical Markov morphism with a delta kernel)
-
Isomorphism: Invertible via r = sqrt(r′² − r_0²), θ = θ′
-
Geometric effect: Under a fixed Vietoris–Rips pipeline with a suitable radius parameter, the annulus has β_1 = 1 (a non-trivial 1-cycle from the hole) while the disk has β_1 = 0 (contractible). Thus C_G strictly increases for the fixed observer pipeline.
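A compact numerical sketch of this map (ours, written independently of the released disk_annulus_cycle_toy.py; it assumes the explicit form r′ = sqrt(r² + r_0²) given above and an arbitrary r_0) verifies invertibility and area preservation by Monte Carlo sampling:

```python
# Numerical sketch of the disk -> annulus isomorphism; r0 and sample sizes are arbitrary.
import numpy as np

r0 = 0.5  # inner radius of the target annulus

def T(r, theta):          # disk (r <= 1)  ->  annulus (r0 <= r' <= sqrt(1 + r0**2))
    return np.sqrt(r**2 + r0**2), theta

def T_inv(rp, theta):     # inverse on the annulus
    return np.sqrt(rp**2 - r0**2), theta

rng = np.random.default_rng(0)
# Uniform points on the unit disk (area measure): r = sqrt(u) gives uniform area density.
r = np.sqrt(rng.random(200_000))
theta = 2 * np.pi * rng.random(200_000)

rp, tp = T(r, theta)
r_back, _ = T_inv(rp, tp)
print("invertible:", np.allclose(r_back, r))                   # True (a.e.)
# Area preservation: area-uniform disk points map to area-uniform annulus points,
# so the fraction landing in any sub-annulus matches its area ratio.
inner, outer = r0, np.sqrt(1 + r0**2)
half = np.sqrt((inner**2 + outer**2) / 2)                      # radius splitting the annulus into equal areas
print("fraction in inner half-annulus:", np.mean(rp < half))   # ~0.5
```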
Edge 5: π : S_4 → S_0 (Geometric projection; NOT a morphism in F_G)
-
Define the projection π that forgets the geometric embedding
-
Classical: Project from the simplicial complex back to the bit string
-
Quantum: Decode from the topological code to the bare qubit state
-
Result: π(S_4) = S_0 exactly (not just up to relabeling)
-
Key property: C_G(π(S_4)) < C_G(S_4)
Closing the Cycle: The composition of morphisms creates a 5-edge path S_0 → S_1 → S_2 → S_3 → S_4 → S_0.
For any universal scalar C satisfying (A2b), we have:
C(S_1) > C(S_0)    (19)
C(S_2) > C(S_1)    (20)
C(S_3) > C(S_2)    (21)
C(S_4) > C(S_3)    (22)
Therefore: C(S_4) > C(S_0).
Now, the projection π creates a dilemma:
-
If C is to be universal, the chain above forces C(S_4) > C(S_0)
-
But π(S_4) = S_0 exactly, so C(π(S_4)) = C(S_0)
-
This means π strictly decreases C: C(π(S_4)) < C(S_4)
The contradiction: If C must be monotone under ALL morphisms in F_G (which increase geometric complexity), then operations that decrease geometric complexity (like π) cannot exist. But such projections clearly do exist: one can always “forget” topological structure. This shows that no scalar can be simultaneously monotone for operations that increase each type of complexity while also handling the full space of system transformations. ∎
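Edge 3 (and Step 3 of Lemma 2) relies on the Arnold cat map having a positive Lyapunov exponent. A short sketch (ours; the orbit length and seed are arbitrary) estimates it numerically and compares it with the exact value ln((3 + √5)/2) ≈ 0.9624:

```python
# Lyapunov exponent of the Arnold cat map (x, y) -> (2x + y, x + y) mod 1.
# The exponent equals ln of the largest eigenvalue of [[2, 1], [1, 1]].
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 1.0]])
exact = np.log(np.linalg.eigvalsh(M).max())          # ln((3 + sqrt(5)) / 2) ~ 0.9624

# Numerical estimate: grow a tangent vector along an orbit, renormalizing each step.
rng = np.random.default_rng(1)
x = rng.random(2)
v = rng.random(2)
total = 0.0
steps = 10_000
for _ in range(steps):
    x = (M @ x) % 1.0            # the map itself (the linear tangent dynamics do not depend on x)
    v = M @ v                    # tangent-vector dynamics
    norm = np.linalg.norm(v)
    total += np.log(norm)
    v /= norm
print(exact, total / steps)      # both ~ 0.9624
```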
A.1 Explicit Numerical Example
How to verify this example.
The following table provides concrete complexity values for a 2-bit classical + 2-qubit quantum system. These values can be computed using standard tools (Kolmogorov complexity estimators, entropy calculators, persistent homology software).
Example 1 (Verifiable 2+2 System).
System states:
-
S_0: Classical bits = 00, Quantum state = |00⟩
-
S_1: Classical bits = 11, Quantum state = (|00⟩ + |11⟩)/√2 (Bell state)
-
S_2: Classical bits = 10, Quantum state = I/4 (maximally mixed)
-
S_3: Classical bits = 01, Quantum state after a 3-step quantum walk with increased OTOC
Complexity values (normalized to [0,1]):
| System | C_A | C_I | C_D | C_G |
|---|---|---|---|---|
| S_0 | 0.1 | 0.0 | 0.0 | 0.2 |
| S_1 | 0.4 | 0.3 | 0.1 | 0.2 |
| S_2 | 0.4 | 0.6 | 0.2 | 0.3 |
| S_3 | 0.3 | 0.5 | 0.7 | 0.4 |
| S_4 | 0.1 | 0.2 | 0.3 | 0.8 |
Morphisms:
-
φ_A : S_0 → S_1: Hadamard on qubit 1, CNOT(1,2), creates the Bell state (circuit depth: 2 gates)
-
φ_I : S_1 → S_2: Classical XOR operation conditioned on a quantum measurement creates correlations
-
φ_D : S_2 → S_3: Quantum walk evolution with 3 steps increases scrambling
-
φ_G : S_3 → S_4: Embed in a triangulated 2-complex, project back (increases Betti numbers)
Verification: Each morphism strictly increases its target complexity:
C_A(S_1) − C_A(S_0) = 0.3 > 0    (23)
C_I(S_2) − C_I(S_1) = 0.3 > 0    (24)
C_D(S_3) − C_D(S_2) = 0.5 > 0    (25)
C_G(S_4) − C_G(S_3) = 0.4 > 0    (26)
while returning algorithmic complexity to its initial value: C_A(S_4) = C_A(S_0) = 0.1.
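The tabulated claims can be checked directly. The sketch below (ours) encodes the table's vectors and verifies that each morphism strictly raises its target pillar while the algorithmic component returns to its initial value:

```python
# Verification of the tabulated complexity vectors (order: C_A, C_I, C_D, C_G).
table = {
    "S0": (0.1, 0.0, 0.0, 0.2),
    "S1": (0.4, 0.3, 0.1, 0.2),
    "S2": (0.4, 0.6, 0.2, 0.3),
    "S3": (0.3, 0.5, 0.7, 0.4),
    "S4": (0.1, 0.2, 0.3, 0.8),
}
A, I, D, G = range(4)
edges = [("S0", "S1", A), ("S1", "S2", I), ("S2", "S3", D), ("S3", "S4", G)]

for src, dst, pillar in edges:
    # each edge strictly raises its target pillar
    assert table[dst][pillar] > table[src][pillar], (src, dst)
# algorithmic cost returns to its baseline value along the cycle
assert table["S4"][A] == table["S0"][A]
print("per-pillar increases and the algorithmic return-to-baseline verified")
```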
A.2 Visualization of Improvement Cycle
Appendix B Corrected Quantum Cycle
Why the original quantum example failed.
An earlier version claimed to return exactly to the initial state after unitary operations, violating unitarity. Here we provide a corrected version where the cycle returns to an equivalent state (same complexity) but not the identical state.
Proposition 2 (Quantum Improvement Cycle).
For a 4-qubit system, there exists a cycle of unitary operations creating strict improvements in different complexity pillars while returning to an equivalent (but not identical) state.
Proof.
Consider the 4-qubit Hilbert space H = (C²)⊗4.
State sequence:
|ψ_0⟩ = |0000⟩    (27)
|ψ_1⟩ = U_A |ψ_0⟩    (28)
|ψ_2⟩ = U_I |ψ_1⟩    (29)
|ψ_3⟩ = U_D |ψ_2⟩    (30)
|ψ_4⟩ = U_G |ψ_3⟩    (31)
where |ψ_4⟩ has the same algorithmic complexity as |ψ_0⟩ (both describable by a small number of gates from the reference state |0000⟩) but lives in a stabilizer code subspace with enhanced geometric structure (non-trivial homology in the graph-state representation).
Explicit Hamiltonians:
-
U_A: a shallow preparation circuit (3 gates)
-
U_I: an entangling evolution (increases mutual information between subsystems)
-
U_D: a kicked Ising-type Hamiltonian applied stroboscopically (Floquet evolution)
-
U_G: a stabilizer code encoding that preserves algorithmic simplicity (still a constant number of gates to prepare from |0000⟩) while adding topological protection (non-trivial code distance)
The key insight: |ψ_4⟩ ≠ |ψ_0⟩, but they have identical algorithmic complexity:
-
Both are prepared by a constant number of gates from |0000⟩
-
Both have the same circuit depth (constant)
-
But |ψ_4⟩ lives in a code subspace with non-trivial stabilizer group structure
Under a relabeling that ignores the ambient code structure (focusing only on the logical qubits), |ψ_4⟩ is equivalent to |ψ_0⟩, satisfying Axiom (A5). This closes the cycle without violating unitarity. ∎
References
- [1] (2016) The complexity of quantum states and transformations: from quantum money to black holes. arXiv preprint arXiv:1607.05256.
- [2] (1989) Mathematical methods of classical mechanics. 2nd edition, Springer.
- [3] (1950) A difficulty in the concept of social welfare. Journal of Political Economy 58 (4), pp. 328–346.
- [4] (1963) Social choice and individual values. 2nd edition, Yale University Press.
- [5] (1988) Logical depth and physical complexity. In The Universal Turing Machine: A Half-Century Survey, pp. 227–257.
- [6] (2016) Holographic complexity equals bulk action?. Physical Review Letters 116 (19), pp. 191301.
- [7] (2017) An introduction to topological data analysis: fundamental and practical aspects for data scientists. arXiv preprint arXiv:1710.04019.
- [8] (2019) Quantum resource theories. Reviews of Modern Physics 91 (2), pp. 025001.
- [9] (2016) A mathematical theory of resources. Information and Computation 250, pp. 59–86.
- [10] (2006) Elements of information theory. 2nd edition, Wiley-Interscience.
- [11] (1985) Ergodic theory of chaos and strange attractors. Reviews of Modern Physics 57 (3), pp. 617–656.
- [12] (2010) Computational topology: an introduction. American Mathematical Society.
- [13] (2014) Fractal geometry: mathematical foundations and applications. 3rd edition, Wiley.
- [14] (2010) A no free lunch theorem for multi-objective optimization. Information Processing Letters 110 (10), pp. 440–443.
- [15] (1996) Information measures, effective complexity, and total information. Complexity 2 (1), pp. 44–52.
- [16] (2023) The no free lunch theorem, Kolmogorov complexity, and the role of inductive biases in machine learning. arXiv preprint arXiv:2304.05366.
- [17] (1976) Finite Markov chains. 2nd edition, Springer-Verlag. Chapter 6: Lumpability.
- [18] (1980) Statistical physics. 3rd edition, Vol. 5, Butterworth-Heinemann.
- [19] (1961) Irreversibility and heat generation in the computing process. IBM Journal of Research and Development 5 (3), pp. 183–191.
- [20] (2022) On the Arrowian impossibility of machine intelligence measures. In Artificial General Intelligence: 15th International Conference, AGI 2022, pp. 22–31.
- [21] (2019) An introduction to Kolmogorov complexity and its applications. 4th edition, Springer.
- [22] (2000) Ultimate physical limits to computation. Nature 406 (6799), pp. 1047–1054.
- [23] (1977) Characteristic Lyapunov exponents and smooth ergodic theory. Russian Mathematical Surveys 32 (4), pp. 55–114.
- [24] (2016) Lieb–Robinson bound and the butterfly effect in quantum field theories. Physical Review Letters 117 (9), pp. 091602.
- [25] (1981) Markov functions. Annals of Probability 9 (4), pp. 573–582.
- [26] (2024) Software in the natural world: a computational approach to hierarchical emergence. arXiv preprint arXiv:2402.09090 (version 2, June 2024).
- [27] (2000) Measuring information transfer. Physical Review Letters 85 (2), pp. 461.
- [28] (2001) Computational mechanics: pattern and prediction, structure and simplicity. Journal of Statistical Physics 104 (3–4), pp. 817–879.
- [29] (1948) A mathematical theory of communication. Bell System Technical Journal 27 (3), pp. 379–423.
- [30] (1962) The architecture of complexity. Proceedings of the American Philosophical Society 106 (6), pp. 467–482.
- [31] (2012) Introduction to the theory of computation. 3rd edition, Cengage Learning.
- [32] (2016) Computational complexity and black hole horizons. Fortschritte der Physik 64 (1), pp. 24–43.
- [33] (1936) On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society s2-42 (1), pp. 230–265.
- [34] (2023) Is there a finite complete set of monotones in any quantum resource theory?. Physical Review Letters 130 (24), pp. 240204.
- [35] (2010) Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515.
- [36] (1997) No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1 (1), pp. 67–82.
- [37] (2018) Multiscale transfer entropy: measuring information transfer on multiple time scales. Communications in Nonlinear Science and Numerical Simulation 62, pp. 202–212.
- [38] (2024) Scrambling dynamics and out-of-time-ordered correlators in quantum many-body systems: a tutorial. PRX Quantum 5 (1), pp. 010201.