Scalar Impossibility in Multi-Pillar Complexity Measures: Why a Single Number Cannot Capture Algorithmic, Information-Theoretic, Dynamical, and Geometric Complexity

Oksana Sudoma

ORCID: 0009-0009-8469-1382

October 25, 2025

Abstract

We prove an impossibility theorem for universal scalar complexity. Let 𝖲𝗒𝗌 denote the category of classical Markov and quantum CPTP morphisms. No scalar C can be a resource monotone on 𝖲𝗒𝗌 (following quantum resource theory conventions [8, 9], a resource monotone is a functional that does not increase under free operations, here: morphisms in 𝖲𝗒𝗌) while simultaneously being monotone with respect to operations from four distinct complexity families (algorithmic, information-theoretic, dynamical, geometric). The proof constructs a 𝖲𝗒𝗌-internal cycle with one non-invertible noise step that strictly lowers C and only isomorphisms otherwise; isomorphism invariance then contradicts the net decrease.

As a companion, we show that no scalar can be simultaneously invariant on isomorphisms and non-decreasing on a set of designated, reversible “complexity-increasing” operations drawn from each pillar. This impossibility result, analogous to Arrow’s theorem in social choice, demonstrates that vectorial reporting is necessary for multi-pillar complexity assessment: 𝐂 = (Calg, Cinfo, Cdyn, Cgeom) ∈ [0,1]⁴. More precisely, there is no single scalar functional C on physical systems that simultaneously captures (i) algorithmic/circuit cost, (ii) information content and correlations, (iii) dynamical chaos and scrambling, and (iv) geometric and topological structure, while respecting natural axioms of additivity, monotonicity, continuity, and task-universality. The obstruction arises because these four complexity aspects impose incompatible partial orders: operations that increase one type of complexity need not preserve others, enabling cycles where strict improvements in each pillar return to an equivalent system. Any scalar attempting to track all four pillars is forced into logical contradiction. This extends impossibility results from social choice theory (Arrow’s theorem), optimization (no-free-lunch), and quantum resource theory to complexity quantification. As a corollary, faithful complexity measurement requires vectors, not scalars.

1 Introduction: The Complexity Ranking Problem

Complexity has a way of producing an irresistible urge for cross-domain unification.

During 2024–2025, we attempted multiple scalar formulations linking complexity to thermal gradients and information-geometric bridges, ultimately discovering fundamental circular dependencies that motivated this formal proof. (These empirical attempts included scalar complexity fields coupled to temperature gradients, Fisher information metrics over phase space, spectral density proxies, and discrete transition counting. Each formulation either introduced circular dependencies by fitting complexity to observed transitions, produced unbounded values, or conflated distinct operational pillars. While the broader research program on multi-pillar complexity theory remains ongoing, this impossibility result resolves the empirical frustration encountered in those earlier attempts.) The theoretical result establishes why those empirical attempts necessarily failed.

Consider three systems: a quantum computer with 100 qubits running Shor’s algorithm, a hurricane’s velocity field captured at hourly intervals, and a social network graph of one million users. Which is “most complex”?

The quantum computer has high algorithmic complexity (requires many gates to prepare the state) but low topological complexity (qubits arranged in a regular grid). The hurricane has high dynamical complexity (chaotic, unpredictable) but moderate algorithmic complexity (governed by relatively simple differential equations). The social network has high geometric complexity (intricate community structure) but low dynamical complexity (changes slowly over time). We can assign the three systems values C1, C2, and C3, but how should these numbers relate?

The universal scalar dream.

Imagine having a single number C(X) that captures all aspects of complexity for any system X. This would enable direct comparisons: if C(A)>C(B), then system A is unambiguously more complex. Such a measure would simplify resource estimation, optimization, and theoretical predictions across physics, information theory, and computation. The dream is seductive and intuitive.

Why we should be suspicious.

Impossibility results in related domains suggest caution. In social choice theory, Arrow’s theorem [3, 4] shows that no preference-aggregation rule can satisfy all fairness criteria simultaneously. Our result parallels Arrow’s in that both demonstrate fundamental incompatibilities between multiple ordering criteria. However, where Arrow’s theorem concerns aggregation of subjective preferences under fairness constraints (independence of irrelevant alternatives, unanimity, non-dictatorship), our result concerns objective structural properties under monotonicity requirements. The structural parallel: both show that multiple incompatible partial orders cannot be compressed into a single total order without contradiction.

In optimization, no-free-lunch theorems [36, 16] prove all algorithms have identical average performance across all problems. In multi-objective optimization [14], conflicting objectives cannot be simultaneously optimized by a single scalar without task-dependent weights. In quantum resource theory [8, 9], different choices of “free operations” lead to incompatible resource orderings, and no finite complete set of monotones exists [34]. Similarly, universal intelligence measures face Arrowian impossibilities [20].

These results share a common structure: multiple incommensurable aspects resist compression into a single number. Complexity, we will show, follows this pattern.

Four pillars of complexity.

We organize widely used complexity notions into four families, each capturing distinct structural properties:

  • Algorithmic/Circuit Complexity (Calg): How many computational steps or gates are required to prepare or describe a system? Examples include Kolmogorov complexity [21] (minimal program length) and quantum circuit depth [1] (minimal gate count).

  • Information-Theoretic Complexity (Cinfo): How much information is shared between components at different scales? Examples include Shannon entropy [10], multiscale/transfer entropy [27, 37], and partial information decomposition [35].

  • Dynamical Complexity (Cdyn): How chaotic or scrambling is the system’s temporal evolution? Classical measures include Lyapunov exponents and Kolmogorov–Sinai entropy [11, 23]. Quantum measures include out-of-time-order correlators (OTOCs) and spectral form factor [24, 38].

  • Geometric/Topological Complexity (Cgeom): What is the richness of the system’s spatial or state-space structure? Examples include persistent homology [12, 7] (counting topological features at multiple scales) and fractal dimension [13].

Each pillar comes with natural operations that increase complexity in that domain: adding gates increases Calg, mixing processes increase Cinfo, chaotic advection increases Cdyn, and topological enrichment increases Cgeom. The question is: can a universal scalar C be monotone under all such operations simultaneously?

Our result.

We prove the answer is no. We formalize natural axioms that any universal complexity scalar should satisfy (additivity, monotonicity, continuity, task-universality) and show these axioms are mutually incompatible. The impossibility arises not from measurement limitations but from fundamental structural conflict: the four pillars impose incompatible orderings on systems. You can construct cycles where each step increases complexity in its respective pillar, yet the cycle returns to the starting point, forcing any monotone scalar into logical contradiction.

Consequence: vectorial complexity is mandatory.

The result implies that faithful complexity quantification requires vectors, not scalars: 𝐂 = (Calg, Cinfo, Cdyn, Cgeom) ∈ [0,1]⁴. Task-specific scalarizations (weighted combinations) remain valid when one pillar dominates or when weights reflect problem-specific priorities, but task-universal scalars satisfying all monotonicity axioms cannot exist.

Main contributions.

  1. Sys-internal no-go result: Proof uses only Markov/CPTP morphisms (no projections or forgetting operations). Geometry is an observer evaluation functional, not state structure.

  2. Explicit 5-step construction: Cycle with one non-invertible noise step (strictly lowers C) and isomorphisms only (preserve C while raising pillar observers). Complete constructive proof with concrete morphisms.

  3. Reproducible toy implementation: Standalone Python code (disk_annulus_cycle_toy.py) demonstrating the disk→annulus geometric isomorphism Ta, the Arnold cat map (dynamics), noise injection, and complexity measurements. One-command execution with fixed random seed.

  4. Dual-theorem presentation: Main theorem (resource-style impossibility on all Sys) complemented by operational corollary (impossibility for designated reversible increase-ops). Captures both orthodox resource theory and original “pillar-monotone” intuition.

  5. Necessity of vectors: Corollary establishes that multi-pillar complexity requires vectorial reporting 𝐂 ∈ [0,1]⁴, not scalars. Task-specific weighted sums remain valid; only task-universal scalars satisfying all axioms are impossible.

Roadmap.

Section 2 establishes the mathematical framework: we define the system category, complexity measures, and morphism families formally. Section 3 states axioms for a universal scalar. Section 4 proves the main impossibility theorem. Section 5 discusses consequences and practical alternatives. Appendices provide explicit verifiable examples.

2 Mathematical Framework: Formalizing Complexity

Why we need formalism.

Before proving impossibility, we must define precisely what we mean by “complexity measure,” “improvement operation,” and “system.” The following framework provides the mathematical structure needed for rigorous argumentation. Readers familiar with category theory will recognize this as a functorial approach to complexity; others can treat it as systematic bookkeeping of systems and transformations.

2.1 System Category

Definition 1 (System Category).

Let 𝖲𝗒𝗌 be the category whose:

  • Objects are triples (X,𝒮X,λ) where:

    • X is a measurable space with measure μX

    • 𝒮X is either a classical state space (probability measures on X) or a quantum state space (density operators on a Hilbert space ℋX)

    • λ ∈ (0,∞) is a scale parameter

  • Morphisms are measure-preserving maps (classical) or completely positive trace-preserving (CPTP) maps (quantum)

Scale Dependence.

The scale parameter λ determines the observational resolution at which complexity is evaluated (e.g., circuit depth cutoff, correlation length scale, pixel size for images). The impossibility result holds at each fixed λ—no choice of scale can rescue a scalar complexity measure. While a complete scale-theoretic treatment (complexity families C(·,λ) for λ ∈ Λ) is natural, we fix λ = λ0 throughout for notational simplicity. The extension to parametric families is immediate but deferred to future work.

Interpretation.

This definition encompasses both classical systems (e.g., bit strings, probability distributions, dynamical systems) and quantum systems (e.g., qubit states, density matrices). The scale parameter λ allows complexity to depend on resolution: entropy of an image depends on pixel size, topological features of a network depend on interaction radius. Morphisms are structure-preserving transformations.

2.2 Complexity Measures

Definition 2 (Individual Complexity Measures).

For a system X𝖲𝗒𝗌 at scale λ, we define normalized complexity measures:

Calg(X,λ) = Kλ(X) / Kmax(λ) ∈ [0,1]   (1)
Cinfo(X,λ) = Iλ(X) / Imax(λ) ∈ [0,1]   (2)
Cdyn(X,λ) = hKS(X,λ) / hmax(λ) ∈ [0,1]   (3)
Cgeom(X,λ) = Σk ∫ |PHk(X,λ′)| dλ′ / PHmax(λ) ∈ [0,1]   (4)

where K is Kolmogorov complexity (or circuit length for quantum systems), I is multi-information, hKS is Kolmogorov–Sinai entropy (or entanglement entropy growth rate for quantum systems), and PHk is the k-th persistent homology module. Denominators are scale-dependent normalization constants chosen to ensure boundedness. (Normalization constants are scale-dependent: for n-bit strings at scale λ, Kmax(λ) = n + O(log n); for Shannon entropy, Imax(λ) = log|Ω|; for KS entropy, hmax(λ) = log|phase space|; for persistent homology, PHmax(λ) can be taken as the total Betti number sum times the maximum feature lifetime at filtration radius λ. Remark on normalization robustness: the exact form of these constants does not affect the impossibility result. Any bounded normalization ensuring Cℓ ∈ [0,1] suffices; different choices may alter the threshold values δℓ but cannot eliminate the fundamental contradiction, since δℓ > 0 for any positive thresholds.)

Definition 3 (Complexity Vector).

The complexity vector of system X at scale λ is:

𝐂(X,λ) = (Calg, Cinfo, Cdyn, Cgeom) ∈ [0,1]⁴

So what?

These definitions formalize the intuition that complexity has multiple facets. Normalization to [0,1] enables comparison across scales and system types. The key question: can these four components be compressed into a single number without losing essential information?
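To make the vector report concrete, here is a minimal Python sketch; the class name ComplexityVector and the normalized-entropy stand-in for the Cinfo observer are our illustrative choices, not part of the formal definitions.

```python
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class ComplexityVector:
    """Four-pillar complexity report; each component normalized to [0, 1]."""
    c_alg: float
    c_info: float
    c_dyn: float
    c_geom: float

def c_info_entropy(p: list[float]) -> float:
    """Normalized Shannon entropy H(p)/log|Omega| as a toy C_info observer."""
    h = -sum(q * math.log2(q) for q in p if q > 0)
    return h / math.log2(len(p))

# A uniform 4-outcome source saturates the information component
print(ComplexityVector(c_alg=0.0, c_info=c_info_entropy([0.25] * 4),
                       c_dyn=0.0, c_geom=0.0))
```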

2.3 Morphism Families and Partial Orders

What are improvement operations?

In each complexity domain, certain operations are known to increase complexity. Adding gates to a quantum circuit increases circuit complexity. Mixing processes increase entropy. Chaotic advection increases dynamical entropy. Topological enrichment adds persistent homology features. We formalize these as morphism families.

Definition 4 (Morphism Families).

A morphism family 𝖬Mor(𝖲𝗒𝗌) is a collection of morphisms equipped with:

  • A complexity functional Cℓ : 𝖲𝗒𝗌 → [0,1]

  • A monotonicity property: f ∈ 𝖬ℓ ⟹ Cℓ(f(X)) ≥ Cℓ(X)

  • A constructive characterization (see below)

We define four fundamental families:

1. Algorithmic morphisms 𝖬alg:

  • Characterization: f ∈ 𝖬alg if f = G ⊗ id, where G is drawn from:

    • Classical: Pseudorandom generators or one-way functions with security parameter κ ≥ 128 bits

    • Quantum: Unitary circuits with T-depth d0 = Ω(log n) for n-qubit systems

  • Complexity measure: Calg(X) = min{|p| : U(p) = X}/|X| where U is a universal Turing machine (classical), or Calg(X) = D(X)/Dmax where D is circuit depth with gate set {H, T, CNOT} (quantum)

  • Decidable approximation: For quantum systems, circuit complexity with a fixed gate set provides a computationally tractable upper bound; a compression-based classical analogue is sketched after this definition

2. Information morphisms 𝖬info:

  • Characterization: f𝖬info if f implements:

    • Classical: Error-correcting codes (syndrome generation) or mixing transformations that create correlations

    • Quantum: Entangling operations with Schmidt rank increase Δr ≥ 1, or partial measurements creating classical-quantum correlations

  • Complexity measure: Cinfo(X) = I(X1 : X2 : ⋯ : Xn) / log|X| (multipartite mutual information normalized by system size)

  • Operational test: Correlation increase measurable via mutual information between subsystems

3. Dynamical morphisms 𝖬dyn:

  • Characterization: f𝖬dyn if f=Ut is time evolution under:

    • Classical: Discrete maps with largest Lyapunov exponent λmax>0 (e.g., Arnold cat map, baker’s map)

    • Quantum: Non-integrable Hamiltonians with level-spacing ratio r ∈ [0.39, 0.6] indicating the chaotic regime (e.g., kicked Ising model)

  • Complexity measure:

    • Classical: Cdyn=hKS/log|Ω| (normalized Kolmogorov-Sinai entropy)

    • Quantum: Cdyn = lim_{t→∞} S(t)/Smax, where S(t) is entanglement entropy (the saturation value quantifies scrambling)

  • Unified framework: Both measure mixing/scrambling rate in their respective phase spaces

4. Geometric morphisms 𝖬geom:

  • Characterization: f𝖬geom if f implements:

    • Simplicial maps increasing Betti numbers: βk(f(X)) > βk(X) for some homology dimension k

    • Embeddings into higher-dimensional complexes with non-trivial topology (e.g., adding vertices/edges to create non-contractible cycles)

  • Complexity measure: Cgeom(X) = Σ_{k=0}^{dim X} ∫₀^∞ |PHk(X,t)| dt / V(X), where PHk is the k-th persistent homology module and V(X) is a volume normalization

  • Partial order: Persistence modules are ordered via interleaving distance; f increases Cgeom if it adds features or extends feature lifetimes
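The “decidable approximation” and “operational test” bullets above suggest computable proxies. The sketch below is our toy illustration, with zlib compression standing in for Kolmogorov complexity (an upper bound) and pairwise mutual information standing in for the multipartite quantity; none of these function names come from the paper’s toy script.

```python
import zlib
from collections import Counter
from math import log2

def c_alg_proxy(bits: str) -> float:
    """Compressed length / raw length: an upper-bound proxy for C_alg = K(X)/K_max."""
    raw = bits.encode()
    return min(1.0, len(zlib.compress(raw, 9)) / len(raw))

def mutual_info_bits(xs: str, ys: str) -> float:
    """Plug-in estimate of I(X:Y) in bits between two aligned binary streams,
    a pairwise stand-in for the multipartite C_info observer."""
    n = len(xs)
    joint, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())

constant = "0" * 64
pseudo = "1011010001110010" * 4          # short seed expanded to 64 bits
print(c_alg_proxy(constant), "<", c_alg_proxy(pseudo))  # constant compresses better
print(mutual_info_bits(pseudo, pseudo))  # perfectly correlated streams: 1.0 bit
```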

Remark 1 (Morphism Family Characterizations).

The characterizations in Definition 4 are descriptive rather than exhaustive formal specifications. The exact membership criteria for each family 𝖬ℓ do not affect the impossibility result—what matters is that:

  1. Such families exist (demonstrated constructively in Appendix A)

  2. They induce incompatible partial orders on 𝖲𝗒𝗌 (Lemma 1)

  3. Each family has morphisms that strictly increase its associated complexity measure

Any partition of 𝖲𝗒𝗌 morphisms into four disjoint families satisfying these properties yields the same impossibility. The specific examples (PRGs, Arnold cat, syndrome encoding, disk→annulus) are illustrative instances, not definitional requirements.

Definition 5 (Faithfulness to Operational Pillars).

A scalar complexity measure C is faithful to an operational pillar ℓ ∈ {alg, info, dyn, geom} if it satisfies monotonicity (Axiom A2) for all morphisms in 𝖬ℓ. That is, for every f ∈ 𝖬ℓ:

  1. Weak monotonicity: C(f(X)) ≥ C(X)

  2. Strict monotonicity for strict improvements: if f is a strict improvement (Definition 6), then C(f(X)) > C(X) + δℓ

A scalar is faithful to all four pillars if it is faithful to each pillar individually.

Remark 2 (Closure Under Composition).

Each morphism family 𝖬ℓ is assumed closed under composition: if f, g ∈ 𝖬ℓ, then g∘f ∈ 𝖬ℓ whenever the composition is defined. This ensures that the relations defined in Definition 7 are transitive and thus genuine partial orders.

Remark 3.

These morphism families generalize the notion of “free operations” in quantum resource theories [8, 9], where different choices of allowed operations lead to incompatible resource orderings. Here, different complexity notions privilege different transformations as “complexity-increasing.”

Definition 6 (Strict Improvements).

A morphism f ∈ 𝖬ℓ is a strict improvement in pillar ℓ if:

  • For 𝖬alg: K(f(X)) − K(X) ≥ δalg for a fixed threshold δalg > 0

  • For 𝖬info: I(f(X)) − I(X) ≥ δinfo for a fixed threshold δinfo > 0

  • For 𝖬dyn: hKS(f(X)) − hKS(X) ≥ δdyn for a fixed threshold δdyn > 0

  • For 𝖬geom: ∫₀^∞ (|PHk(f(X))| − |PHk(X)|) dλ ≥ δgeom for δgeom > 0

Remark 4 (Threshold Justification and Uniformity).

The strict improvement thresholds δ>0 serve three essential purposes:

1. Distinguishing signal from noise: In any finite-precision implementation, complexity changes below some threshold ϵ are indistinguishable from numerical error. We require δ>ϵ to ensure improvements are measurable.

2. Ensuring cycle non-triviality: The contradiction in Theorem 1 requires δ>0. With δ=0, infinitesimal improvements could accumulate without contradiction.

3. Uniform vs. adaptive thresholds: While one might argue for system-dependent thresholds δℓ(X), we can always take:

δℓ = inf_{X ∈ 𝖲𝗒𝗌} δℓ(X)

For finite systems or bounded complexity classes, this infimum is positive. For our explicit constructions (Appendix A), we demonstrate concrete values:

  • δalg=0.2 (adding 2 gates to a circuit)

  • δinfo=0.2 (creating 2 bits of mutual information)

  • δdyn=0.15 (one unit of Lyapunov exponent)

  • δgeom=0.3 (adding one non-trivial cycle)

These values are not special; any positive thresholds yield the impossibility. The key insight is that strict improvements in different pillars can be composed without mutual enhancement, enabling the cycle construction.

Definition 7 (Pillar-Specific Partial Orders).

Each morphism family induces a partial order on 𝖲𝗒𝗌:

X ⪯ℓ Y ⟺ ∃ f ∈ 𝖬ℓ : Y = f(X)

with strict order X ≺ℓ Y when f is a strict improvement.

Lemma 1 (Non-Commutativity).

The four partial orders ⪯ℓ, ℓ ∈ {alg, info, dyn, geom}, do not commute. Specifically, there exist systems where:

X ≺alg Y and Y ≺info Z but X ⊀alg Z
Proof.

Consider X = |0⟩, Y = QFT|0⟩ (uniform superposition), Z = |GHZ⟩. Then X ≺alg Y (the QFT increases circuit complexity) and Y ≺info Z (GHZ has maximal entanglement), but increasing information does not generally preserve algorithmic simplicity: the composition requires additional gates beyond the sum of the individual operations. ∎
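The states in this proof can be checked with a few lines of numpy (our conventions; entanglement entropy across a one-qubit cut serves as the information observer). The uniform superposition QFT|0⟩ carries no entanglement across any cut, while GHZ carries one ebit, so the two pillars pull in different directions:

```python
import numpy as np

def cut_entropy(psi: np.ndarray, cut: int, n: int) -> float:
    """Von Neumann entanglement entropy (bits) across first `cut` qubits | rest."""
    s = np.linalg.svd(psi.reshape(2**cut, 2**(n - cut)), compute_uv=False)
    p = s[s > 1e-12] ** 2
    return float(-(p * np.log2(p)).sum())

n = 3
uniform = np.full(2**n, 2 ** (-n / 2))        # QFT|000>: uniform superposition
ghz = np.zeros(2**n); ghz[0] = ghz[-1] = 2 ** -0.5

print(cut_entropy(uniform, 1, n))  # 0.0 -> product state, low C_info
print(cut_entropy(ghz, 1, n))      # 1.0 -> maximal bipartite entanglement
```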

The key insight.

These partial orders conflict. You can traverse from system A to B to C to D and back to A, with each step increasing complexity in its respective pillar. This cycle structure will force our contradiction.

Remark 5 (Finite vs. Infinite Systems).

The impossibility holds for both finite and infinite systems:

  • Finite systems: The cycle construction works with n-bit strings and k-qubit states for any n, k ≥ 2.

  • Infinite systems: The result extends to separable Hilbert spaces and measure spaces via limiting arguments.

  • Discrete systems: Replace continuous morphisms with discrete maps; the incompatibility persists.

The key requirement is that the system class is rich enough to support all four types of morphisms.

2.4 Relationship to Other Complexity Notions

Why focus on these four pillars?

A natural question arises: are there other fundamental complexity types beyond our four pillars? Several important complexity notions have been proposed in the literature. We now examine how they relate to our framework.

Logical Depth (Bennett).

Bennett’s logical depth [5] measures the computational time required by a near-minimal-length program to generate an object. This distinguishes “organized complexity” (high depth, high Kolmogorov complexity) from random strings (low depth, high K) and simple patterns (low depth, low K).

For our framework, logical depth represents a time-extended variant of algorithmic complexity. In quantum systems, circuit depth (number of sequential layers) already captures this temporal dimension. For classical systems, depth corresponds to the time-space product in computation. Since both time and space are computational resources within the algorithmic pillar, logical depth does not constitute an independent fifth pillar but rather a refined view of Calg that distinguishes parallel versus sequential structure.

Effective Complexity (Gell-Mann & Lloyd).

Gell-Mann and Lloyd’s effective complexity [15] separates the algorithmic content of a system’s regularities from its random information: Itotal=Ceff+Srandom. This measure aims to quantify “meaningful structure” as opposed to noise.

However, effective complexity is fundamentally task-dependent: determining what counts as “regular” versus “random” requires choosing a model class or hypothesis space. Different choices yield different decompositions. For DNA, one might treat coding genes as “regular” and neutral mutations as “random,” but this choice reflects biological function, not intrinsic structure. Since our Axiom (A4) demands task-universality, effective complexity falls outside our framework—it is a valuable measure for specific contexts but not a universal structural property.

Statistical Complexity (Crutchfield & Shalizi).

The statistical complexity of a stochastic process [28] is the Shannon entropy of its ε-machine’s causal states: Cμ=H(causal states). This quantifies the minimum memory required for optimal prediction.

Statistical complexity is closely related to our information-theoretic pillar. The entropy of hidden states can be expressed as conditional mutual information between past and future observations: Cμ=I(past;future|present). This is a specific instance of multiscale information quantification. For our purposes, statistical complexity is subsumed by Cinfo when we account for information at different temporal scales and include hidden-state structure. It does not require a separate pillar.

Thermodynamic Complexity (Landauer, Bennett, Lloyd).

Physical systems exhibit complexity related to energy dissipation and entropy production. Landauer’s principle [19] shows that erasing information dissipates at least kTln2 energy per bit. Lloyd [22] derived ultimate physical limits on computation as a function of energy and time.

Thermodynamic complexity measures resource costs (energy, time) rather than intrinsic structural properties. Two systems with identical information content and algorithmic structure can have different thermodynamic costs depending on their physical implementation (reversible vs. irreversible computation). Since our framework targets task-universal structural complexity independent of physical resources, thermodynamic measures are orthogonal to our four pillars. They answer “how much does it cost?” rather than “what is its structure?”—a valid but different question.

Hierarchical & Modular Complexity (Simon).

Simon’s work on hierarchy [30] emphasizes organizational levels and near-decomposability: systems with clearly separated modules and hierarchical structure. Modularity quantifies the strength of intra-module versus inter-module connections.

Hierarchical structure is partially captured by our geometric pillar. Persistent homology at multiple scales naturally identifies hierarchical organization: features appearing at coarse scales correspond to high-level modules, while fine-scale features represent substructure. The Betti numbers βk at different filtration values encode the depth and breadth of hierarchical decomposition. While persistent homology does not explicitly measure modularity in the graph-theoretic sense, it provides a topological proxy for multi-level organization. We thus view hierarchical complexity as an aspect of Cgeom rather than a separate pillar.

Summary and Justification.

Our analysis shows that:

  • Logical depth ⊂ algorithmic complexity (a time-extended view of Calg)

  • Effective complexity is task-dependent, violating Axiom (A4)

  • Statistical complexity ⊂ information-theoretic complexity (hidden-state information within Cinfo)

  • Thermodynamic complexity measures resource costs, not intrinsic structure (orthogonal to our framework)

  • Hierarchical complexity ⊂ geometric complexity (captured by multiscale persistent homology)

The four pillars (Calg,Cinfo,Cdyn,Cgeom) thus form a parsimonious basis for task-universal structural complexity. Other measures either reduce to combinations of these four, require task-specific definitions, or address resource costs rather than structure. This justifies our focus and shows that expanding to five or six pillars would either introduce redundancy or violate task-universality.

3 Axioms for a Universal Complexity Scalar

Deriving Axioms from First Principles.

Our axioms are not arbitrary but emerge from fundamental requirements for any meaningful complexity measure. They draw from established principles in physics, information theory, and computation:

From Physics:

  • Additivity (A1): Extensive quantities in thermodynamics (energy, entropy) add for independent systems [18]. Since complexity should scale with system size, it must be extensive.

  • Continuity (A3): Physical quantities vary continuously under small perturbations (except at phase transitions) [2]. Discontinuous complexity would imply unphysical “jumps.”

From Information Theory:

  • Monotonicity (A2): Shannon’s data processing inequality: processing cannot increase information [10]. Similarly, “free” operations should not increase complexity.

  • Normalization (A6): Information measures are typically bounded (entropy ≤ log|Ω|), enabling comparison across systems [29].

From Computation:

  • Task-Universality (A4): Church-Turing thesis suggests universal computation [33]; we seek analogous universal complexity.

  • Relabeling Invariance (A5): Complexity should not depend on variable names in a program or qubit labels in a circuit [31].

These axioms thus represent the minimal requirements for a complexity measure to be physically meaningful, information-theoretically sound, and computationally invariant. That they lead to impossibility reveals a fundamental tension between universality and monotonicity.

Formal Axioms.

A universal complexity scalar is a functional C : 𝖲𝗒𝗌 → ℝ intended to summarize 𝐂. We require:

(A1) Additivity:

For independent composites, where independence means:

  • Classical: X ⊗ Y with μ_{X⊗Y} = μX × μY (product measure)

  • Quantum: X ⊗ Y with ρ_{X⊗Y} = ρX ⊗ ρY (tensor product)

we require C(X ⊗ Y) = C(X) + C(Y).

(A2a) Weak Monotonicity:

For every morphism f in any pillar family 𝖬ℓ:

C(f(X)) ≥ C(X)

(A2b) Strict Monotonicity:

For every strict improvement f (as defined in Definition 6):

C(f(X)) > C(X) + δℓ

where δℓ > 0 is the pillar-specific threshold.

Why both weak and strict monotonicity?

Axiom A2a alone permits degenerate solutions like C(X)=constant for all X, which satisfies weak monotonicity trivially but provides no information. Axiom A2b (strict monotonicity under improvements) eliminates such degeneracies by requiring that genuine complexity-increasing operations produce measurable scalar increases. Together, A2a and A2b enforce that C respects the partial order structure while remaining responsive to operational differences.

(A3) Scale-Continuity:
(Footnote: Axiom A3 (continuity in λ) is not required for Theorem 1, which holds even for discrete parameter spaces (see Proposition 1 and Remark 7). We include it for mathematical completeness and to facilitate future extensions to continuous-scale families.)

C is continuous with respect to the product topology on 𝖲𝗒𝗌 × ℝ₊ where:

  • 𝖲𝗒𝗌 has the weak-* topology (classical) or trace norm topology (quantum)

  • The scale parameter λ has the standard topology on ℝ₊

Explicitly: ∀ε > 0, ∃δ > 0 such that d𝖲𝗒𝗌(X,Y) + |λX − λY| < δ implies |C(X,λX) − C(Y,λY)| < ε.

(A4) Task-Universality:

C is defined on all (X,λ) ∈ 𝖲𝗒𝗌 × ℝ₊ without reference to any specific task, algorithm, or measurement protocol beyond the normalization convention.

Relation to No-Free-Lunch.

Task-universality (A4) demands that C work across all systems without task-specific weighting—analogous to seeking a universal optimization algorithm without problem-specific tuning. The no-free-lunch theorem [36] shows that all algorithms have identical average performance across all problems; similarly, we show that task-universal complexity scalars satisfying all monotonicity axioms cannot exist. The key distinction: NFL concerns algorithm performance; ours concerns structural quantification.

(A5) Relabeling Invariance:

A bijection σ : X → Y is a relabeling if and only if:

  1. It permutes indices of subsystems (qubits, bits, vertices) while preserving all structure

  2. For classical systems: σ is a permutation of coordinate labels

  3. For quantum systems: σ corresponds to a permutation matrix in the computational basis

  4. Crucially: σ preserves ALL complexity measures: Cℓ(σ(X)) = Cℓ(X) for all ℓ ∈ {alg, info, dyn, geom}

For any such relabeling: C(σ(X))=C(X).

What is NOT a relabeling:

  • Embeddings into higher-dimensional spaces (changes Cgeom)

  • Basis changes that alter entanglement structure (changes Cinfo)

  • Time evolution or dynamics (changes Cdyn)

  • Compression or encoding (changes Calg)

This axiom ensures complexity does not depend on arbitrary labeling conventions (e.g., calling the first qubit “A” vs “1”) but does NOT allow ignoring structural modifications that affect any complexity pillar.

(A6) Normalization:

0 ≤ C(X) < ∞ with C(∅) = 0, where ∅ is the trivial system.

So what do these axioms mean?

(A1) says that independent systems add their complexities—like energies in thermodynamics or entropies in information theory. (A2a,b) ensure that complexity-increasing operations actually increase the scalar; the distinction between weak and strict monotonicity prevents trivial constant functions. (A3) prevents pathological discontinuities: small perturbations shouldn’t cause complexity to jump. (A4) demands true universality: no task-specific tuning or weights. (A5) ensures complexity doesn’t depend on arbitrary labeling choices (e.g., permuting qubit labels). (A6) provides basic regularity and normalization.

How to verify these in practice.

To check if a proposed C satisfies these axioms:

  • Construct two independent systems and verify additivity (A1)

  • Find a morphism that increases algorithmic complexity and check if C increases (A2)

  • Perform small perturbations and verify continuity (A3)

  • Demonstrate the measure works across different problem domains without recalibration (A4)

  • Permute system labels and check invariance (A5)

  • Verify boundedness and normalization (A6)
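As a minimal illustration of the first and fifth checks, the sketch below (ours) uses unnormalized Shannon entropy as the candidate C; it passes (A1) on product distributions and (A5) under outcome permutations, and Theorem 1 then says any such candidate must still violate monotonicity (A2) for some pillar.

```python
import itertools
import math

def H(p):
    """Shannon entropy in bits: our candidate scalar C for the checks below."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def product(p, q):
    """Independent composite (the premise of A1): the product distribution."""
    return [a * b for a in p for b in q]

p, q = [0.5, 0.25, 0.25], [0.9, 0.1]

# (A1) additivity on independent composites: H(p x q) = H(p) + H(q)
assert abs(H(product(p, q)) - (H(p) + H(q))) < 1e-9

# (A5) relabeling invariance: permuting outcome labels leaves H unchanged
for perm in itertools.permutations(p):
    assert abs(H(list(perm)) - H(p)) < 1e-9

print("entropy passes A1 and A5; by Theorem 1 it must fail A2 for some pillar")
```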

3.1 Frequently Asked Questions: Proof Clarifications

Q1: If Ta is an isomorphism, how can geometric complexity go up?

A: Axiom (A2) constrains the universal scalar C only—pillar observers {Calg, Cinfo, Cdyn, Cgeom} are evaluation functionals not constrained by monotonicity. Isomorphisms leave C unchanged (via A2 applied to ϕ and ϕ⁻¹), but pillar observers can change. Specifically, Cgeom(Ta(X)) rises under the fixed Vietoris-Rips pipeline G even though C(Ta(X)) = C(X). The observers measure different aspects; C attempts to unify them all.

Q2: Why not make all steps non-invertible to force C down everywhere?

A: One strict decrease suffices for contradiction. The other steps showcase pillar incompatibility without touching C—isomorphisms demonstrate that different pillars can be strictly improved while keeping C constant. This highlights the fundamental tension: C cannot track all pillar changes simultaneously.

Q3: Is Axiom (A2) too strong? Why require monotonicity on all Sys?

A: (A2) is the standard resource-style requirement used in quantum information theory (entanglement monotones under LOCC) and thermodynamics (entropy under reversible processes). Our impossibility shows that even this generous orthodox notion fails when asked to be faithful to all four pillars. Weaker requirements would make the impossibility even stronger.

4 Main Result: The Impossibility Theorem

Lemma 2 (Existence of Improvement Paths).

For any system category 𝖲𝗒𝗌 containing classical bit strings of length n ≥ 8 and quantum states on k ≥ 3 qubits, there exist systems (X0, X1, X2, X3, X4) and morphisms (falg, finfo, fdyn, fgeom) such that:

  1. falg ∈ 𝖬alg with Calg(X1) > Calg(X0) + δalg

  2. finfo ∈ 𝖬info with Cinfo(X2) > Cinfo(X1) + δinfo

  3. fdyn ∈ 𝖬dyn with Cdyn(X3) > Cdyn(X2) + δdyn

  4. fgeom ∈ 𝖬geom with Cgeom(X4) > Cgeom(X3) + δgeom

  5. There exists a projection πgeom : X4 → X0 that forgets geometric structure

where the improvements do not automatically increase other complexity measures.

Proof.

We construct explicitly. Let X0 = (C0, Q0) with C0 = 00000000 (8-bit string) and Q0 = |000⟩ (3-qubit state).

Step 1 (Algorithmic): Apply a pseudorandom generator G : {0,1}⁴ → {0,1}⁸ to obtain C1 = G(0110) and prepare the GHZ state Q1 = (|000⟩ + |111⟩)/√2. The Kolmogorov complexity increases from K(X0) = O(1) to K(X1) = O(log n) + 3 (seed plus quantum circuit), while information complexity remains low (no classical-quantum correlation yet).

Step 2 (Information): Create correlations by syndrome encoding: partition C1 into blocks and add parity bits. Perform partial measurement on Q1 creating entanglement. This increases mutual information I(C2:Q2)>0 without adding algorithmic complexity (syndrome generation is computationally simple).

Why include a noise step?

While the cycle contradiction could be derived using only isomorphisms, we include the non-invertible noise step n for pedagogical clarity. It demonstrates that the impossibility doesn’t rely solely on isomorphisms: even allowing operations that clearly should decrease a universal complexity measure (information-theoretic noise) doesn’t resolve the fundamental tension. The core issue is that improving each pillar via morphisms designed for that pillar creates incompatible constraints on any universal scalar.

However, we also require a noise step n ∈ 𝖲𝗒𝗌 that strictly decreases C. We define n explicitly as:

  • Classical: Additive dithering n(x) = x + ε where ε is drawn from the uniform distribution U(0, δ) with δ = 10⁻³. This is a Markov kernel: p(x′|x) = δ(x′ − x − ε).

  • Quantum: Weak dephasing channel n(ρ) = (1−p)ρ + pZρZ with p = 0.05, or mixed with local noise: n(ρ) = K0ρK0† + Σi KiρKi†, where K0 = √(1−p) I and Ki = √(p/3k) σi for i = 1,…,3k (Pauli noise on k qubits), with K0†K0 + Σi Ki†Ki = I ensuring CPTP.

Key properties: n ∈ 𝖲𝗒𝗌 (Markov/CPTP), n is non-invertible (information loss), and by Axiom (A3′) C(n(X)) < C(X) strictly (non-degenerate noise).
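A small numpy sketch of the weak dephasing channel defined above (same p = 0.05; the helper names are ours): it verifies the Kraus completeness relation behind the CPTP claim and shows the entropy increase on a pure state that witnesses the channel's non-invertibility.

```python
import numpy as np

p = 0.05
I2, Z = np.eye(2), np.diag([1.0, -1.0])
kraus = [np.sqrt(1 - p) * I2, np.sqrt(p) * Z]      # n(rho) = (1-p) rho + p Z rho Z

# CPTP check: sum_i K_i^dagger K_i = I
assert np.allclose(sum(K.conj().T @ K for K in kraus), I2)

def apply_channel(rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def vn_entropy_bits(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

plus = np.full((2, 2), 0.5)                        # |+><+|, a pure state
print(vn_entropy_bits(plus), "->", vn_entropy_bits(apply_channel(plus)))
# 0.0 -> ~0.29 bits: information is lost, so no inverse channel exists
```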

Step 3 (Dynamical): Apply chaotic evolution: Arnold cat map on classical bits (Lyapunov exponent λ>0) and kicked Ising evolution on quantum state (scrambling time t=O(logk)). This increases hKS and entanglement entropy growth rate without changing the state’s algorithmic description length.

Step 4 (Geometric): Embed in topological structure: place the classical configuration on the vertices of a triangulated 2-complex with non-trivial fundamental group π1(K) ≠ 0. Encode the quantum state in a surface code. This adds persistent homology features: β1(X4) > β1(X3).

Step 5 (Projection): Define πgeom to extract the underlying bit string and qubit state, discarding topological structure. By construction, πgeom(X4) has the same classical and quantum content as X0.

The key observation is that each morphism targets a specific complexity type without necessarily increasing others, enabled by the independence of the four pillars. The explicit construction is detailed in Appendix A. ∎

Table 1: 5-Step Contradiction Cycle: Morphism Classes and Complexity Effects at Fixed Scale λ0
| Step | Map | In 𝖲𝗒𝗌? | Iso? | Effect on C (A2) | Pillar observer effect | C result |
|------|-----|---------|------|------------------|------------------------|----------|
| 0 | X0 (identity) | - | - | - | - | C = c0 (baseline) |
| 1 | falg(X0) = X1 | Yes | Yes | C ≥ c0 (weak) | Calg rises: Calg(X1) > Calg(X0) + δalg | C = c0 (iso forces equality) |
| 2 | n(X1) = X2 | Yes | No | C < C(X1) (strict, by A3′) | Cinfo may rise slightly | C(X2) < c0 (strict drop) |
| 3 | fdyn(X2) = X3 | Yes | Yes | C = C(X2) | Cdyn rises: Cdyn(X3) > Cdyn(X2) + δdyn | C = C(X2) |
| 4 | Ta(X3) = X4 | Yes | Yes | C = C(X3) | Cgeom rises: Cgeom(X4) > Cgeom(X3) + δgeom | C = C(X2) |
| 5 | ϕ(X4) = X0 | Yes | Yes | Iso forces C(X0) = C(X4) | - | Contradiction: c0 = C(X4) < c0 |

Table interpretation.

Steps 1, 3, 4 are isomorphisms (reversible), so by Axiom (A2) applied to both f and f⁻¹ we have C(f(X)) ≥ C(X) and C(f⁻¹(f(X))) ≥ C(f(X)), implying C(f(X)) = C(X) (equality for isomorphisms). These steps strictly raise their respective pillar observers but leave C unchanged. Only Step 2 (noise n) is non-invertible, causing the strict decrease C(X2) < C(X1) = c0 by Axiom (A3′). The closing isomorphism ϕ forces C(X0) = C(X4), yet the chain implies C(X4) = C(X2) < c0, a contradiction.

Main result precisely stated.

We prove there is no scalar C such that:

  1. C satisfies axioms A1–A6 (including monotonicity A2 for all morphisms in 𝖲𝗒𝗌)

  2. For each pillar ℓ ∈ {alg, info, dyn, geom}, there exist morphisms f ∈ 𝖬ℓ designed to increase Cℓ while being isomorphisms or having minimal effect on other pillars

Informally: C cannot be simultaneously monotone for operations specifically designed to increase each pillar’s complexity without affecting others. The pillar observers Cℓ are evaluation functionals not constrained by the monotonicity axioms—they measure different aspects while C attempts to unify them.

Theorem 1 (No-Go).

There is no functional C on 𝖲𝗒𝗌 satisfying (A1)–(A6) such that (A2a,b) hold simultaneously for all families 𝖬alg,𝖬info,𝖬dyn,𝖬geom.

What the theorem claims.

Imagine you propose a complexity function C. If it satisfies all our axioms for one pillar (say, algorithmic complexity), you might hope it works for all four. The theorem shows this is impossible: satisfying monotonicity for all four pillars simultaneously creates a logical contradiction.

Proof of Theorem 1.

We proceed by showing that the existence of improvement paths (Lemma 2) combined with the axioms leads to a contradiction.

Step 1: Existence of Improvement Paths

By Lemma 2 and the explicit construction in Appendix A, there exist systems X0,X1,X2,X3,X4 and morphisms:

falg ∈ 𝖬alg : X0 → X1 with Calg(X1) > Calg(X0) + δalg   (5)
finfo ∈ 𝖬info : X1 → X2 with Cinfo(X2) > Cinfo(X1) + δinfo   (6)
fdyn ∈ 𝖬dyn : X2 → X3 with Cdyn(X3) > Cdyn(X2) + δdyn   (7)
fgeom ∈ 𝖬geom : X3 → X4 with Cgeom(X4) > Cgeom(X3) + δgeom   (8)

plus a projection πgeom : X4 → X0 that forgets the geometric enhancement while preserving the underlying system.

Step 2: Application of Monotonicity Axioms

Assume C satisfies axioms (A1)-(A6). By (A2b), for strict improvements:

C(X1) > C(X0) + δalg   (9)
C(X2) > C(X1) + δinfo   (10)
C(X3) > C(X2) + δdyn   (11)
C(X4) > C(X3) + δgeom   (12)

Step 3: Derivation of Contradiction

Combining the inequalities:

C(X4) > C(X3) + δgeom > C(X2) + δdyn + δgeom > ⋯ > C(X0) + Σℓ δℓ

But πgeom(X4)=X0 exactly, so C(πgeom(X4))=C(X0). This requires the projection to strictly decrease complexity:

C(X0)<C(X4)

However, if C must be monotone under geometric morphisms that increase Cgeom, it cannot simultaneously accommodate projections that decrease Cgeom. Yet such projections manifestly exist (one can always forget topological structure). This creates a fundamental incompatibility: no scalar can track all four complexity types while respecting their natural partial orders.

Step 4: Conclusion

No functional C can simultaneously satisfy all axioms for all four morphism families. The obstruction is fundamental: the partial orders induced by different complexity notions are incompatible with a universal scalar ordering. ∎

So what does this mean?

The proof shows that trying to compress all complexity aspects into one number leads to an impossible statement: C(X0)>C(X0)+ε for some ε>0. This isn’t due to measurement error or finite precision—it’s a fundamental logical impossibility arising from the conflicting structure of complexity types.

4.1 Companion Result: Operational Corollary for Designated Increase-Maps

Salvaging the positive-monotone intuition.

The original motivation for this work was the question: “Can a universal scalar be monotone under operations that increase each type of complexity?” Theorem 1 answers a related but different question (resource impossibility). Here we formalize the original intuition as a companion result.

Theorem 2 (Operational Impossibility for Designated Reversible Improvements).

Suppose we designate specific “complexity-increasing” reversible operations {galg, ginfo, gdyn, ggeom} ⊂ 𝖲𝗒𝗌 where:

  • Each gℓ is an isomorphism (reversible)

  • Each gℓ strictly increases its pillar observer: Cℓ(gℓ(X)) > Cℓ(X) + δℓ

  • All gℓ ∈ 𝖲𝗒𝗌 (Markov/CPTP)

Then no scalar C̃ : 𝖲𝗒𝗌 → ℝ can satisfy both:

  1. Isomorphism invariance: C̃(ϕ(X)) = C̃(X) for all isomorphisms ϕ

  2. Increase-monotonicity: C̃(gℓ(X)) ≥ C̃(X) + δℓ for all designated increase-ops

Proof.

Construct a 4-step cycle using only the designated operations:

X0 →(galg) X1 →(ginfo) X2 →(gdyn) X3 →(ggeom) X4

By assumption (2):

C̃(X1) ≥ C̃(X0) + δalg   (13)
C̃(X2) ≥ C̃(X1) + δinfo   (14)
C̃(X3) ≥ C̃(X2) + δdyn   (15)
C̃(X4) ≥ C̃(X3) + δgeom   (16)

Therefore: C̃(X4) ≥ C̃(X0) + Σℓ δℓ.

However, define the closing isomorphism ϕ = (ggeom ∘ gdyn ∘ ginfo ∘ galg)⁻¹. Then ϕ(X4) = X0 exactly.

By assumption (1): C̃(ϕ(X4)) = C̃(X4), so C̃(X0) = C̃(X4).

This contradicts C̃(X4) > C̃(X0) from the chain of increases. ∎
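The arithmetic of this proof can be made mechanical. A trivially small sketch (ours, reusing the illustrative thresholds of Remark 4) confirms that no baseline value admits an assignment consistent with both requirements:

```python
# Thresholds from Remark 4 (any positive values give the same conclusion)
deltas = {"alg": 0.2, "info": 0.2, "dyn": 0.15, "geom": 0.3}

def feasible(c0: float) -> bool:
    """Is there a C(X_0..X_4) meeting both (1) iso-invariance of the closing
    map, forcing C(X_4) = C(X_0), and (2) the chain C(X_{i+1}) >= C(X_i)+d?"""
    c4_min = c0 + sum(deltas.values())   # smallest C(X_4) the chain permits
    return c4_min <= c0                  # needs sum(deltas) <= 0: impossible

print([feasible(c0) for c0 in (0.0, 0.5, 1.0, 100.0)])  # all False
```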

Remark 6 (Relationship to Main Theorem).

Theorem 2 captures the original intuition (“no scalar monotone under all pillar families”) but with the designated reversible increase-ops made explicit and 𝖲𝗒𝗌-internal. It complements Theorem 1 by showing that even the weaker requirement (monotonicity only on specific designated ops, not all of 𝖲𝗒𝗌) fails when combined with isomorphism invariance.

4.2 Necessity of Axioms

Which axioms are essential?

One might wonder: could we weaken some axiom to avoid the impossibility? The following result shows each axiom plays a necessary role.

Proposition 1 (Axiom Independence).

Each axiom in the set {A1, A2b, A4, A5, A6} is necessary for the impossibility result. Axiom A3 (continuity) is included for technical completeness but is not essential for Theorem 1. Removing any single necessary axiom allows a universal complexity scalar to exist.

Proof sketch.
  • Without (A1) Additivity: Define C(X) = maxᵢ Cᵢ(X). This satisfies monotonicity but not additivity.

  • Without (A2b) Strict Monotonicity: Define C(X) = Σᵢ wᵢ Cᵢ(X) with fixed weights. Weak monotonicity holds, but not strict monotonicity for all families.

  • Without (A3) Continuity: The impossibility still holds for discrete systems and discontinuous measures. A3 ensures technical regularity but is not required for the core contradiction (the cycle construction works without continuity).

  • Without (A4) Task-Universality: Fix a specific task/weighting, reducing to a single morphism family.

  • Without (A5) Relabeling Invariance: Allow C(σ(X0)) ≠ C(X0) even when the systems are equivalent up to labeling.

  • Without (A6) Normalization: Allow unbounded C (e.g., C ≡ +∞), trivially satisfying monotonicity.

Remark 7 (Minimal Axiom Sets).

The minimal set for impossibility is {A2b, A4, A5}. These capture: (1) strict improvements must increase the scalar, (2) the measure applies to all morphism families universally, (3) relabeling invariance ensures structural consistency. Additionally, A1 (additivity) and A6 (normalization) are typically required for non-degenerate solutions, though pathological counterexamples exist without them. Axiom A3 (continuity) is not in the minimal set, confirming the impossibility is robust—it doesn’t rely on technical regularity but on the fundamental tension between monotonicity and universality.

5 Consequences and Positive Alternatives

What do we do instead?

The impossibility doesn’t mean complexity is unmeasurable—it means we must use vectors, not scalars. This section explores practical implications.

Corollary 1 (Vector Necessity).

Any faithful summary of multi-faceted complexity must be vector-valued 𝐂=(Calg,Cinfo,Cdyn,Cgeom) (up to monotone reparametrizations of components). Scalarizations are possible only after fixing a task (weights or morphism family).

Remark 8 (Task-Relative Scalars).

If one restricts morphisms to a single family (e.g., only 𝖬alg) or fixes a task-dependent weighting C_w = Σᵢ wᵢ Cᵢ, scalar measures are consistent. The no-go targets task-universal C obeying all monotonicities simultaneously. For instance, when optimizing quantum circuits for execution time, one might weight Calg heavily and Cgeom lightly—this task-specific scalar is perfectly valid.

Scope and limitations.

This result addresses Stage-1 (pre-thermal) universal scalar complexity—intrinsic structural properties independent of physical implementation or thermal coupling. We do not claim:

  • Task-specific scalars are impossible: Weighted combinations C_w(X) = Σᵢ wᵢ Cᵢ(X) with fixed task-dependent weights remain valid and useful. Our impossibility targets only task-universal scalars satisfying all monotonicity axioms simultaneously.

  • Multi-sector sums are impossible: The impossibility holds for single scalars. Multi-sector weighted sums C = Σₐ wₐ Iₐ, where each Iₐ represents complexity in a distinct “sector” or “calculus” (indexed by a), remain a viable avenue. Such multi-sector approaches may circumvent the impossibility by assigning different morphism families to different sectors, avoiding the global monotonicity requirement. This is an active area of investigation.

  • Causation from temperature/action: Temperature T and action S appear in related work as observer fields (Stage-2 thermal coupling), not causal drivers of complexity. The current result is pre-thermal and makes no claims about thermodynamic emergence.

  • Quantum measurement collapse: Our framework treats measurement as CPTP maps within Sys. Observer-dependent interpretations (e.g., time-symmetric weak values) are compatible but not required.

The impossibility is fundamental for task-universal single scalars but leaves open multiple productive research directions.

How to use complexity vectors in practice.

When comparing systems, plot them in 4D complexity space (or project to 2D/3D for visualization). Pareto frontiers emerge: system A dominates B if 𝐂(A) ≥ 𝐂(B) componentwise, with strict inequality in at least one component. Many systems will be incomparable—neither dominates the other—reflecting genuine trade-offs between complexity types. This vectorial approach is already used implicitly in multi-objective optimization; our result shows it is mandatory for complexity.
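A minimal sketch of this componentwise comparison (function names and the pillar values for Section 1’s three examples are our illustrative inventions):

```python
def dominates(a, b):
    """a dominates b: >= in every pillar and > in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(systems):
    """Systems not dominated by any other: the only rankings the no-go permits."""
    return [name for name, c in systems.items()
            if not any(dominates(d, c) for d in systems.values())]

# (C_alg, C_info, C_dyn, C_geom), illustrative values only
systems = {
    "quantum computer": (0.9, 0.6, 0.4, 0.2),
    "hurricane":        (0.3, 0.5, 0.9, 0.4),
    "social network":   (0.2, 0.4, 0.1, 0.8),
}
print(pareto_frontier(systems))  # all three: mutually incomparable
```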

Concrete example.

Consider designing a neural network. High Calg (many parameters) may improve accuracy but increases training time. High Cinfo (many inter-layer correlations) may improve expressiveness but complicates interpretation. High Cdyn (chaotic gradient dynamics) may escape local minima but hinders reproducibility. High Cgeom (intricate loss landscape topology) may enable diverse solutions but complicates optimization. There is no single “best” architecture—the optimal choice lies on the Pareto frontier and depends on task priorities (accuracy vs. speed vs. interpretability).

6 Implications and Applications

For complexity science.

Our result implies that complexity dashboards must be multi-dimensional. Any single “complexity index” necessarily privileges certain aspects over others. This has implications for:

  • Machine Learning: Model complexity cannot be captured by a single metric (parameter count, VC dimension, or description length alone). Practitioners already use multiple metrics (accuracy, training time, memory); our result justifies this as fundamental, not just practical.

  • Network Science: Graph complexity requires multiple measures (degree distribution, clustering, and path length). Ranking networks by a single centrality measure loses critical information.

  • Quantum Computing: Resource estimation needs vector-valued metrics (gate count, depth, and entanglement). Circuit optimizers must handle Pareto frontiers, not single objectives.

For physics.

The impossibility suggests that proposed “complexity equals action” conjectures [6, 32] cannot be universal. Different physical processes optimize different complexity aspects, explaining the diversity of structures in nature. Black hole complexity likely requires multiple measures to capture both bulk geometry and boundary quantum information.

For information theory.

Just as there is no universal compression algorithm (by no-free-lunch [36]), there is no universal complexity measure. Task-specific measures remain essential. This connects to Shannon’s insight that information is context-dependent—complexity inherits this context-dependence. Partial information decomposition [35] already shows information naturally resists scalar compression; our result extends this to general complexity.

Hierarchical emergence and lumpability.

Recent work by Rosas et al. [26] characterizes emergent macroscales as “software-like” self-contained processes via Markov lumpability—the condition under which coarse-graining preserves Markov structure. They establish that a microscopic process exhibits causally and informationally closed levels if and only if its causal states are strongly lumpable, providing a rigorous criterion for identifying emergent macroscopic descriptions [17, 25].

Our impossibility result is orthogonal yet complementary to this framework. While Rosas et al. identify where emergent levels exist through lumpability conditions, we prove that regardless of such hierarchical structure, no single task-universal scalar can simultaneously respect the ordering constraints of our four complexity pillars. Even within a single strongly lumpable macroscale—where coarse-grained dynamics maintain perfect Markovian structure—the vectorial nature of complexity 𝐂 ∈ [0,1]⁴ remains irreducible. This demonstrates that hierarchical emergence does not rescue scalar universality; vectorial reporting remains necessary regardless of lumpable partitioning.

7 Reviewer Checklist (Reproducibility Gates)

  • Axioms (A1)–(A6) stated and motivated from standard practice.

  • Explicit four-system improvement cycle provided (Appendix A).

  • Example families 𝖬 instantiated in classical and quantum settings.

  • Relaxations under which a scalar is possible are stated (Section 5).

  • Axiom necessity analyzed (Proposition 1, Section 4.2).

  • Practical implications discussed (Section 6).

Acknowledgments

Mathematical formalism and proof construction assisted by AI tools (Claude by Anthropic, ChatGPT by OpenAI). All theoretical insights, hypothesis formulation, and scientific claims are the sole responsibility of the author.

Code Availability

Supplementary materials and documentation are available at:

https://github.com/boonespacedog/complexity-vector-1

Appendix A Concrete Cycle Construction

Purpose of this appendix.

The main theorem claims that improvement cycles exist. Here we construct explicit examples that can be verified computationally or analytically.

We construct an explicit improvement cycle using hybrid classical-quantum systems.

Theorem 3 (Explicit Improvement Cycle).

There exist five systems X0,X1,X2,X3,X4𝖲𝗒𝗌 and morphisms forming a path where each of the first four edges is a strict improvement in its respective pillar, and the fifth edge is a projection returning to the initial system.

Proof.

System Construction: Let each system Xi be a composite (Ci,Qi) where Ci is a classical bit string of length n=8 and Qi is a k=3 qubit quantum state.

Initial System X0:

  • Classical: C0=00000000

  • Quantum: Q0=|000

  • Complexities: Calg(X0)=0.05 (constant program), Cinfo(X0)=0 (no correlations), Cdyn(X0)=0 (no dynamics), Cgeom(X0)=0.1 (trivial topology)

Edge 1: falg:X0X1 (Algorithmic improvement)

  • Classical: Apply the pseudorandom generator G : {0,1}⁴ → {0,1}⁸ with seed 0110, yielding C1 = 10110101

  • Quantum: Prepare the GHZ state Q1 = (|000⟩ + |111⟩)/√2 via a Hadamard on qubit 1 and two CNOTs

  • Result: K(C1) ≈ 4 bits (seed) > K(C0) = O(1) bits

  • Complexity change: Calg(X1)=0.35>0.05+δalg (take δalg=0.2)

Edge 2: finfo:X1X2 (Information improvement)

  • Classical: Partition C1 into 4 blocks of 2 bits: {10,11,01,01} and create syndrome bits encoding parity, yielding correlated structure

  • Quantum: Measure qubit 1 of Q1 in the computational basis, creating classical correlations between the measurement outcome and the remaining qubits

  • Result: Classical-quantum mutual information I(C2:Q2)>I(C1:Q1)=0

  • Complexity change: Cinfo(X2)=0.55>0.30+δinfo (take δinfo=0.2)

Edge 3: fdyn : X2 → X3 (Dynamical improvement)

  • Classical: Apply the Arnold cat map A = (2 1; 1 1) mod 4 to bit pairs treated as coordinates (a toy iteration is sketched below)

  • Quantum: Time-evolve under the kicked Ising Hamiltonian H = J Σᵢ σᵢᶻσᵢ₊₁ᶻ + h Σᵢ σᵢˣ for 5 kicks

  • Result: The classical dynamics have Lyapunov exponent λ = ln((3+√5)/2) ≈ 0.96

  • Complexity change: Cdyn(X3) = 0.70 > 0.50 + δdyn (take δdyn = 0.15)
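A toy iteration of the cat map (the standard mod-1 torus version rather than the mod-4 bit-pair variant above; our simplification) confirms the quoted Lyapunov exponent and the exponential divergence it implies:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # Arnold cat map matrix

# Largest Lyapunov exponent = log of the leading eigenvalue = ln((3+sqrt5)/2)
lam = float(np.log(np.max(np.linalg.eigvalsh(A))))
print(lam, np.log((3 + np.sqrt(5)) / 2))  # both ~0.9624

def cat_step(x):
    """One iteration of the cat map on the unit torus."""
    return (A @ x) % 1.0

x = np.array([0.30, 0.70])
y = x + 1e-9                              # a nearby initial condition
for _ in range(10):
    x, y = cat_step(x), cat_step(y)
print(np.linalg.norm(x - y))              # ~1e-9 * exp(10 * lam), modulo folding
```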

Edge 4: fgeom:X3X4 (Geometric improvement)

  • Classical: Embed the state into a 2-dimensional simplicial complex K with non-trivial 1-cycles

  • Quantum: Encode into topological quantum error correcting code with code distance d=2

  • Result: System X4 has enhanced geometric/topological structure

  • Complexity change: Cgeom(X4)=0.8>0.4+δgeom (take δgeom=0.3)

Explicit disk→annulus map Ta: Define the measure-preserving map Ta : 𝔻² → 𝔸ₐ, where 𝔻² is the unit disk and 𝔸ₐ is the annulus with inner radius a ∈ (0,1):

r′ = √(a² + (1−a²) r²)   (17)
θ′ = θ   (18)

in polar coordinates (r, θ) with r ∈ [0,1], θ ∈ [0,2π).

Properties:

  • Measure-preserving: Ta pushes the uniform probability measure on 𝔻² forward to the uniform probability measure on 𝔸ₐ (Jacobian check: r′ dr′ = (1−a²) r dr, a constant rescaling of the radial area element)

  • Bijection a.e.: Almost-everywhere bijective (except the center point r = 0)

  • In 𝖲𝗒𝗌: Deterministic measure-preserving map (classical Markov with delta kernel)

  • Isomorphism: Invertible via r = √((r′² − a²)/(1 − a²))

  • Geometric effect: Under a fixed Vietoris-Rips pipeline with radius parameter ε ∈ (a, 1), the annulus has PH1 > 0 (a non-trivial 1-cycle from the hole) while the disk has PH1 = 0 (contractible). Thus Cgeom(Ta(X)) > Cgeom(X) for the fixed observer G. A numerical sketch follows.
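A standalone sketch in the spirit of the paper’s disk_annulus_cycle_toy.py (whose exact contents we do not reproduce): it implements Ta and its inverse and Monte Carlo-checks that the uniform probability measure on the disk pushes forward to the uniform probability measure on the annulus.

```python
import numpy as np

def T_a(r, a):
    """Radial part of the disk -> annulus map (Eqs. 17-18); theta is unchanged."""
    return np.sqrt(a**2 + (1 - a**2) * r**2)

def T_a_inv(rp, a):
    """Inverse on the annulus, defined almost everywhere."""
    return np.sqrt((rp**2 - a**2) / (1 - a**2))

rng = np.random.default_rng(0)            # fixed seed, as in the toy script
a, N = 0.5, 100_000
r = np.sqrt(rng.random(N))                # uniform-in-area sampling of the disk
rp = T_a(r, a)

# Uniformity on the annulus: P(r' <= c) should equal (c^2 - a^2)/(1 - a^2)
c = 0.8
print((rp <= c).mean(), (c**2 - a**2) / (1 - a**2))   # ~0.52 vs 0.52

# Round trip confirms invertibility (an isomorphism a.e.)
print(np.max(np.abs(T_a_inv(rp, a) - r)))             # ~1e-16
```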

Edge 5: πgeom:X4X0 (Geometric projection - NOT a morphism in 𝖬geom)

  • Define projection πgeom:𝖲𝗒𝗌𝖲𝗒𝗌 that forgets geometric embedding

  • Classical: Project from simplicial complex back to bit string C0=00000000

  • Quantum: Decode from the topological code to the bare qubit state Q0 = |000⟩

  • Result: πgeom(X4)=X0 exactly (not just up to relabeling)

  • Key property: Cgeom(πgeom(X4))=0.1<0.8=Cgeom(X4)

Closing the Cycle: The composition of morphisms creates a 5-edge path:

X0 →(falg) X1 →(finfo) X2 →(fdyn) X3 →(fgeom) X4 →(πgeom) X0

For any universal scalar C satisfying (A2b), we have:

C(X1) > C(X0) + δalg   (19)
C(X2) > C(X1) + δinfo   (20)
C(X3) > C(X2) + δdyn   (21)
C(X4) > C(X3) + δgeom   (22)

Therefore: C(X4) > C(X0) + Σℓ δℓ.

Now, the projection πgeom creates a dilemma:

  • If C is to be universal, it must respect geometric complexity, so C(X4)>C(X0)

  • But πgeom(X4)=X0 exactly, so C(πgeom(X4))=C(X0)

  • This means πgeom strictly decreases C: C(X0)<C(X4)

The contradiction: If C must be monotone under ALL morphisms in 𝖬geom (which increase geometric complexity), then operations that decrease geometric complexity (like πgeom) cannot exist. But such projections clearly do exist - one can always “forget” topological structure. This shows that no scalar can be simultaneously monotone for operations that increase each type of complexity while also handling the full space of system transformations. ∎

A.1 Explicit Numerical Example

How to verify this example.

The following table provides concrete complexity values for a 2-bit classical + 2-qubit quantum system. These values can be computed using standard tools (Kolmogorov complexity estimators, entropy calculators, persistent homology software).

Example 1 (Verifiable 2+2 System).

System states:

  • $X_0$: Classical bits = 00, Quantum state = $|00\rangle$

  • $X_1$: Classical bits = 11, Quantum state = $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ (Bell state)

  • $X_2$: Classical bits = 10, Quantum state = $\frac{1}{2}(|00\rangle + |01\rangle + |10\rangle + |11\rangle)$ (the uniform superposition, i.e., the product state $|+\rangle \otimes |+\rangle$)

  • $X_3$: Classical bits = 01, Quantum state after a 3-step quantum walk with increased scrambling (as diagnosed by OTOCs)

Complexity values (normalized to $[0,1]$); the final row is the returned state $X_0'$, which matches $X_0$ in algorithmic complexity but carries enhanced geometric structure:

System   $C_{\mathrm{alg}}$   $C_{\mathrm{info}}$   $C_{\mathrm{dyn}}$   $C_{\mathrm{geom}}$
$X_0$    0.1   0.0   0.0   0.2
$X_1$    0.4   0.3   0.1   0.2
$X_2$    0.4   0.6   0.2   0.3
$X_3$    0.3   0.5   0.7   0.4
$X_0'$   0.1   0.2   0.3   0.8

Morphisms:

  • $f_{\mathrm{alg}}$: Hadamard on qubit 1, then CNOT(1,2); creates the Bell state (circuit depth: 2 gates)

  • $f_{\mathrm{info}}$: classical XOR operation conditioned on a quantum measurement outcome; creates classical-quantum correlations

  • $f_{\mathrm{dyn}}$: quantum walk evolution $U = \exp(-i H_{\mathrm{walk}} t)$ with $t = 3$ steps; increases scrambling

  • $f_{\mathrm{geom}}$: embed in a triangulated 2-complex (increasing Betti numbers), yielding the returned state $X_0'$

Verification: each morphism strictly increases its target complexity:

$C_{\mathrm{alg}}(X_1) - C_{\mathrm{alg}}(X_0) = 0.3 > \delta_{\mathrm{alg}} = 0.2$ (23)
$C_{\mathrm{info}}(X_2) - C_{\mathrm{info}}(X_1) = 0.3 > \delta_{\mathrm{info}} = 0.2$ (24)
$C_{\mathrm{dyn}}(X_3) - C_{\mathrm{dyn}}(X_2) = 0.5 > \delta_{\mathrm{dyn}} = 0.3$ (25)
$C_{\mathrm{geom}}(X_0') - C_{\mathrm{geom}}(X_3) = 0.4 > \delta_{\mathrm{geom}} = 0.3$ (26)

while returning algorithmic complexity to its initial value: $C_{\mathrm{alg}}(X_0') = C_{\mathrm{alg}}(X_0) = 0.1$.
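These inequalities are simple enough to check by script. The following sketch hard-codes the table values above (the name `X0p` for $X_0'$ is ad hoc) and asserts each strict gain:

```python
# Complexity values from the table in Example 1.
C = {
    "X0":  {"alg": 0.1, "info": 0.0, "dyn": 0.0, "geom": 0.2},
    "X1":  {"alg": 0.4, "info": 0.3, "dyn": 0.1, "geom": 0.2},
    "X2":  {"alg": 0.4, "info": 0.6, "dyn": 0.2, "geom": 0.3},
    "X3":  {"alg": 0.3, "info": 0.5, "dyn": 0.7, "geom": 0.4},
    "X0p": {"alg": 0.1, "info": 0.2, "dyn": 0.3, "geom": 0.8},  # X0'
}
edges = [("X0", "X1", "alg", 0.2), ("X1", "X2", "info", 0.2),
         ("X2", "X3", "dyn", 0.3), ("X3", "X0p", "geom", 0.3)]

for src, dst, pillar, delta in edges:
    gain = round(C[dst][pillar] - C[src][pillar], 10)
    assert gain > delta, (pillar, gain, delta)
    print(f"{pillar}: {src} -> {dst} gains {gain} > delta = {delta}")

# The cycle returns algorithmic complexity to its starting value.
assert C["X0p"]["alg"] == C["X0"]["alg"] == 0.1
```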

A.2 Visualization of Improvement Cycle

[Cycle diagram: $X_0 \to X_1 \to X_2 \to X_3 \to X_4 \to X_0$, with edges labeled $f_{\mathrm{alg}}$, $f_{\mathrm{info}}$, $f_{\mathrm{dyn}}$, $f_{\mathrm{geom}}$, and $\pi_{\mathrm{geom}}$.]
Figure 1: The 5-edge improvement path. Each of the first four edges (solid) increases complexity in its respective pillar. The fifth edge (dashed) is a projection that forgets geometric structure, returning to $X_0$. This forces $C(X_4) > C(X_0)$ while $\pi_{\mathrm{geom}}(X_4) = X_0$ exactly, creating an irreconcilable tension for any universal scalar.

Appendix B Corrected Quantum Cycle

Why the original quantum example failed.

An earlier version claimed to return exactly to $|0000\rangle$ after unitary operations, violating unitarity. Here we provide a corrected version in which the cycle returns to an equivalent state (same complexity) but not the identical state.

Proposition 2 (Quantum Improvement Cycle).

For a 4-qubit system, there exists a cycle of unitary operations creating strict improvements in different complexity pillars while returning to an equivalent (but not identical) state.

Proof.

Consider the 4-qubit Hilbert space $\mathcal{H} = (\mathbb{C}^2)^{\otimes 4}$.

State sequence:

$|\psi_0\rangle = |0000\rangle$ (27)
$|\psi_1\rangle = U_{\mathrm{alg}}|\psi_0\rangle = \frac{1}{2}(|0000\rangle + |0011\rangle + |1100\rangle + |1111\rangle)$ (28)
$|\psi_2\rangle = U_{\mathrm{info}}|\psi_1\rangle = \frac{1}{2}(|0000\rangle + |0110\rangle + |1001\rangle + |1111\rangle)$ (29)
$|\psi_3\rangle = U_{\mathrm{dyn}}|\psi_2\rangle = (\text{time-evolved scrambled state})$ (30)
$|\psi_0'\rangle = U_{\mathrm{geom}}|\psi_3\rangle$ (31)

where $|\psi_0'\rangle$ has the same algorithmic complexity as $|\psi_0\rangle$ (both describable by $O(1)$ gates from the reference state $|0000\rangle$) but lives in a stabilizer-code subspace with enhanced geometric structure (non-trivial homology in the graph-state representation).

Explicit unitaries:

  • $U_{\mathrm{alg}} = \mathrm{CNOT}_{34}\, H_3\, \mathrm{CNOT}_{12}\, H_1$ (4 gates, preparing Bell pairs on qubits (1,2) and (3,4), which yields eq. (28))

  • $U_{\mathrm{info}} = \mathrm{SWAP}_{24}$ (rewires the Bell pairing from (1,2)(3,4) to (1,4)(2,3), raising the mutual information across the 12|34 cut from 0 to 4 bits; see the statevector check after this list)

  • $U_{\mathrm{dyn}} = \prod_{t=1}^{5} \exp(-i H_{\mathrm{Ising}}\, t/5)\, \exp(-i H_{\mathrm{kick}})$ with $H_{\mathrm{Ising}} = J \sum_i Z_i Z_{i+1}$, $H_{\mathrm{kick}} = h \sum_i X_i$ (Floquet evolution)

  • $U_{\mathrm{geom}}$ implements a stabilizer-code encoding that preserves algorithmic simplicity (still $O(1)$ gates to prepare from $|0000\rangle$) while adding topological protection (code distance $d=2$)
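Since $U_{\mathrm{alg}}$ and $U_{\mathrm{info}}$ are small Clifford circuits, eqs. (28) and (29) admit a direct check. A minimal statevector sketch, assuming only numpy (qubits 1-4 of the text become 0-3 in the code):

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def on_qubit(gate, q, n=4):
    """Embed a single-qubit gate on qubit q into an n-qubit operator."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, gate if k == q else I2)
    return out

def permutation_gate(bit_map, n=4):
    """Unitary that permutes computational basis states via bit_map."""
    U = np.zeros((2**n, 2**n))
    for i in range(2**n):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        j = int("".join(map(str, bit_map(bits))), 2)
        U[j, i] = 1.0
    return U

def cnot(c, t, n=4):
    def f(bits):
        bits = bits[:]
        bits[t] ^= bits[c]
        return bits
    return permutation_gate(f, n)

def swap(q1, q2, n=4):
    def f(bits):
        bits = bits[:]
        bits[q1], bits[q2] = bits[q2], bits[q1]
        return bits
    return permutation_gate(f, n)

psi0 = np.zeros(16); psi0[0] = 1.0                    # |0000>

U_alg = cnot(2, 3) @ on_qubit(H, 2) @ cnot(0, 1) @ on_qubit(H, 0)
psi1 = U_alg @ psi0
print(np.nonzero(np.round(psi1, 12))[0])  # [0 3 12 15] = |0000>,|0011>,|1100>,|1111>

psi2 = swap(1, 3) @ psi1                  # SWAP_24 in the text's 1-indexing
print(np.nonzero(np.round(psi2, 12))[0])  # [0 6 9 15] = |0000>,|0110>,|1001>,|1111>
```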

The key insight: $|\psi_0'\rangle \neq |\psi_0\rangle$, but they have identical algorithmic complexity:

  • Both are prepared by $O(1)$ gates from $|0000\rangle$

  • Both have the same (constant) circuit depth

  • But $|\psi_0'\rangle$ lives in a code subspace with non-trivial stabilizer-group structure

Under a relabeling that ignores the ambient code structure (focusing only on the logical qubits), $|\psi_0'\rangle$ is equivalent to $|\psi_0\rangle$, satisfying Axiom (A5). This closes the cycle without violating unitarity. ∎

References

  • [1] S. Aaronson (2016) The complexity of quantum states and transformations: from quantum money to black holes. arXiv preprint arXiv:1607.05256.
  • [2] V. I. Arnold (1989) Mathematical Methods of Classical Mechanics. 2nd edition, Springer.
  • [3] K. J. Arrow (1950) A difficulty in the concept of social welfare. Journal of Political Economy 58 (4), pp. 328–346.
  • [4] K. J. Arrow (1963) Social Choice and Individual Values. 2nd edition, Yale University Press.
  • [5] C. H. Bennett (1988) Logical depth and physical complexity. In The Universal Turing Machine: A Half-Century Survey, pp. 227–257.
  • [6] A. R. Brown, D. A. Roberts, L. Susskind, B. Swingle, and Y. Zhao (2016) Holographic complexity equals bulk action? Physical Review Letters 116 (19), 191301.
  • [7] F. Chazal and B. Michel (2017) An introduction to topological data analysis: fundamental and practical aspects for data scientists. arXiv preprint arXiv:1710.04019.
  • [8] E. Chitambar and G. Gour (2019) Quantum resource theories. Reviews of Modern Physics 91 (2), 025001.
  • [9] B. Coecke, T. Fritz, and R. W. Spekkens (2016) A mathematical theory of resources. Information and Computation 250, pp. 59–86.
  • [10] T. M. Cover and J. A. Thomas (2006) Elements of Information Theory. 2nd edition, Wiley-Interscience.
  • [11] J.-P. Eckmann and D. Ruelle (1985) Ergodic theory of chaos and strange attractors. Reviews of Modern Physics 57 (3), pp. 617–656.
  • [12] H. Edelsbrunner and J. Harer (2010) Computational Topology: An Introduction. American Mathematical Society.
  • [13] K. Falconer (2014) Fractal Geometry: Mathematical Foundations and Applications. 3rd edition, Wiley.
  • [14] J. Fliege and A. Vaz (2010) A no free lunch theorem for multi-objective optimization. Information Processing Letters 110 (10), pp. 440–443.
  • [15] M. Gell-Mann and S. Lloyd (1996) Information measures, effective complexity, and total information. Complexity 2 (1), pp. 44–52.
  • [16] M. Goldblum, A. Schwarzschild, E. Borgnia, D. Tsipras, J. Geiping, and T. Goldstein (2023) The no free lunch theorem, Kolmogorov complexity, and the role of inductive biases in machine learning. arXiv preprint arXiv:2304.05366.
  • [17] J. G. Kemeny and J. L. Snell (1976) Finite Markov Chains. 2nd edition, Springer-Verlag. Chapter 6: Lumpability.
  • [18] L. D. Landau and E. M. Lifshitz (1980) Statistical Physics. 3rd edition, Vol. 5, Butterworth-Heinemann.
  • [19] R. Landauer (1961) Irreversibility and heat generation in the computing process. IBM Journal of Research and Development 5 (3), pp. 183–191.
  • [20] S. Legg and J. Hernández-Orallo (2022) On the Arrowian impossibility of machine intelligence measures. In Artificial General Intelligence: 15th International Conference, AGI 2022, pp. 22–31.
  • [21] M. Li and P. M. Vitányi (2019) An Introduction to Kolmogorov Complexity and Its Applications. 4th edition, Springer.
  • [22] S. Lloyd (2000) Ultimate physical limits to computation. Nature 406 (6799), pp. 1047–1054.
  • [23] Y. B. Pesin (1977) Characteristic Lyapunov exponents and smooth ergodic theory. Russian Mathematical Surveys 32 (4), pp. 55–114.
  • [24] D. A. Roberts and B. Swingle (2016) Lieb-Robinson bound and the butterfly effect in quantum field theories. Physical Review Letters 117 (9), 091602.
  • [25] L. C. G. Rogers and J. W. Pitman (1981) Markov functions. Annals of Probability 9 (4), pp. 573–582.
  • [26] F. E. Rosas, B. C. Geiger, A. I. Luppi, A. K. Seth, D. Polani, M. Gastpar, and P. A. M. Mediano (2024) Software in the natural world: a computational approach to hierarchical emergence. arXiv preprint arXiv:2402.09090. Version 2, June 2024.
  • [27] T. Schreiber (2000) Measuring information transfer. Physical Review Letters 85 (2), 461.
  • [28] C. R. Shalizi and J. P. Crutchfield (2001) Computational mechanics: pattern and prediction, structure and simplicity. Journal of Statistical Physics 104 (3-4), pp. 817–879.
  • [29] C. E. Shannon (1948) A mathematical theory of communication. Bell System Technical Journal 27 (3), pp. 379–423.
  • [30] H. A. Simon (1962) The architecture of complexity. Proceedings of the American Philosophical Society 106 (6), pp. 467–482.
  • [31] M. Sipser (2012) Introduction to the Theory of Computation. 3rd edition, Cengage Learning.
  • [32] L. Susskind (2016) Computational complexity and black hole horizons. Fortschritte der Physik 64 (1), pp. 24–43.
  • [33] A. M. Turing (1936) On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society s2-42 (1), pp. 230–265.
  • [34] R. Uola, S. Designolle, and T. Kraft (2023) Is there a finite complete set of monotones in any quantum resource theory? Physical Review Letters 130 (24), 240204.
  • [35] P. L. Williams and R. D. Beer (2010) Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515.
  • [36] D. H. Wolpert and W. G. Macready (1997) No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1 (1), pp. 67–82.
  • [37] S. Wu, C. Wu, S. Lin, C. Wang, and K. Lee (2018) Multiscale transfer entropy: measuring information transfer on multiple time scales. Communications in Nonlinear Science and Numerical Simulation 62, pp. 202–212.
  • [38] Z. Xu, L. P. Garcia-Pintos, A. Chenu, and A. del Campo (2024) Scrambling dynamics and out-of-time-ordered correlators in quantum many-body systems: a tutorial. PRX Quantum 5 (1), 010201.