We regard a theory as a conglomerate of a language for writing statements, a language for writing valid proofs, and a collection of axioms (a basis of statements the theory assumes to be true). A theory is said to be consistent if there is no pair of valid proofs proving contradictory statements. A theory is said to be capable of reflection iff the statement "this theory is consistent" can be written in the language of the theory itself.
There are three main results concerning consistency:
1) (Gödel's Second Incompleteness Theorem) No common non-trivial theory can prove its own consistency. This applies essentially to every theory that includes the notion of an infinite ordered set with a distinguished zero object (e. g. the natural numbers) and provides a way to construct a larger object from any given one.
2) (Gentzen's Algorithm) For common theories there is an algorithm that "analytically verifies" their proofs. Analytically checked proofs can be directly shown to be contradiction-free. This does not contradict (1), because no theory subject to (1) can internally show that the verification algorithm succeeds on every valid input.
3) (Dan Willard's Self-verifying theories) There are non-trivial exceptions to (1). They are not only capable of reflection, but even prove their own consistency.
The theories of (3) share a common property: their proofs can be analytically verified in place. For theories to which Gödel's Theorem (1) applies, in contrast, the algorithm normally requires enormous amounts of additional working space. This goes so far that analytic verification of some proofs would be fundamentally physically infeasible due to the finiteness of the visible universe.
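The blow-up can be illustrated with a toy model: under the Curry–Howard correspondence, cut elimination for a minimal logic corresponds to β-normalization of typed λ-terms, and a tiny proof term can have an astronomically larger normal form. The Church-numeral sketch below is my own standard illustration, not the verification algorithm of any particular theory.

```python
# Toy illustration (Curry-Howard): normalizing a small "proof term" can
# produce an enormous normal form. A Church numeral encodes n as n-fold
# application; applying numeral a to numeral b computes b^a.
two = lambda f: lambda x: f(f(x))   # Church numeral 2

def to_int(numeral):
    # Evaluate a Church numeral by iterating the successor function on 0.
    return numeral(lambda k: k + 1)(0)

print(to_int(two))              # 2
print(to_int(two(two)))         # 2^2 = 4
print(to_int(two(two)(two)))    # 2^4 = 16
# One more application, two(two)(two)(two), already normalizes to a term of
# size 2^16 = 65536; the next one to size 2^65536, which could not even be
# written down inside the visible universe.
```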
This could be a clue to understanding Gödel's Theorem (1): maybe the true reason we cannot show, without additional assumptions, that Gentzen's algorithm always succeeds is exactly that it does not physically do so.
In this case we could use a physical estimate for the information content of the visible universe and restrict our attention to proofs of "modest" length, which are at least in theory verifiable. Then we would be fairly sure that these proofs are definitely non-contradictory. If it turned out that this upper bound on proof length lies beyond the length of normal human-developed proofs, the longstanding question about the solidity of mathematical foundations would be at least partially resolved.
Now how can we find the upper bound on proof length, starting from the information content of the universe? It appears that the following approach could roughly do.
Fact 2.1 The measure of a theory's complexity is its proof-theoretic ordinal. It is the least ordinal α such that α-induction can show that the analytic proof verification algorithm succeeds on all valid inputs for this theory. Proof-theoretic ordinals have been explicitly found for many "easy" theories, and there is ongoing work to find them for stronger theories.
Goal 2.2 We need to establish the connection between proof-theoretic ordinals and the computational resource consumption of algorithms.
Interaction networks are in many respects the optimal computational framework with regard to resource demands, hence
Conjecture 2.3 If a theory has proof-theoretic ordinal α and has a cut-elimination procedure, then cut elimination (= the analytic verification algorithm) can be implemented using an interaction network system of order α.
(Terminology: an interaction network system (Σ, ℛ) is strongly normalizing iff there is a well-founded partial order ≺ on network configurations such that applying any rule r ∈ ℛ strictly decreases the configuration with respect to this order. The order of an interaction network system is the least ordinal α that is isomorphic to such an order.)
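As a down-to-earth illustration of the definition, here is a minimal sketch: an ordinary term rewriting system standing in for a real interaction net, with hypothetical agents z, s, add of my own choosing. Every rewrite step strictly decreases a natural-number measure, so the system is strongly normalizing and its order in the above sense is at most ω.

```python
# Toy rewriting system standing in for an interaction net: agents z, s, add.
# Rules: add(s(x), y) -> s(add(x, y)) and add(z, y) -> y.
# Measure: total size of the first arguments of all add-nodes; every rule
# application strictly decreases it, witnessing strong normalization.

def size(t):
    return 1 if t == 'z' else 1 + sum(size(a) for a in t[1:])

def measure(t):
    if t == 'z':
        return 0
    m = sum(measure(a) for a in t[1:])
    if t[0] == 'add':
        m += size(t[1])
    return m

def step(t):
    # One leftmost-innermost rewrite step; returns None at normal form.
    if t == 'z':
        return None
    for i, a in enumerate(t[1:], 1):
        r = step(a)
        if r is not None:
            return t[:i] + (r,) + t[i + 1:]
    if t[0] == 'add':
        if t[1] == 'z':
            return t[2]
        if t[1][0] == 's':
            return ('s', ('add', t[1][1], t[2]))
    return None

def normalize(t):
    trace = [t]
    while (n := step(t)) is not None:
        t = n
        trace.append(t)
    return t, trace

two, one = ('s', ('s', 'z')), ('s', 'z')
nf, trace = normalize(('add', two, one))
assert nf == ('s', ('s', ('s', 'z')))          # 2 + 1 = 3
ms = [measure(t) for t in trace]
assert all(a > b for a, b in zip(ms, ms[1:]))  # measure strictly decreases
```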
Conjecture 2.4 There is an ordinal-indexed family of functions H_α(n: Nat): Nat such that
a) The function H_α(n) can be computed using an interaction network system of order α.
b) For any interaction network system of order α: a configuration Γ that has n connections (between its agents) can be reduced to its normal form via a reduction chain of at most H_α(n) intermediate configurations, each with at most H_α(n) connections.
Corollary: H_α(n) is an optimal bound on the space consumption of interaction networks: a natural number n in its minimal representation is a configuration with n connections, so the interaction network system computing H_α(n), started on n, would produce a normal form with H_α(n) connections.
Goal 2.5 Prove or disprove the existence of such a family.
Conjecture 2.6 The family H_α(n) coincides with the Hardy hierarchy as it is defined up to the weak Mahlo ordinal in "A Uniform Approach to Fundamental Sequences and Hierarchies" by Buchholz, Cichon, and Weiermann.
Note: below ω^ω the functions are given through
H_{ω^k}(n) = n [k+1] 1,
H_{a+b}(n) = H_b(H_a(n)),
n [1] m = n + m,
n [k] m = (x ↦ n [k−1] x)^m (n).
In particular, H_{ω^k∙m + c}(n) = n [k+1] m + c:
H_0(n) = n
H_1(n) = n + 1
H_ω(n) = 2n
H_{ω∙2 + 1}(n) = 3n + 1
H_{ω²}(n) = n²
H_{ω²∙2 + ω∙3 + 5}(n) = 4n³ + 5
H_{ω³∙2}(n) = n^n^n
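The rules of the note can be turned into a small calculator. Below is my own sketch, under the reading that n [1] m = n + m, that n [k] m iterates x ↦ n [k−1] x, m times, starting at n, and that an ordinal below ω^ω is encoded as a descending list of (k, m) terms plus a finite constant; values at compound ordinals are only the rough approximations the note intends.

```python
# Hypothetical calculator for H_alpha below omega^omega, following the
# note's rules: n [1] m = n + m; n [k] m iterates x -> n [k-1] x, m times,
# starting at n; and H_{omega^k * m + c}(n) = n [k+1] m + c, term by term.

def bracket(n, k, m):
    if k == 1:
        return n + m
    x = n
    for _ in range(m):
        x = bracket(n, k - 1, x)
    return x

def H(terms, c, n):
    # terms: descending list of (k, m) pairs encoding omega^k * m;
    # c: the finite part, added last.
    for k, m in terms:
        n = bracket(n, k + 1, m)
    return n + c

print(H([(1, 1)], 0, 10))   # H_omega(10) = 2*10 = 20
print(H([(1, 2)], 1, 10))   # H_{omega*2+1}(10) = 3*10 + 1 = 31
print(H([(2, 1)], 0, 5))    # H_{omega^2}(5) ~ 5^2; here 5*(5+1) = 30
```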
Goal 2.7 Extend Bird's notation to all ordinals up to the weak Mahlo ordinal, formulate the calculation rules in terms of interaction network systems of minimal order, and find explicit representations of H_α(n) in terms of Bird's expressions. Develop a technique for estimating the maximal n such that H_α(n) < N for N given in the usual scientific form, like 10^(10^300).
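For the estimation technique, a logarithmic trick already works in the simplest cases. The sketch below is my own toy illustration: it finds the maximal n with n^n < N for N = 10^(10^300) (growth of roughly the H_{ω³} kind in the note's scheme), working with log10 values so that the huge N never has to be materialized.

```python
import math

# Find the maximal n with n^n < 10^(10^300), i.e. n * log10(n) < 10^300,
# by binary search over the integers; 10^300 fits comfortably in a float.
LOG10_N = 10.0 ** 300

def too_big(n):
    return n * math.log10(n) >= LOG10_N

lo, hi = 2, 10 ** 299          # brackets: lo is small enough, hi is too big
while hi - lo > 1:
    mid = (lo + hi) // 2
    if too_big(mid):
        hi = mid
    else:
        lo = mid

print(len(str(lo)))            # the maximal such n has 298 decimal digits
```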
Goal 2.8 It seems that Bird's expressions can be extended from natural numbers to all ordinals. It is already established that every ordinal below Γ can be given by an expression (akin to Cantor normal form) that uses finite natural numbers, the symbol ω, and Bird's nested arrays (as a strong generalization of sum, product, power, etc.). By using the whole machinery of Bird's expressions for T(M)-ordinals we will probably derive an alternative (more fine-grained) ordinal notation capable of representing all T(M)-ordinals starting from finite numbers and ω. A unified notation for ordinals and rapidly growing functions might be of great advantage.
Conjecture 2.9 The axiom of transfinite induction up to α is equivalent to the axiom ∀(n: Nat). ENOUGH_SPACETIME_α(n), where ENOUGH_SPACETIME_α(n) states ∃(N: Nat). N ≥ H_α(n).
Goal 2.10 Develop a closed minimalistic self-verifying theory (a kind of rudimentary intuitionistic dependent type theory, I assume) with a deductive framework capable of "working under assumption", where we will be able to apply all the usual arithmetic machinery to growing functions f(n) under the assumption ENOUGH_SPACETIME_α(|n|), where α is the computational complexity ordinal of f and |n| is the number of connections in the configuration representing the argument n.
One can say that in this theory the type of a growing function like "square" would look like
square: (n: Nat) -> ENOUGH_SPACETIME_{ω²}(|n|) -> Nat.
So the function first accepts the argument, then the evidence* that it will have enough space and time to finish the calculation, and only afterwards returns a number. This corresponds to the secure real-time programming approach where a function requests a chunk of memory and a span of time for itself before beginning the calculation. (* The evidence, or witness, is just a large enough number that occupies at least the amount of space the function needs to perform the calculation; the ability of the caller to construct and present such a number confirms the sufficiency of the resources.)
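In an ordinary language the discipline can be sketched as follows. This is my own Python toy; the name enough_spacetime_witness and the crude space model (n² itself) are hypothetical. The caller must materialize a number at least as large as the space the computation needs, and the function checks this evidence before running.

```python
# Toy model of `square: (n: Nat) -> ENOUGH_SPACETIME_{omega^2}(|n|) -> Nat`:
# the second argument is the evidence, a number no smaller than the space
# the calculation needs (here crudely modeled as n^2 itself).

def enough_spacetime_witness(n):
    # The caller constructs the witness, thereby proving the resources exist.
    return n * n

def square(n, witness):
    if witness < n * n:                     # evidence too small: refuse to run
        raise ValueError("not enough spacetime")
    return n * n

print(square(7, enough_spacetime_witness(7)))   # 49
```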
“Even pure functions in programming have side effects. They use memory. They use CPU. They take runtime.” (q) Harald Armin Massa, talk @ PyCon 2010