# INTEGERS

Files

- main.py
- digits.py
- exceptions.py
- integers.py
- README.md
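Only `main.py` is reproduced in full below; the other modules are not shown. As a rough, hypothetical sketch of what `digits.py`, `integers.py`, and `exceptions.py` might provide (only the names match the imports in `main.py`; the tuple-based representation is an assumption, not the project's actual encoding):

```python
# Hypothetical sketch of the supporting modules, collapsed into one file.
# Only the public names (DIGITS, INTEGERS, digit_eq, digit_ne, integer_eq,
# integer_ne, ProofInvalid) mirror main.py; everything else is assumed.

class ProofInvalid(Exception):
    """Raised when both A == A and A != A are decided True."""

# The "alphabet": bare symbols, deliberately distinct from Python ints.
ALPHABET = ('0', '1', '2', '3', '4', '5', '6', '7', '8', '9')

# A "digit" is derived from the alphabet: here, a tagged one-symbol tuple.
DIGITS = [('digit', s) for s in ALPHABET]

# An "integer" is derived from digits: a tagged sequence of digits.
INTEGERS = [('integer', (('digit', '4'), ('digit', '2')))]  # e.g. 42

def digit_eq(a, b):
    # Decide equality structurally instead of assuming it: both operands
    # must be digit-tagged tuples carrying the same alphabet symbol.
    return (isinstance(a, tuple) and isinstance(b, tuple)
            and a[:1] == ('digit',) and b[:1] == ('digit',)
            and a[1] == b[1])

def digit_ne(a, b):
    return not digit_eq(a, b)

def integer_eq(a, b):
    # Two integers are equal iff their digit sequences are pairwise digit_eq.
    if not (isinstance(a, tuple) and isinstance(b, tuple)):
        return False
    if a[:1] != ('integer',) or b[:1] != ('integer',):
        return False
    if len(a[1]) != len(b[1]):
        return False
    return all(digit_eq(x, y) for x, y in zip(a[1], b[1]))

def integer_ne(a, b):
    return not integer_eq(a, b)
```

Under this sketch, `digit_eq(('digit', '1'), 1)` is `False`, which is one way to read the "digit(1) != int(1)" remark in the comments below.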

main.py


```
# This is a thought experiment that goes roughly as follows:
# 1. Abandon ALL axioms!
# 1.1 Abandon set theory
# 1.2 Abandon the classic law of identity (key point!)
#     Reason: mathematicians draw a distinction between provability and
#     decidability. x = x is ASSUMED with no proof-of-work, and so
#     proof-of-work we shall get.
# 2. Start with a lambda universe - functions only
# 3. Invent symbols/an alphabet (NOT DIGITS): 1,2,3,4,5,6,7,8,9,0
# 4. Derive that which we call "digits" from the alphabet.
# 5. Derive that which we call "integers" from digits.
# 6. Recognize that we are re-inventing arithmetic compilers
# 7. Recognize that we knew all along that we were re-inventing arithmetic compilers
#
# The purpose of this is - whatever :) Fun?
# The value of this seems to be that by going this route every single TYPE
# (digit, integer) has been proven (DECIDED, not ASSUMED) sound and valid,
# giving us an actual LAW rather than an axiom of identity.
# In doing so we get to measure the computational complexity of deciding the
# truth-value of digits and integers empirically - something philosophers and
# mathematicians assume to be constant, given and trivially O(1), ignoring
# the fact that x = x is undecidable for infinities.
#
# If we can measure the cost-of-proof and provide proof-of-work, we have
# unified the concepts of "provability" and "decidability".
#
# Every object that has been PROVEN adheres to
#   for all x: x = x
# where the MEANING of "=" is overloaded and type-dependent, e.g.
#   digit_eq()   compares digits
#   integer_eq() compares integers
#   float_eq()   compares floats
#
# Obviously, I am doing this in a weakly-typed language, so whatever.
# As long as the interpreter doesn't explode we are consistent-ish :)
# Besides, let me evangelise to you why paraconsistent logics are better :)
# If the POC shows any merit this can always become something else.
#
# Most interesting result so far:
# digit(1) != int(1) :))))
import pprint

from digits import *
from integers import *
from exceptions import ProofInvalid
from contexttimer import Timer

# Prove all theorems about the digits
COMPLEXITY_OF_DIGITS = {}
for axiom in DIGITS:
    COMPLEXITY_OF_DIGITS[axiom] = {}
    # Positive claim: A == A => True
    A = False
    with Timer() as t:
        A = digit_eq(axiom, axiom)
    COMPLEXITY_OF_DIGITS[axiom]['A'] = t.elapsed
    # Negative claim: A != A => False
    B = True
    with Timer() as t:
        B = digit_ne(axiom, axiom)
    COMPLEXITY_OF_DIGITS[axiom]['B'] = t.elapsed
    # Validate the excluded middle: A and B must not both hold
    if A and B:
        raise ProofInvalid('A: {}, B: {}'.format(A, B))

# Prove all the theorems about the integers
COMPLEXITY_OF_INTEGERS = {}
for axiom in INTEGERS:
    COMPLEXITY_OF_INTEGERS[axiom] = {}
    # Positive claim: A == A => True
    A = False
    with Timer() as t:
        A = integer_eq(axiom, axiom)
    COMPLEXITY_OF_INTEGERS[axiom]['A'] = t.elapsed
    # Negative claim: A != A => False
    B = True
    with Timer() as t:
        B = integer_ne(axiom, axiom)
    COMPLEXITY_OF_INTEGERS[axiom]['B'] = t.elapsed
    # Validate the excluded middle: A and B must not both hold
    if A and B:
        raise ProofInvalid('A: {}, B: {}'.format(A, B))

pp = pprint.PrettyPrinter(indent=4)
pp.pprint("##### Complexity of digits")
pp.pprint(COMPLEXITY_OF_DIGITS)
pp.pprint("##### Complexity of Integers")
pp.pprint(COMPLEXITY_OF_INTEGERS)
```
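`contexttimer` is a third-party package. For reference, the `with Timer() as t: ... t.elapsed` pattern used above can be approximated with the standard library alone (a minimal stand-in sketch, not the actual `contexttimer` implementation):

```python
import time
from contextlib import contextmanager

@contextmanager
def timer():
    # Mimics the interface main.py relies on: the context object exposes
    # .elapsed (seconds as a float) once the with-block exits.
    class _T:
        elapsed = None
    t = _T()
    start = time.perf_counter()
    try:
        yield t
    finally:
        t.elapsed = time.perf_counter() - start

# Usage mirrors main.py:
with timer() as t:
    _ = sum(range(10_000))
print(t.elapsed)
```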