Month: April 2020

CH and GCH

I’m not even close to being an expert in mathematics, but I have some ideas regarding the continuum hypothesis and the generalized continuum hypothesis.


Regarding the continuum hypothesis, I think one possible way to get a cardinality between ℵ₀ and 2^ℵ₀ might be to define something like the real numbers, but where certain elements of the sequence composing each number, or patterns of elements, are equivalent to certain other elements or patterns.

For example, you specify your numbers in base 10 and define that all 8’s are equivalent to 9’s, so 0.7652861066492217 would be equal to 0.7652961066492217, 0.7652861066482217, and 0.7652961066482217. (I’m not sure whether it would become an issue that 1=0.999…=0.888…, 2=1.999…=1.888…, etc., so you might not be able to have any whole numbers, but that’s just an artifact of this particular example. :P Also, making one numeral equivalent to another is probably the same as working in base 9 instead of base 10, which shouldn’t change the cardinality. So maybe that’s a bad example, or maybe this disproves my whole tack.)
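To see why the 8-equals-9 rule is probably just base 9 in disguise, here’s a quick Python sketch (the helper name is my own) that normalizes decimal strings by mapping every 8 to a 9, so equivalent numbers share one canonical representative:

```python
def normalize(decimal_string):
    """Map each '8' to '9' so equivalent numbers share one representative."""
    return decimal_string.replace("8", "9")

a = "0.7652861066492217"
b = "0.7652961066482217"
print(normalize(a) == normalize(b))  # True: both normalize to 0.7652961066492217

# After normalization only 9 distinct digit values remain (0-7 and 9),
# which is why this looks like working in base 9.
print(len(set("0123456789".replace("8", "9"))))  # 9
```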

Another example would be to use binary and define that all sequences of {0, 1, 0} are equivalent to {0, 1, 1}. So 0.3125, or {{}, {0, 1, 0, 1}}, would be equivalent to 0.4375, or {{}, {0, 1, 1, 1}}. (In this notation I’m defining real numbers as sequences of two sequences, where the first sequence is the integer part and the second is the fractional part. Also, a better example wouldn’t allow for numbers/sequences where you have to choose between multiple paths of substitution; I’m not sure that’s even possible in base 2.)
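Here’s a tiny Python sketch (the function name is my own) checking the two values in that example by evaluating fractional bit sequences:

```python
def fraction_value(bits):
    """Value of a fractional bit sequence: bits[i] contributes 2**-(i+1)."""
    return sum(b * 2 ** -(i + 1) for i, b in enumerate(bits))

original = [0, 1, 0, 1]      # 0.0101 in binary
print(fraction_value(original))  # 0.3125

substituted = [0, 1, 1, 1]   # applying the 010 -> 011 rule to the first three bits
print(fraction_value(substituted))  # 0.4375
```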

The hope is that this would create a set whose cardinality is smaller than that of the real numbers but larger than that of the integers. I have a hunch that you’d need to define your equivalencies in a way where they occur an infinite number of times across the set of all real numbers.


Regarding the generalized continuum hypothesis, my idea is this: an integer is a sequence of digits, while a real number is two sequences of digits, and between any two consecutive integers there is an unlimited number of possible fractional parts, where the fractional part is the second sequence. So, naturally, to get to the next cardinality, you’d just add a third sequence: between any two consecutive real numbers would be an unlimited number of possible values for the third sequence. Basically, the set of this type of number is an infinite set of infinite sets of infinite sets, and it’s all ordered. For ℵ₃ you’d need numbers comprising four sequences, and so on and so on.
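One way to make the “it’s all ordered” claim concrete is a lexicographic comparison over tuples of digit sequences. This is just my own sketch of the idea, not anything standard:

```python
def compare(a, b):
    """Lexicographic comparison of tuples of digit sequences.
    Returns -1, 0, or 1. Shorter sequences are padded with trailing 0s."""
    for seq_a, seq_b in zip(a, b):
        length = max(len(seq_a), len(seq_b))
        pa = seq_a + [0] * (length - len(seq_a))
        pb = seq_b + [0] * (length - len(seq_b))
        if pa < pb:
            return -1
        if pa > pb:
            return 1
    return 0

# Two "three-sequence numbers" that agree on the first two sequences
# and are ordered by the third:
x = ([1], [5], [0, 1])
y = ([1], [5], [0, 2])
print(compare(x, y))  # -1: x sorts before y via the third sequence
```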


P≟NP

I don’t have a solution for P≟NP, but I may have some insight into what direction we could take to arrive at one.

My theory is that there are certain concepts and shortcuts in our mathematical/computational language that hide what we’re really doing on an absolute level.

An example would be working with numbers. Essentially, a number is a sequence of elements (or, if it’s a real number, two sequences: one before the decimal point and one after; if it’s a complex number, four sequences). The simplest way to specify the range of the elements would be to use base 2 and consider the range to be {0, 1}, but you can use any base you want.
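As a quick illustration of a number as digit sequences, here’s a Python sketch (the function name and the truncation length are my own choices) that splits a non-negative number into its two base-2 sequences:

```python
def to_sequences(x, base=2, n_frac=4):
    """Split a non-negative number into (integer digits, fractional digits)."""
    integer_part = int(x)
    int_digits = []
    while integer_part:
        int_digits.append(integer_part % base)
        integer_part //= base
    int_digits.reverse()

    frac_digits = []
    remainder = x - int(x)
    for _ in range(n_frac):  # truncate the fractional sequence at n_frac digits
        remainder *= base
        frac_digits.append(int(remainder))
        remainder -= int(remainder)
    return (int_digits or [0], frac_digits)

print(to_sequences(5.3125))  # ([1, 0, 1], [0, 1, 0, 1])
```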

A number is not just a sequence, though; certain sequences being “numbers” implies certain relationships between them and certain possible operations on them. If, instead of writing “*”, we spelled out the algorithm on the sequences involved that multiplication reduces to, we would be using a language that more directly reflects what we’re actually doing.
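For instance, here’s roughly what “*” reduces to on base-2 digit sequences: the schoolbook shift-and-add algorithm, spelled out (my own sketch, using little-endian bit lists for convenience):

```python
def multiply(a_bits, b_bits):
    """Multiply two numbers given as little-endian base-2 digit lists."""
    result = [0] * (len(a_bits) + len(b_bits))
    for i, a in enumerate(a_bits):
        carry = 0
        for j, b in enumerate(b_bits):
            total = result[i + j] + a * b + carry
            result[i + j] = total % 2   # keep the low bit in place
            carry = total // 2          # carry the rest leftward
        result[i + len(b_bits)] += carry
    return result

# 5 (binary 101, little-endian [1, 0, 1]) times 3 ([1, 1]) is 15:
print(multiply([1, 0, 1], [1, 1]))  # [1, 1, 1, 1, 0], i.e. binary 1111
```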

I think the first stepping stone toward solving P≟NP would be finding a language that wholly reflects what we’re actually doing when we compute: not a language in which it’s merely possible to directly reflect what you’re doing, but one in which it’s necessary.

I think of particular importance would be how we define selecting a limited number of members from a larger set. An analogy would be the difference between procedural programming and constraint programming; or between constraint programming and something like functional programming where the executor doesn’t do anything sophisticated to figure out how to solve problems; or between regular functional programming, where the executor is sophisticated, and a kind of limited functional programming, where you have to spell out anything sophisticated/combinatorial/quadratic/whatever within the code itself.

So, for example, when defining prime numbers in this new mathematical language, the very act of defining them would necessitate implementing some kind of algorithm for selecting them.
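To make that tangible, here’s one concrete selection algorithm for the primes, the sieve of Eratosthenes, in Python:

```python
def primes_up_to(n):
    """Select the primes in [2, n] by crossing out every multiple."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Any such algorithm would do, of course, which is exactly the tension I get into next.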

But then, you’d think that if the nature of the definitional language makes it necessary to define an algorithm for selecting the primes, then it wouldn’t be arbitrary which algorithm you select; yet there is definitely more than one possible algorithm to implement. Maybe the reconciliation of those two facts is that this new language would make it obvious how all solutions that select primes (at least from the same range, be it finite or infinite) are equal or isomorphic to each other. And if it can do that, then it wouldn’t be too far a leap to expect it to make obvious, for our purpose of proving P≟NP, how the solution to any NP-complete problem leads to the solution to any other NP-complete problem.

Not that this gets you all the way to solving P≟NP, but the point is that once you have this language, you’re way closer to the essence of all problems and solutions, which should naturally lead to insights into solving P≟NP. It might still be a lot of work, but at least we’d have a starting point, whereas it seems that currently we’re clueless as to where to even start.