Idea #6 – Project Catalyst Organization (ProCat)

The idea is for an organization (maybe an NPO, to keep it from becoming corrupt, or maybe a for-profit, to give it more capital to work with and thus make it more powerful and effective) to facilitate the creation of businesses that are less driven by the profit motive than your ordinary business. They may still be profit-oriented, but they should have parameters in place that mitigate the tendency toward profiteering per se.
Some possible solutions are:

  • Make businesses that are owned exclusively by their employees
  • Make businesses that are owned exclusively by the communities they’re situated in
  • Make businesses that are owned exclusively by the people who use their products or services
  • Make businesses that are nonprofit but not aimed at charity work; rather, they do ordinary things (like a retail store, for example) but are restricted by the legalities of a nonprofit organization

If such an organization is created, it may also handle the facilitation of community projects such as museums, parks, libraries, hospitals, art installations, roads/rails/vehicles for public transportation, etc. The idea is that a web portal would be created where members of any given community may submit ideas for projects, submit variations on submitted ideas, vote on those variations, or pledge to donate specific amounts of money to actualize those ideas.

If not enough money is pledged for a specific idea, then nobody pays; if enough money is pledged, then everyone who pledged is called upon to pay. The NPO would then handle the creation of a given project, including contracting for construction, hiring employees, and dealing with the government and the city council to get the necessary permissions and to coordinate the NPO’s duties in creating the project with the duties that can only be carried out by the local, state and federal governments.
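The all-or-nothing pledge mechanic above can be sketched in a few lines. This is only a minimal illustration; the names and amounts are hypothetical:

```python
def settle_pledges(pledges, goal):
    """All-or-nothing pledge drive: charge pledgers only if the goal is met."""
    total = sum(pledges.values())
    if total >= goal:
        return dict(pledges)  # goal met: everyone who pledged pays
    return {}                 # goal missed: nobody pays

# Hypothetical example: a $300 goal with $350 pledged, so all three pay.
charges = settle_pledges({"Ana": 100, "Ben": 150, "Cam": 100}, goal=300)
```

In practice the portal would also need escrow and refund handling, but the core rule is just this threshold test.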

A similar web portal could also be made for creating potential bills of legislation, creating modifications of them, voting on those modifications, pitching the end results to the government, and maybe even raising funds to lobby for those bills. That may be best implemented as a function of Project Catalyst, or it may be best realized as a completely separate entity.

The idea of submitting ideas, submitting variations/modifications of ideas, and voting on those variations of course could, and probably should, apply to the business-creation side of the NPO as well.

The business-creation aspect of ProCat could revolutionize the economy and workforce, giving workers more rights and better working conditions, and significantly curtailing corporate profiteering and the wealth imbalance.

Idea #5 – Card Games

Stack ‘Em (Solitaire)

Use a shuffled deck.

The object is to stack from ace to king (or 2 to ace if preferred, as long as it’s decided beforehand) in each of four stacks, one stack per suit.

You get two stacks of your own to work with (they start with no cards in them). Cards are face-up. The rules for these two stacks are:
a) you can place a card on a stack only if it’s of an equal or lower face value than the previous top card
b) you can take a card off of either stack at any time to place onto one of the four suit stacks.

You may hold at most 3 cards in your hand at any one time. You can take a card out of your hand and put it on a stack at any time.
(You may not take a card from any of the stacks and put it in your hand.)

If you have 2 or fewer cards in your hand, you may take a card from the top of the deck and put it in your hand or onto any of the stacks, if possible.

Do this process until you’re stuck or you win.

Obviously, variants of this game can be created to make it easier, such as having three stacks and/or five cards in your hand, but I find that with a little bit of practice you can win with the above rules half or most of the time.
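The placement rules above can be captured with a couple of predicates. This is a minimal sketch, with ranks represented as numbers 1 (ace) through 13 (king) and stacks as lists with the top card last:

```python
def can_place_working(rank, stack):
    """Rule (a): the card must be of equal or lower rank than the current
    top card (an empty working stack accepts anything)."""
    return not stack or rank <= stack[-1]

def can_place_suit(rank, suit_stack):
    """A suit stack builds up from the ace, so the next card must be exactly
    one higher than the stack's current height."""
    return rank == len(suit_stack) + 1

# e.g. a 5 may go on a working stack topped by a 6, but an 8 may not go on a 7
```

The "2 to ace" variant would just renumber the ranks; the predicates stay the same.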

Swap (Solitaire)

This game is similar to the Stack ‘Em game.

Use a shuffled deck.

The object is to fill up four stacks, one per suit, from ace to king, or 2 to ace if preferred but only if decided before the game starts.

You get 4 additional stacks that you can work with at will (they start out empty). Cards are face-up. At any time you may
a) take a card off the top of the deck and place onto any of these four stacks
b) take a card off of any of these stacks and put onto one of the four suit piles
c) take an entire stack and place it on top of another stack

You of course may also take a card off the top of the deck and place directly onto one of the four suit stacks.

Do this until you’re stuck or you win.

Note that, unlike in the Stack ’Em game, you may put a card of a higher value on top of a card of a lower value. Obviously, in some cases this will get you stuck when trying to take the cards off the stacks and put them up onto the suit stacks. Part of the game is figuring out when you can do this without causing a paradox (for example, the 4 of hearts is covering the 5 of spades while the 6 of spades is covering the 3 of hearts), or just crossing your fingers and hoping that it doesn’t.

Another difference is that in the Stack ’Em game, since you can’t place a higher card on a lower card, you can just keep the stacks directly vertical — good for conserving space when you need to. In this version you’ll need to look at the stack histories to know when you should or shouldn’t put a higher card on top of a lower card, so you may want to keep the stacks (or parts of the stacks) cascaded.

Grid (Multi-Player)

Getting a feel for the strategy of this game probably requires following the instructions and playing it!
Any number of players can play, although it probably becomes pointless with too many players (especially for variant 1).

This game has two variants.

Variant 1:

Use a shuffled deck.

Place 8 cards, face up, on the table in a pattern like this, where the As are:

A A A
A O A
A A A

(The cards will not all be aces; that’s just the letter used to represent their places. They’ll be whatever you draw off the top of the deck.)

The eight card positions are actually eight potential stacks.

Give each player one card, face-up. These are their personal stacks. The player with the most cards in their stack at the end of the game wins.

Place the rest of the deck, face-down, in the middle position (where the O is in the middle of the A’s)

Players take turns around the table. On a turn you first fill any missing stacks out of the eight with cards from the deck (one card per empty stack). Then you do one of the following:
a) Place one card/stack on another card/stack, as long as the top cards of the two stacks are either the same suit or the same value. Repeat as desired.
b) Take exactly one stack and place it on your personal stack. You may only do this if the top card on your personal stack is the same suit or value as the top card on said stack.
You may not pass. (You must either take a stack or place at least one stack/card atop another, unless no move is possible.)

Variant 2:

Just like variant 1 except that you can’t “repeat as desired”. You either do (a) twice, or (b) once. You cannot pass. If you do (a) and only doing it once is possible, you do it once.

Variant 3 (this variant came about because I played the game after so long that I had forgotten the exact rules to variant 2; it turned out to be a good game):

Just like variant 2 except that you can do (a) twice or (b) twice, or both (a) once and (b) once in any single turn.


This one I actually forgot the rules to, so you can ignore it. But it’s a big shame, because I taught it to my sister and she taught it to all her friends, and they played it because it was so fun.

I just remember you had a few cards in your hand, and there were a few stacks on the table, and you could place a card from your hand onto a stack under certain conditions, I think it was if the top card on the stack had the same suit or number as the card in your hand. I don’t remember if you could do anything else with the stacks on the table. I don’t remember how you win, but I guess it was by running out of cards. But I don’t remember how/when you get new cards into your hand.

You didn’t take turns in this game. You did everything as fast as possible to beat your opponents, so it required fast thinking. Sometimes two people would go for the same card at once, but one hand had to be below the other, so you knew who got it.

The Figura Language

There may be a radically different way to do grammar, one that is simpler than any other grammar and can potentially get the same things across, given the right words extant within the language. The simple rule is this: everything is an analogy. The simplest kind of statement would be two words, e.g., Christmas:Hanukkah. That says that Christmas is in some way analogous to Hanukkah. A sentence might actually be one word, as individual words may take the place of simple or complex analogies. “I am hungry” could be expressed as one word, if such a word so evolved.

Analogies may be nested to any degree, with single words replacing pairs wherever convenient. No words are pre-defined for this language; they would have to evolve on their own, and/or be borrowed from other existing languages, or all be borrowed from one language. I’d personally prefer English or Latin. English makes sense because it’s the closest thing to a universal language we have yet, and also it’s the most natural, flowy-sounding language (even according to those for whom English is not their mother tongue). Latin makes sense because it’s the best common denominator of the tree of languages that have evolved since it, regarding their words’ roots, and also it just sounds really cool.

Supposing we could summon up a suitable analogy for “is”—the “is” of attribution—and we then replaced that analogy with the word “is”, we could then say “the sun is hot” like so: “sun:hot::is”. “Is” could probably be defined as something like this: “Earth:round” (since the relationship between Earth and round is that the former “is” the latter). Thus we wouldn’t have to say “sun:hot::Earth:round” and generally go around using the idiom “Earth:round” all the time. The “is” of identity, on the other hand, would be much more concrete: it could signify A:A, or even 2+2:4, so we could say ‘43rd president:“George W. Bush”::isofidentity’ (or whatever shorter word we concoct for the “is” of identity, such as “isi”), which expands to ‘43rd president:“George W. Bush”::2+2:4’. Except that “43rd president” would actually be in analogical form, for example, something like this: “president:43::ordinal”, where ordinal could be defined as, say, “first:1”. So the final statement might be something like this: ‘president:43::ord:::“George W. Bush”::isi’, fully expanding to (as one possibility): ‘president:43::first:1::::“George W. Bush”::a:a’.
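The nesting rule (more colons bind at a higher level) is mechanical enough to sketch as a tiny parser. The function name and the nested-tuple representation here are just illustrative choices:

```python
import re

def parse_figura(expr):
    """Parse Figura colon notation into nested tuples.
    The longest run of colons is the top-level split; shorter runs nest
    below it. Three or more parts at one level form a series, as in "a:b:c:d"."""
    runs = re.findall(r":+", expr)
    if not runs:
        return expr.strip()
    top = max(len(r) for r in runs)
    # split only on runs of exactly `top` colons
    parts = re.split(r"(?<!:):{%d}(?!:)" % top, expr)
    return tuple(parse_figura(p) for p in parts)

# parse_figura("sun:hot::is")        -> (("sun", "hot"), "is")
# parse_figura("a:b::c:::d::::e:f")  -> (((("a", "b"), "c"), "d"), ("e", "f"))
```

The recursion mirrors the language’s own promise that any sentence can be expanded (or here, analyzed) term by term.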

“And” would be a relationship too, of course, but I’m not sure of what kind. It seems to signify the most general kind of relation: the relationship two things have by virtue of their being related in the given sentence. Actually, I guess it could be considered a grouping term. It even seems antithetical to the system of hierarchical pairs (which is essentially what the Figura system is), since “and” can link any number of items serially. And the same applies to “or.”

I’m loath to add more rules to the grammar just for those two words, particularly since they can be represented via hierarchical pairs, albeit awkwardly (kind of analogously to using nested ifs in programming instead of “else if”s), but I had already been thinking of allowing series anyway, of the form “a:b:c:d:[etc]”. The only problem, then, is this: if “b” in the above example represented the pair “e:f”, how would one fully expand the expression? It doesn’t seem possible to do in a particularly logical way: one could only itemize the relation “b” as a singular entity. But there’s no really big reason not to do it this way, other than that it breaks the language’s capacity for expanding any given sentence by recursively expanding its terms, within the language’s own grammar.

In Latin, semantic structure is afforded purely by “accidence”—that is, word relationships are determined solely (or almost solely?) by inflection, so a sentence is a collection of inflected words that you can, in general, put in any order you want. If inflections are used with Figura in its adoption, or creation, of words, then it adds the risk of making the semantics more complicated—by using two individually sufficient modes of grammar, the analogical and the inflectional—rather than less complicated. (If we don’t want to do that, and we use Latin for our word base, then perhaps we could use Latino sine Flexione.)

Though, on the other hand, I suppose Figura’s benefit of complexity-in-simplicity is no less beneficial if used as a feature of a new language—rather than as the sole basis of its semantics—in conjunction with any other feature of organization, as long as it’s still the sole modus for word ordering and punctuation. In other words, maybe it’d just be like having Latin—or Esperanto for that matter—on crack.

But on the other hand again, those inflections that take the place of word order in other languages may totally clash with the grammar of hierarchical analogy. I should hope that, if words in an existing language with sentence-structural implications are borrowed or otherwise used, then we don’t just forgo bothering to synthesize any analogical definitions for them based on more-principal words. Especially considering that this language is mostly intended to be an experiment in cognition: that is, is our understanding of things fundamentally based on comparison and nothing more?  (How else would organized thought arise from the so-called tabula rasa?) And if not, then why is it that the simple modality of Figura could go so far?

In the examples I’ve given thus far, I only demonstrated how the language could be done in writing. You can’t very well pronounce series of colons in speech. There are a few different possibilities for efficiently expressing the grammar in speech:

1. For each number of consecutive colons, invent a new word, which I’ll call a “structuring word”. E.g., if “:” were “an” and “::” were “kan”, then “sun:hot::is” would be said “sun an hot kan is”. This is still rather verbose—people’s jaws would probably become tired from having to say “an” and “kan” so often. We can probably completely eliminate the word for single “:”s without any loss of information, since a “:” would in that case be the only possibility between any two adjacent non-structuring words. However, I’m still not sure that’s efficient enough.
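Method 1 amounts to a simple substitution, sketched here with the structuring words from the examples ("an" and "kan", plus "san" and "man" for three and four colons; all of these words are, of course, hypothetical):

```python
import re

# hypothetical structuring words for runs of 1-4 colons
STRUCTURING_WORDS = {1: "an", 2: "kan", 3: "san", 4: "man"}

def speak(expr):
    """Replace each run of colons with its structuring word, Method 1 style."""
    return re.sub(r":+", lambda m: " %s " % STRUCTURING_WORDS[len(m.group())], expr)

# speak("sun:hot::is") gives "sun an hot kan is"
```

Dropping the word for single colons, as suggested above, would just mean mapping 1 to a space instead of "an".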

2. Invent words for specific analogy-tree structures and precede a given sentence, or sentence part, with the word for its particular tree structure. Since there are so many possible tree structures, a combination of this and Method 1 should probably be used, where only the most common tree structures get their own words.

3. Invent words for all the shortest and lowest-level tree structures, and enunciate higher-level bifurcations only as they come, using Method 1. This may be cleaner analytically than Method 2—if it’s even practical—but it might come at the considerable expense of both naturalness/organicity and dynamism in the language.

4. Invent words for various tree structures per Method 2 or 3, but allow a structural word to refer to a larger-scale structure inasmuch as its implied structure applies particularly to that scale. For example, in a:b::c:::d::::e:f, the second-level structure would be “x:::y::::z”—where “x” signifies “a:b::c”, “y” signifies “d”, and “z” signifies “e:f”. If the ⌂:::⌂::::⌂ relation is called “tar,” ⌂:⌂::⌂ is called “yar,” and ⌂:⌂ is called “an”, then the entire sentence could be said, “tar yar a b c d an e f”. In other words, “yar a b c” is embedded within “tar ⌂ ⌂ ⌂”, where the first ⌂ becomes “yar a b c” or “a:b::c”, the second ⌂ becomes “d”, and the third ⌂ becomes “an e f” or “e:f”.

Or alternatively, “tar” and “yar” could be the same word, as a second-level structure of ⌂:::⌂::::⌂ corresponds to a first-level structure of ⌂:⌂::⌂. That might only serve to be confusing, though. I picked 2 as the number of bifurcation levels per scale arbitrarily, but it doesn’t have to be 2, and it doesn’t have to be anything in particular anyway. Bifurcation levels themselves could just be indicated by the selection of word used, and scales/bifurcation levels may or may not be relative to how they’re nested.

For example, if one could say “tar tar a b c d an e f” for “a:b::c:::d::::e:f,” then the bifurcation levels implied by the first “tar” would be relative to those implied by the inner “tar”; and if one could say “tar an a b c d” for “a:b::c:::d,” then the implied bifurcation levels for “tar” would be shifted up by only 1 instead of 2 levels. Since Method 2 or 3 might be used in combination with Method 1, “a:b::c:::d::::e:f” might, for example, be expressible as “tar yar a b c d an e f”, or, perhaps, if “san” is “:::” and “man” is “::::”, “tar a b c san d man e an f,” or even “tar a b c san d man an e f”, et cetera—depending on how we define the grammar.

5. The number of possible tree structures that we’d have to invent words for could be greatly reduced if we instated a practice of rearranging sentences’ word orders to fit already-existing tree-structure words. For example, we wouldn’t need separate words for each of the structures ⌂::⌂:⌂ and ⌂:⌂::⌂, because “a::b:c” could instead be said as “b:c::a”. This may make the language sound unnecessarily strained and inelegant, though. At least it would not necessarily be a rule that we use an existing tree-structure word whenever possible; we could be allowed to use Method 1 whenever we wish.

But I am not even sure that we would have to use any of the above methods but Method 2, since we may create a lexicon of tree structures rich enough to be applicable in all practical situations.

You may have noticed that I haven’t even addressed the problem of actions yet, which is worth getting into. Take “he went to the store.” In Figura, it might be “he:store::wentto” (or we could say “he:store::goto:::did”—but we’ll go with “he:store::wentto”), but what is “wentto”? For a suitable analogy, all we really need is a comparable juxtaposition—take any well-known instance in which someone went to something. Christ went to the altar, but “Christ:altar” doesn’t really work, because Christ also did other things with the altar.

What we need here is a story.. a story of a man, who went to something, the story being mainly centered around his having gone to it. An epic or a folk tale, perhaps.. or even better.. a mythology. By being immortalized in language, this allegory would be thusly embedded in the minds of all who use it, and brought closer to the surface of consciousness every time someone says that somebody went to something. The same would apply to all the other allegories and analogies we would use for other words. So, thanks to a language, we would now have a suitably myth-based culture.

The word “wentto” as defined above wouldn’t be the only possibility to use in the sentence, naturally. The man being an object, and the store being an object, other possible analogies could apply. There could be an allegory, or even an iconic historical event, where one thing moved toward another thing. And this might even be more applicable to the above situation, in context, than the allegory that actually involves a man, according to the judgement of the person using it, for some odd reason.

Or perhaps there is a third allegory, one in which a man went to a store. And maybe this one is always used, or more generally used, for people going particularly to stores. Or perhaps the speaker just thought it more suitable for the moment. And why not 2 to 3 or more allegories that apply specifically to a person going to a thing?  In that case she could have said, “goto(3) kan he an store did,” where “goto(3)” is meant to symbolize the third word taken from these allegories and wouldn’t actually use the number 3 in natural Figura.

In that case, we’ve eliminated the highest-order relational signifier, which would have been a “san” between “store” and “did,” because, with all the other structural signifiers in place, there was no other possibility for that relation; “store” and “did” could only have been separated by a “:::”.

Anyway, the point is that there could be any number of available metaphors, overlapping or even redounding in applicability, for use in any given situation. Having a plurality of words available for any given use isn’t a new concept, of course. English, for example, is known to be an extremely “rich” language, having numerous synonyms (with usefully differing aesthetics and connotations) for most words.

In the example, “goto(3) kan he an store did,” I really wanted to order the words, “he [??] did [??] goto(3) [??] store”, but there was just no way to do that with the given system of structures. So it brought to mind an extra possibility: the possibility of significators for structures where other words occur in between analogous pairs. This would mean that, instead of signifying structures such as ⌂:::⌂, the order in which the words occur in the structure could itself be a dynamic, so that, e.g., “mar” could signify ⌂2:3::1. Thus “a:b::c” could be expressed as “mar c a b”.

Note that this syntax also allows us to invoke the same sentence word twice in the implied structure, e.g.: tra could signify ⌂1:2::2:3, so that “tra Donald Trump rock genius” would mean that Donald Trump is to a rock as a rock is to a genius. Of course, this brings into view the problem that multi-word terms being borrowed into the language could cause ambiguity in the tree structures.

Of course, I had two choices there: have subscripts index words in a sentence so that where they appear in the key reflects where the indexed words would appear in that key, so that ⌂2 means that the second letter after “mar” appears there, or the converse: have ⌂2 mean that the second letter in “a:b::c” would appear there. The first choice is the better one for almost all purposes, and is more logical. Although, instead of using ⌂’s and subscripts, we could use letters, as in “mar: b:c::a.” We could also include the inverse lookup, like so:

mar: b:c::a 🡒 “a b c” .. a:b::c 🡒 “c a b”. In other words, “mar a b c” would mean “b:c::a” and “mar c a b” would mean “a:b::c”.

Or we could illustrate it as,

mar: ⌂2:3::1 🡒 “⌂123” .. ⌂1:2::3 🡒 “⌂312”
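This positional lookup is easy to sketch as a substitution table. The words "mar" and "tra" come from the text above; the template representation is just one illustrative choice:

```python
# Each structure word maps to a template over the 1-based positions of the
# words that follow it in the spoken sentence.
STRUCTURE_WORDS = {
    "mar": "{2}:{3}::{1}",      # "mar a b c" -> "b:c::a", so "mar c a b" -> "a:b::c"
    "tra": "{1}:{2}::{2}:{3}",  # repetition: "tra x y z" -> "x:y::y:z"
}

def expand(sentence):
    """Expand a spoken sentence led by a structure word into colon notation."""
    word, *args = sentence.split()
    return STRUCTURE_WORDS[word].format(None, *args)  # pad so {1} is the 1st arg
```

Note that "tra" reuses position 2 twice, which is exactly the repetition feature described above, and also why multi-word borrowed terms would throw the positions off.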

Or better, we could do it graphically:

[graphical illustration not preserved]

And some other arbitrary and unlikely examples of what could be invented (these examples probably being needlessly complicated):

[illustrations not preserved]

And repetition:

[illustration not preserved]
The last subject I wanted to touch upon is definite and indefinite articles. Some readers might be wondering how one would express “an apple” or “the apple” in Figura.

“An apple” could be signified simply by “apple”, as with Esperanto, but for “the apple” it seems we’d need some way of expressing the attribute “as that which”, for example, as in “an apple, as that which had been mentioned before…”, but this seems hard to do in Figura. Without parts of speech, “that” has little meaning. And “as” might be tricky to represent in a language that’s only based on “as” to begin with. Even if we just directly adopted the word “the” and tried to invent a suitable meaning-analogy for it, it doesn’t relate two different things—it only relates one thing, and apple:the seems like a bad analogy, because how can something that could apply to everything be analogous to “apple”? General:specific might be a pertinent analogy, but how would it be used? 

I think something like “apple:aforementioned object” might be sufficient. Come to think of it, if we can say “aforementioned object” then we can just say “aforementioned apple.”  How do we attribute “aforementioned” to the object? Similarly, how would we say “red apple”? I guess the problem of including the definite article boils down to the problem of including adjectives. Does “apple:red” work? Consider “apple:me::ate”: “an apple is to me as [something else is to something/someone that ate one in some well-known event or allegory].” Now consider “apple:red::me:::ate”: an apple is to red as [me, as if I were a relation] in the way that [something else is to something/someone that ate one in some well-known event or allegory].”.. nope, doesn’t work.

One possibility might be just to use word-compounding, by which “red apple” would become “red-apple”. But would word-compounding constitute a secondary grammatical principle? Perhaps not: even though “red” and “apple” are separate words in English, the word base of Figura is not defined as the word base of English, so “red-apple” (as an example) can easily be infused into Figura’s word base as just another word. Also, I am partial to word-compounding languages, especially regarding noun compounding. In this scenario, perhaps red-apple would be noun compounding because, as Figura has no convention for adjectives specifically, “red” would not by definition be an adjective. It could be synonymous with the quale “redness” or the color red, which are both nouns.

It may seem, on the face of it, that requiring a hyphen for, e.g., red-apple or yellow-phone, as opposed to using “redapple” or “yellowphone”, is an arbitrary distinction, given that they are just single terms in Figura either way and that any compound word that becomes common enough is prone to eventually having its hyphen dropped anyway; and thus it may seem that such a requirement is unnecessarily authoritative. But it really does serve a useful purpose: without specified word delineations, it’s harder to tell—at least at first glance—where one word ends and another begins. In some cases, there could be syntactical ambiguity involved; in other cases, it could simply be aesthetically displeasing, throwing off the mind’s natural ability to parse. But once a compound word becomes mainstream enough to have its hyphen dropped, recognizing it as such has already become second nature. And some compound words never do become agglutinated because it doesn’t look right. (Consider “mainstream” versus “second-nature,” for example.) So authority and distinction in the use of hyphenated words are thus justified.

Adverbs, on the other hand, might never have to be used. If you want to say “he ran quickly to the store” instead of “he ran to the store,” you’d simply use another more suitable allegory in which someone ran quickly to something, perhaps even to a store.  

Going back to the original “aforementioned apple” problem, it occurs to me that we could just do it like so: “apple::object:aforementioned”—but what analogy would “aforementioned” expand to? Or could we just borrow the word directly? Or would it name a myth or popular event that we make up or recall? And could “object:aforementioned” be shortened to “the”? Or could we just say “apple:aforementioned”? And then “the” doesn’t necessarily refer to something that was previously mentioned, though it does seem to imply that the object was somehow already considered or necessitated.

One final note: while I created this language for the purposes of linguistic experiment, it has also brought my attention to the general possibility of using allegory and mythology more pervasively in existing languages. For such purposes, for any given language, ideally we should be able to create a bunch of new words—particularly or especially adverbs—which represent specific allegories, myths, parables, epics, well-known events, and so on, of its given culture. That may or may not be memetically practical, though, given that their origins would be artificial and given modern anti-mythological culture.

Idea #4 – Information Technology – Binary Sort

I happen to have two (slightly functionally different) descriptions of this idea stored away, so I’ll just copy both.

My (novel?) idea for a sorting algorithm. It doesn’t do any comparisons, and it works in O(n) time.

We do not compare any elements. We create a binary tree, and when navigating the tree we use the binary pattern of the element we’re currently sorting as the path. When we come to a 1 in the path and there’s no 1 branch, or a 0 in the path and there’s no 0 branch, we create branches until we get to our number. A node which contains 2 branches might also represent an element, as not all elements will necessarily have the same number of bits (for example, if we’re sorting strings). Each node would store a count of how many times its value (i.e., what sorted element it encodes if it’s taken as an end node) occurs in the list. That’s how we’d handle duplicate values and how a branch node can also be a value.
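As a minimal sketch of the idea (restricted, for brevity, to non-negative fixed-width integers, so counts live only at the leaves; the variable-length case described above would also allow counts at internal nodes):

```python
def trie_sort(nums, width=16):
    """Comparison-free sort of non-negative ints below 2**width via a binary trie."""
    root = [None, None]  # node = [child-for-0, child-for-1]; leaf slots hold counts
    for n in nums:
        node = root
        for i in range(width - 1, 0, -1):   # walk bits MSB-first down the trie
            b = (n >> i) & 1
            if node[b] is None:
                node[b] = [None, None]
            node = node[b]
        b = n & 1
        node[b] = (node[b] or 0) + 1        # leaf: count of duplicates

    out = []
    def walk(node, prefix, depth):
        for b in (0, 1):                    # 0-branch first gives ascending order
            child = node[b]
            if child is None:
                continue
            if depth == width - 1:          # leaf level: child is a count
                out.extend([(prefix << 1) | b] * child)
            else:
                walk(child, (prefix << 1) | b, depth + 1)
    walk(root, 0, 0)
    return out
```

Each element costs O(width) node visits to insert, which matches the O(n*l) bound discussed next.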

This has a worst-case scenario of O(n*l), where n is the length of the list and l is the average length of a member in bits, or O(t), where t is the total length of all the members in bits. An O(n log n) sort algorithm actually has to compare member values byte by byte, so for comparison it’s really O(n * l/8 * log n), or O(n * l/16 * log n) if it’s comparing words, or O(n * l/32 * log n) if it’s comparing double-words.

It might be more advantageous than other sorting methods to implement this one in assembly.

The algorithm could alternatively use separate binary trees for each number of bits in the given elements, but I don’t see how that would have any benefits.

A sorting algorithm that sorts in constant amortized time per element, proportional to the average length, in binary, of the elements being sorted

Let’s say we have a function that returns a list of {0, 1} values which is a given element’s binary representation. Now we make an array the size of our input list, which we will use as a binary tree. Each node will be a struct instance. For each element in our list to be sorted, we navigate this tree using the element’s binary sequence as the path, creating new branches as necessary. When we get to the end, we insert a reference to the original object at the beginning of a linked list of references to objects belonging at that node (i.e., equivalent elements). Then at the end of our sorting we simply walk the tree and traverse its linked lists to return our sorted list. If we want equivalent objects to appear in the same order in which they appeared in the original list, we can append them to the end of the linked lists instead and have the node store a pointer to the current last linked-list item.

This is only the naive version, though; we don’t really have to make the full path for every element. Instead of having a function that returns an object’s binary representation, we’ll have a function that returns an iterator for it. Then, instead of navigating/creating nodes to the end of the path, we navigate until we find a node that simply stores a pointer to an iterator for an element that’s somewhere underneath it. Now, since a node can only contain one such iterator pointer, when you come to such a node you must poll values from both iterators (your current element’s and the node’s) and create a path until the values diverge, at which point you create two children and put the respective iterator and element pointers in those. Of course, you’d nullify the pointers in the original node. You have a 50/50 chance of this happening on the first poll, a 3/4 chance of it happening by the second poll, etc. When an iterator exhausts itself, you delete the iterator object, nullify the iterator pointer, and start the linked list of equivalent elements for that node.
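Here is one way the lazy version might look in Python — a sketch for strings, using Python objects and lists in place of raw pointers, structs, and linked lists, and assuming byte-wise UTF-8 order is the order we want:

```python
def bits_of(s):
    """MSB-first bit iterator over a string's UTF-8 bytes."""
    for byte in s.encode("utf-8"):
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

class Node:
    def __init__(self):
        self.children = [None, None]
        self.parked = None   # (bit-iterator, element) waiting somewhere below here
        self.items = []      # elements whose bit stream ends exactly at this node

def insert(root, elem):
    it = bits_of(elem)
    node = root
    while True:
        if node.parked is not None:
            # Another element is parked here: push its next bit one level down
            # so the two paths can diverge (the loop repeats this until they do).
            p_it, p_elem = node.parked
            node.parked = None
            pb = next(p_it, None)
            if pb is None:                   # parked iterator exhausted
                node.items.append(p_elem)
            else:
                child = node.children[pb] = Node()
                child.parked = (p_it, p_elem)
        b = next(it, None)
        if b is None:                        # our own iterator exhausted
            node.items.append(elem)
            return
        if node.children[b] is None:         # fresh branch: park here and stop early
            node.children[b] = Node()
            node.children[b].parked = (it, elem)
            return
        node = node.children[b]

def walk(node, out):
    if node.parked is not None:              # a lone parked element lives below
        out.append(node.parked[1])
        return
    out.extend(node.items)                   # prefixes sort before their extensions
    for b in (0, 1):
        if node.children[b] is not None:
            walk(node.children[b], out)

def lazy_trie_sort(strings):
    root = Node()
    for s in strings:
        insert(root, s)
    out = []
    walk(root, out)
    return out
```

The early parking is what avoids building the full bit path for every element: a path is only extended as far as two elements actually share a prefix.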

Alternatively to the linked lists, we could simply have a pointer to the first instance of an equivalent member and a count value that specifies how many times it repeats, but that only works with members that don’t have any ‘hidden variables’ with respect to their binary representations. Plain old strings or numbers would work just fine that way. Sorting person objects by last name wouldn’t.

This algorithm isn’t generally suitable for mixed-type collections or any type that uses a special comparison function. Strings, ints, floating points, and fixed decimals should work fine. The above is optimized for strings. For a numerical type, all iterators are exhausted at the same branching level (8, 16, 32, or 64), although that doesn’t change the algorithm a whole lot — maybe it provides for some minor optimizations. But because comparing two numbers might be less expensive than grabbing a value from a binary iterator, we could forgo the binary representations altogether for numbers: instead of a node storing a pointer to an iterator and an element, it would simply store a number. That could be a huge improvement. And even if we still used binary representations, most numeric values probably start with lots of 0’s, so we could speed up the process by performing a BSR (or LZCNT) instruction on the value to determine the number of leading zeros, then jumping directly to a node, skipping the first portion of the path consisting of zeros, based on a table of pointers for numbers of leading zeros up to 8, 16, 32, or 64.