The Figura Language

There may be a radically different way to do grammar, which is simpler than any other grammar, and can potentially get the same things across given the right words extant within the language. The simple rule is this: everything is an analogy. The simplest kind of statement would be two words, e.g., Christmas:Hanukkah. That says that Christmas is in some way analogous to Hanukkah. A sentence might actually be one word, as individual words may take the place of simple or complex analogies. “I am hungry” could be expressed as one word, if such a word so evolved.

Analogies may be nested to any degree, with single words replacing pairs wherever convenient. No words are pre-defined for this language; they would have to evolve on their own, and/or be borrowed from other existing languages, or all be borrowed from one language. I’d personally prefer English or Latin. English makes sense because it’s the closest language to a universal language we have yet, and also it’s the most natural, flowy sounding language (even according to those for whom English is not their mother tongue). Latin makes sense because it’s the best common denominator of the tree of languages that have evolved from it, regarding their words’ roots, and also it just sounds really cool.

Supposing we could summon up a suitable analogy for “is”—the “is” of attribution—and we then replaced that analogy with the word “is”, we could then say “the sun is hot” like so: “sun:hot::is”. “Is” could probably be defined as something like this: “Earth:round” (since the relationship between Earth and round is that the former “is” the latter). Thus we wouldn’t have to say “sun:hot::Earth:round” and generally go around using the idiom “Earth:round” all the time. The “is” of identity, on the other hand, would be much more concrete: it could signify A:A, or even 2+2:4, so we could say, ‘43rd president:”George W. Bush”::isofidentity’ (or whatever shorter word we concoct for the “is” of identity, such as “isi”), which expands to: ‘43rd president:”George W. Bush”::2+2:4’. Except that “43rd president” would actually be in analogical form, for example, something like this: “president:43::ordinal,” where ordinal could be defined as, say, “first:1”. So the final statement might be something like this: ‘president:43::ord:::”George W. Bush”::::isi’, fully expanding to (as one possibility): ‘president:43::first:1:::”George W. Bush”::::a:a’.
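Incidentally, this written notation is mechanically parseable: the longest run of colons marks the top-level split of an expression, and each side parses recursively. A minimal sketch in Python (the function name and the nested-tuple representation are my own, not part of the language):

```python
import re

def parse(s):
    """Parse a written Figura expression into nested tuples.

    The separator with the most colons marks the top-level split;
    each side is parsed recursively. Assumes strictly binary pairs,
    i.e., exactly one deepest separator per (sub)expression.
    """
    runs = re.findall(r":+", s)
    if not runs:
        return s  # a bare word is a leaf
    depth = max(len(r) for r in runs)
    left, right = s.split(":" * depth, 1)
    return (parse(left), parse(right))

print(parse("sun:hot::Earth:round"))
# → (('sun', 'hot'), ('Earth', 'round'))
```

The same function handles deeper nestings such as “a:b::c:::d::::e:f”, which comes out as a left-leaning tree paired with (e, f).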

“And” would be a relationship too, of course, but I’m not sure of what kind. It seems to signify the most general kind of relation: the relationship two things have by virtue of their being related in the given sentence. Actually, I guess it could be considered a grouping term. It seems even antithetical to the system of hierarchical pairs (which is essentially what the Figura system is), since “and” can link any number of items serially. And the same applies to “or.”

I’m loath to add more rules to the grammar just for those two words, particularly since they can be represented via hierarchical pairs, albeit awkwardly (kind of analogously to using nested ifs in programming instead of “else if”s), but I had already been thinking of allowing series anyway, of the form “a:b:c:d:[etc]”. The only problem, then, is this: if, say, “b” in the above example represented the pair “e:f”, how would one fully expand the expression? It doesn’t seem possible to do in a particularly logical way: one could only itemize the relation “b” as a singular entity. But there’s no really big reason not to do it this way, other than that it breaks the language’s capacity for expanding any given sentence by recursively expanding its terms, within the language’s own grammar.
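For what it’s worth, the series form parses just as mechanically if we split at every occurrence of the deepest colon run rather than only the first. A sketch (the function name and tuple representation are mine):

```python
import re

def parse_series(s):
    """Parse written Figura into nested tuples, allowing n-ary series
    like 'a:b:c:d' in addition to plain binary pairs.

    Note this is exactly where the expansion problem shows up: if 'b'
    in 'a:b:c:d' stood for the pair 'e:f', substituting it in place
    would yield 'a:e:f:c:d', which parses as a five-item series.
    """
    if ":" not in s:
        return s  # a bare word is a leaf
    depth = max(len(r) for r in re.findall(r":+", s))
    # Split at *every* run of the deepest separator, yielding a series.
    parts = s.split(":" * depth)
    return tuple(parse_series(p) for p in parts)

print(parse_series("a:b:c:d"))      # → ('a', 'b', 'c', 'd')
print(parse_series("a:b::e:f::c"))  # → (('a', 'b'), ('e', 'f'), 'c')
```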

In Latin, semantic structure is afforded purely by “accidence”—that is, word relationships are determined solely (or almost solely?) by inflection, so a sentence is, in general, a collection of inflected words that you can put in any order you want. If inflections are used with Figura in its adoption, or creation, of words, then it adds the risk of making the semantics more complicated, by using two individually sufficient modes of grammar—the analogical and the inflectional—rather than less complicated. (If we don’t want to do that, and we use Latin for our word base, then perhaps we could use Latino sine Flexione.)

Though, on the other hand, I suppose Figura’s benefit of complexity-in-simplicity is no less beneficial if used as a feature of a new language—rather than as the sole basis of its semantics—in conjunction with any other feature of organization, as long as it’s still the sole modus for word ordering and punctuation. In other words, maybe it’d just be like having Latin—or Esperanto for that matter—on crack.

But on the other hand again, those inflections that take the place of word order in other languages may totally clash with the grammar of hierarchical analogy. I should hope that, if words with sentence-structural implications are borrowed from an existing language or otherwise used, we don’t just forgo synthesizing analogical definitions for them based on more-principal words. Especially considering that this language is mostly intended to be an experiment in cognition: that is, is our understanding of things fundamentally based on comparison and nothing more? (How else would organized thought arise from the so-called tabula rasa?) And if not, then why is it that the simple modality of Figura could go so far?

In the examples I’ve given thus far, I only demonstrated how the language could be done in writing. You can’t very well pronounce series of colons in speech. There are a few different possibilities for efficiently expressing the grammar in speech:

1. For each number of consecutive colons, invent a new word, which I’ll call a “structuring word”. E.g., if “:” were “an” and “::” were “kan”, then “sun:hot::is” would be said “sun an hot kan is”. This is still rather verbose—people’s jaws would probably become tired from having to say “an” and “kan” so often. We can probably completely eliminate the word for single “:”s without any loss of information, since a “:” would in that case be the only possibility between any two adjacent non-structuring words. However, I’m still not sure that’s efficient enough.

2. Invent words for specific analogy-tree structures and precede a given sentence, or sentence part, with the word for its particular tree structure. Since there are so many possible tree structures, a combination of this and Method 1 should probably be used, where only the most common tree structures get their own words.

3. Invent words for all the shortest and lowest-level tree structures, and enunciate higher-level bifurcations only as they come, using method 1. This may be cleaner analytically than method 2—if it’s even practical—but it might come at the considerable expense of both naturalness/organicity and dynamism in the language.

4. Invent words for various tree structures per method 2 or 3, but allow a structural word to refer to a larger-scale structure inasmuch as its implied structure applies particularly to that scale. For example, in a:b::c:::d::::e:f, the second-level structure would be “x:::y::::z”—where “x” signifies “a:b::c”, “y” signifies “d”, and “z” signifies “e:f”. If the ⌂:::⌂::::⌂ relation is called “tar,” ⌂:⌂::⌂ is called “yar,” and ⌂:⌂ is called “an”, then the entire sentence could be said, “tar yar a b c d an e f”. In other words, “yar a b c” is embedded within “tar ⌂ ⌂ ⌂”, where the first ⌂ becomes “yar a b c” or “a:b::c”, the second ⌂ becomes “d”, and the third ⌂ becomes “an e f” or “e:f”.

Or alternatively, “tar” and “yar” could be the same word, as a second-level structure of ⌂:::⌂::::⌂ corresponds to a first-level structure of ⌂:⌂::⌂. That might only serve to be confusing, though. I picked 2 as the number of bifurcation levels per scale arbitrarily, but it doesn’t have to be 2, and it doesn’t have to be anything in particular anyway. Bifurcation levels themselves could just be indicated by the selection of word used, and scales/bifurcation levels may or may not be relative to how they’re nested.

For example, if one could say “tar tar a b c d an e f” for “a:b::c:::d::::e:f,” then the bifurcation levels implied by the first “tar” would be relative to those implied by the inner “tar”; and if one could say “tar an a b c d” for “a:b::c:::d,” then the implied bifurcation levels for “tar” would be shifted up by only 1 instead of 2 levels. Since Method 2 or 3 might be used in combination with Method 1, “a:b::c:::d::::e:f” might, for example, be expressible as “tar yar a b c d e an f”, or, perhaps, if “san” is “:::” and “man” is “::::”, “tar a b c san d man e an f,” or even “tar a b c san d man an e f”, et cetera—depending on how we define the grammar.

5. The number of possible tree structures that we’d have to invent words for could be greatly reduced if we instated a practice of rearranging sentences’ word orders to fit already-existing tree-structure words. For example, we wouldn’t need separate words for each of the structures ⌂::⌂:⌂ and ⌂:⌂::⌂, because “a::b:c” could, instead, be said as “b:c::a”. This may make the language sound unnecessarily strained and inelegant, though. At least it would not necessarily be a rule that we use an existing tree-structure word whenever possible; we could be allowed to use Method 1 whenever we wish.

But I am not even sure that we would have to use any of the above methods but Method 2, since we may create a lexicon of tree structures rich enough to be applicable to all practical situations.
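Methods 1 and 2 are concrete enough to sketch in code. Here written sentences are strings and parsed sentences are nested tuples; “kan,” “san,” and “man” are the structuring words from the examples above, while the shape word “kel” is a placeholder of my own, not a proposal:

```python
import re

# Method 1: a hypothetical structuring word per colon count. The single
# colon is left silent, since between two adjacent plain words it is
# the only possibility anyway.
STRUCT_WORDS = {2: "kan", 3: "san", 4: "man"}

def speak_method1(written):
    """Convert written Figura (e.g. 'sun:hot::is') to spoken words."""
    spoken = []
    for tok in re.split(r"(:+)", written):
        if tok.startswith(":"):
            if len(tok) > 1:  # ":" itself is unspoken
                spoken.append(STRUCT_WORDS[len(tok)])
        elif tok:
            spoken.append(tok)
    return " ".join(spoken)

# Method 2: a hypothetical lexicon naming whole tree shapes, with "_"
# standing for each word slot. "an" is from the post; "kel" is invented.
SHAPES = {"_:_": "an", "_:_::_": "kel"}

def speak_method2(tree):
    """Say the shape word, then the tree's words in left-to-right order."""
    def shape(t):
        if isinstance(t, str):
            return "_", 0
        (l, dl), (r, dr) = shape(t[0]), shape(t[1])
        d = max(dl, dr) + 1
        return l + ":" * d + r, d
    def leaves(t):
        return [t] if isinstance(t, str) else leaves(t[0]) + leaves(t[1])
    return " ".join([SHAPES[shape(tree)[0]]] + leaves(tree))

print(speak_method1("sun:hot::is"))           # → "sun hot kan is"
print(speak_method2((("sun", "hot"), "is")))  # → "kel sun hot is"
```

As written, Method 2 fails (with a KeyError) on any shape not in the lexicon, which is exactly the coverage question raised above: whether a finite shape lexicon could cover all practical situations.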

You may have noticed that I haven’t even addressed the problem of actions yet, which is worth getting into. Take “he went to the store.” In Figura, it might be “he:store::wentto” (or, we could say, “he:store::goto:::did”—but we’ll go with “he:store::wentto”..), but what is “wentto”? For a suitable analogy, all we really need is a comparable juxtaposition—take any well-known instance in which someone went to something. Christ went to the altar, but “Christ:altar” doesn’t really work, because Christ also did other things with the altar.

What we need here is a story.. a story of a man, who went to something, the story being mainly centered around his having gone to it. An epic or a folk tale, perhaps.. or even better.. a mythology. By being immortalized in language, this allegory would be thusly embedded in the minds of all who use it, and brought closer to the surface of consciousness every time someone says that somebody went to something. The same would apply to all the other allegories and analogies we would use for other words. So, thanks to a language, we would now have a suitably myth-based culture.

The word “wentto” as defined above wouldn’t be the only possibility for the sentence, naturally.. The man being an object, and the store being an object, other possible analogies could apply. There could be an allegory, or even an iconic historical event, in which one thing moved toward another thing. And this might even be more applicable to the above situation, in context, than the allegory that actually involves a man—according to the judgment of the person using it, for whatever odd reason.

Or perhaps there is a third allegory, one in which a man went to a store. And maybe this one is always used, or more generally used, for people going particularly to stores. Or perhaps the speaker just thought it more suitable for the moment. And why not two, three, or more allegories that apply specifically to a person going to a thing? In that case she could have said, “goto(3) kan he an store did,” where “goto(3)” is meant to symbolize the third word taken from these allegories, and wouldn’t actually use the number 3 in natural Figura.

In that case, we’ve eliminated the highest-order relational signifier, which would have been a “san” between “store” and “did,” because, with all the other structural signifiers in place, there was no other possibility for that relation; “store” and “did” could only have been separated by a “:::“.

Anyway, the point is that there could be any number of available metaphors, overlapping or even redounding in applicability, for use in any given situation. Having a plurality of words available for any given use isn’t a new concept, of course. English, for example, is known to be an extremely “rich” language, having numerous synonyms (with usefully differing aesthetics and connotations) for most words.

In the example “goto(3) kan he an store did,” I really wanted to order the words “he [??] did [??] goto(3) [??] store”, but there was just no way to do that with the given system of structures. So it brought to mind an extra possibility: signifiers for structures in which other words occur in between analogous pairs. This would mean that, instead of signifying fixed structures such as ⌂:::⌂, the order in which the words occur in the structure could itself be part of what a structural word signifies, so that, e.g., “mar” could signify ⌂2:3::1. Thus “a:b::c” could be expressed as “mar c a b”.

Note that this syntax also allows us to invoke the same sentence word twice in the implied structure; e.g., tra could signify ⌂1:2::2:3, so that “tra Donald Trump rock genius” would mean that Donald Trump is to a rock as a rock is to a genius. Of course, this brings into view the problem that multi-word terms borrowed into the language could cause ambiguity in the tree structures.

Of course, I had two choices there: have subscripts index the spoken words, so that ⌂2 means that the second word after “mar” appears in that position, or the converse: have ⌂2 mean that the second word in “a:b::c” would appear there. The first choice is the better one for almost all purposes, and is more logical. Although instead of using ⌂’s and subscripts, we could use letters, as in “mar: b:c::a.” We could also include the inverse lookup, like so:

mar: b:c::a 🡒 “a b c” .. a:b::c 🡒 “c a b”. In other words, “mar a b c” would mean “b:c::a” and “mar c a b” would mean “a:b::c”.
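This key-and-permutation scheme is easy to mechanize. A sketch, assuming single-letter placeholders in the key; the “tra” entry and the decode function are my own illustration:

```python
# Keys map a structural word to its written template, where each letter
# names a spoken position: a = first spoken word, b = second, and so on.
# "mar" is from the post; "tra" shows a word being used twice.
KEYS = {
    "mar": "b:c::a",    # "mar a b c" → "b:c::a"
    "tra": "a:b::b:c",  # "tra x y z" → "x:y::y:z"
}

def decode(spoken):
    """Expand a spoken form like 'mar x y z' into written Figura."""
    word, *args = spoken.split()
    mapping = dict(zip("abcdefghij", args))
    # Substitute all placeholders simultaneously, character by character,
    # so argument words that are themselves single letters don't clash.
    return "".join(mapping.get(ch, ch) for ch in KEYS[word])

print(decode("mar c a b"))              # → "a:b::c"
print(decode("mar sun hot is"))         # → "hot:is::sun"
print(decode("tra trump rock genius"))  # → "trump:rock::rock:genius"
```

A multi-word term like “Donald Trump” would consume two argument slots here, which is precisely the ambiguity noted above.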

Or we could illustrate it as,

mar: ⌂2:3::1 🡒 “⌂123” .. ⌂1:2::3 🡒 “⌂312”

Or better, we could do it graphically:

[Tree diagrams for “mar” (two alternative renderings), along with some other arbitrary and unlikely examples of what could be invented—“fuxor,” “needlin,” and “anaxor” (these probably needlessly complicated)—and a repetition example, “gem.” Diagrams not reproduced here.]

The last subject I wanted to touch upon is definite and indefinite articles. Some readers might be wondering how one would express “an apple” or “the apple” in Figura.. 

“An apple” could be signified simply by “apple”, as with Esperanto, but for “the apple” it seems we’d need some way of expressing the attribute “as that which”, for example, as in “an apple, as that which had been mentioned before…”, but this seems hard to do in Figura. Without parts of speech, “that” has little meaning. And “as” might be tricky to represent in a language that’s only based on “as” to begin with. Even if we just directly adopted the word “the” and tried to invent a suitable meaning-analogy for it, it doesn’t relate two different things—it only relates one thing, and apple:the seems like a bad analogy, because how can something that could apply to everything be analogous to “apple”? General:specific might be a pertinent analogy, but how would it be used? 

I think something like “apple:aforementioned object” might be sufficient. Come to think of it, if we can say “aforementioned object” then we can just say “aforementioned apple.” But how do we attribute “aforementioned” to the object? Similarly, how would we say “red apple”? I guess the problem of including the definite article boils down to the problem of including adjectives. Does “apple:red” work? Consider “apple:me::ate”: “an apple is to me as [something else is to something/someone that ate one in some well-known event or allegory].” Now consider “apple:red::me:::ate”: “an apple is to red as [me, as if I were a relation] in the way that [something else is to something/someone that ate one in some well-known event or allegory]”.. nope, doesn’t work.

One possibility might be just to use word-compounding, by which “red apple” would become “red-apple”. But would word-compounding constitute a secondary grammatical principle? Perhaps not: even though “red” and “apple” are separate words in English, the word base of Figura is not defined as the word base of English, so “red-apple” (as an example) can easily be infused into Figura’s word base as just another word. Also, I am partial to word-compounding languages, especially regarding noun compounding. In this scenario, perhaps red-apple would be noun compounding because, as Figura has no convention for adjectives specifically, “red” would not by definition be an adjective. It could be synonymous with the quale “redness” or the color red, which are both nouns.

It may seem, on the face of it, that requiring a hyphen for, e.g., red-apple or yellow-phone, as opposed to using “redapple” or “yellowphone”, is an arbitrary distinction, given that they are just single terms in Figura either way and that any compound word that becomes common enough is prone to eventually having its hyphen dropped anyway.. and thus it may seem that such a requirement is unnecessarily authoritative. But it really does serve a useful purpose: without specified word delineations, it’s harder to tell—at least at first glance—where one word ends and another begins. In some cases, there could be syntactical ambiguity involved; in other cases, it could simply be aesthetically displeasing, throwing off the mind’s natural ability to parse. But once a compound word becomes mainstream enough to have its hyphen dropped, recognizing it as such has already become second-nature. And some compound words never do become agglutinated because it doesn’t look right. (Consider “mainstream” versus “second-nature,” for example.) So, authority and distinction on the use of hyphenated words are thus justified. 

Adverbs, on the other hand, might never have to be used. If you want to say “he ran quickly to the store” instead of “he ran to the store,” you’d simply use another more suitable allegory in which someone ran quickly to something, perhaps even to a store.  

Going back to the original “aforementioned apple” problem, it occurs to me that we could just do it like so: “apple::object:aforementioned”—but what analogy would “aforementioned” expand to? Or could we just borrow the word directly? Or would it name a myth/popular event that we make up or recall? And could “object:aforementioned” be shortened to “the”? Or could we just say “apple:aforementioned”? And then “the” doesn’t necessarily refer to something that was previously mentioned, though it does seem to imply that the object was somehow already considered or necessitated.

One final note: while I created this language for the purposes of linguistic experiment, it has also brought my attention to the general possibility of using allegory and mythology more pervasively in existing languages. For such purposes, for any given language, ideally we should be able to create a bunch of new words—particularly or especially adverbs—which represent specific allegories, myths, parables, epics, well-known events, and so on, of its given culture. That may or may not be memetically practical, though, given that their origins would be artificial and given modern anti-mythological culture.
