This week’s post is about “Semantic Folding Theory and its Application in Semantic Fingerprinting” by Webber [1]. The basic ideas were also discussed on the Brain Inspired podcast, and presented and recorded at the HVB Forum in Munich. You don’t need any particular prior knowledge to understand this post.

In my own words

The space of all concepts is enormously large. Much larger than the space of all possible things. But somehow our brains can navigate this space and find meaningful relations between concepts. How does this work, and how is this related to natural language?

In natural language, we don’t give one word to one concept. Instead, the same word may describe different concepts, depending on the context. For example, the word “organ” may refer to a musical instrument, or to an assemblage of tissues.

Intuitively, we can “add” or “subtract” words to refine concepts. For example,

\text{organ} - \text{tissue} = \text{instrument}\;,

or

\text{car} + \text{fast} = \text{sports car}\;.

But until Webber’s work (the paper discussed here), it was very difficult to make a computer handle such relations. So how does it work?

The key insight comes from neuroscience. Specifically, neuroscientists have hypothesized that the outermost layer of the brain, called the neocortex, is essentially made up of a large number of physical, two-dimensional maps of concept space, known as cortical modules. Crucially, although every point on such a map corresponds to a concept, not every concept corresponds to a single point on that map. Instead, some concepts are combinations of points on the map.

For example, there may be a point for “car”, and a point for “fast”. If both are active, then this represents “sports car”.
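As a toy illustration (all words and coordinates below are made up for the example), we can model the active points of a concept as a set of map coordinates. Combining concepts is then a set union, and removing a meaning is a set difference:

```python
# Toy model of a cortical map: a concept is a set of active (row, col) points.
# All words and coordinates are made up purely for illustration.
car = {(40, 12), (41, 12)}        # points that light up for "car"
fast = {(90, 77)}                 # point that lights up for "fast"
sports_car = car | fast           # "car" + "fast": both sets of points active

organ = {(3, 5), (100, 64)}       # "organ": instrument sense + tissue sense
tissue = {(100, 64)}              # the tissue-related point
instrument = organ - tissue       # "organ" - "tissue" leaves the instrument

print(sports_car)  # contains (40, 12), (41, 12), (90, 77)
print(instrument)  # contains (3, 5)
```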

We can abstract this idea to computer science and mimic a cortical module as a two-dimensional binary array. Say it consists of 128 \times 128 bits. To assign meaning to these bits, we take a large body of text, e.g. Wikipedia, slice the raw text into snippets, and then assign one bit in this 128 \times 128 matrix to each snippet, in such a way that snippets with similar content point to bits that are close to one another. This process is known as semantic folding.

The folding can be performed with respect to two different distance measures: associative or synonymous similarity.

The clustering of similar concepts in the semantic folding process could be done, for example, with self-organizing maps, although Webber never seems to specify the exact algorithm he uses in his work.
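Since the exact algorithm is left open, here is a minimal sketch of what the folding step could look like with a self-organizing map. It uses the third-party minisom package and scikit-learn bag-of-words features; the 16 \times 16 grid is scaled down from the 128 \times 128 map above, and the snippets are made up:

```python
# Sketch of semantic folding via a self-organizing map (SOM).
# Requires: pip install minisom scikit-learn
# Grid is 16 x 16 here for speed; the example above uses 128 x 128.
from minisom import MiniSom
from sklearn.feature_extraction.text import TfidfVectorizer

snippets = [
    "the organ is a keyboard instrument with pipes",
    "the heart is an organ made of muscle tissue",
    "a sports car is a fast car built for racing",
    "a molecule is a group of atoms bound together",
]

# Represent each snippet as a tf-idf bag-of-words vector.
vectors = TfidfVectorizer().fit_transform(snippets).toarray()

# Train the SOM so that similar snippets end up on nearby grid cells.
som = MiniSom(16, 16, input_len=vectors.shape[1],
              sigma=3.0, learning_rate=0.5, random_seed=42)
som.train_random(vectors, num_iteration=500)

# Each snippet is assigned one bit: the grid cell of its best-matching unit.
snippet_positions = {i: som.winner(v) for i, v in enumerate(vectors)}
print(snippet_positions)  # e.g. {0: (2, 13), 1: (11, 4), ...}
```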

“Semantic folding theory provides a framework for describing how semantic information is handled by the neocortex for natural language perception and production, down to the fundamentals of semantic grounding during initial language acquisition.”

Webber [1]

Now that the semantic map is created, we can derive semantic fingerprints of words. To this end, we activate all points in our semantic map that correspond to snippets in which the word appears. This yields a sparse distributed representation of the word as a single 128 \times 128 binary matrix.
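As a sketch of this step, suppose the folding phase gave us the snippet texts and a map position per snippet (like the hypothetical snippet_positions above, but on the full 128 \times 128 grid):

```python
import numpy as np

def word_fingerprint(word, snippets, snippet_positions, size=128):
    """Binary size x size map with a 1 at the position of every
    snippet containing the word: the word's semantic fingerprint."""
    fp = np.zeros((size, size), dtype=np.uint8)
    for i, snippet in enumerate(snippets):
        if word in snippet.lower().split():
            fp[snippet_positions[i]] = 1
    return fp

# Toy data: two snippets at made-up map positions.
snippets = ["the organ has pipes", "the organ is made of tissue"]
snippet_positions = {0: (3, 14), 1: (100, 64)}
print(word_fingerprint("organ", snippets, snippet_positions).sum())  # 2
```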

For a whole sentence or document, we would simply add up the maps of all the words within the document.

Words that are unspecific (a.k.a. stop-words), such as “with” or “it”, will activate points all over the map, whereas very specific words, such as “cake” or “molecule”, will only activate a few points on the map.

We eliminate the stop-words by deactivating all but the most active 2% of the points on the map. In this way, only the most important semantic points remain, and we are left with a sparse distributed representation of the entire document.
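A sketch of these two steps under the same assumptions as above: sum the binary word maps into an integer activity map, then keep only the most active 2% of points:

```python
import numpy as np

def document_fingerprint(word_maps, sparsity=0.02):
    """Sum binary word maps, then keep exactly the top `sparsity`
    fraction of points (ties broken arbitrarily)."""
    activity = np.sum(word_maps, axis=0)           # how often each point fired
    k = max(1, int(sparsity * activity.size))      # 2% of 128*128 = 327 points
    fp = np.zeros(activity.size, dtype=np.uint8)
    fp[np.argpartition(activity.ravel(), -k)[-k:]] = 1  # k most active points
    return fp.reshape(activity.shape)

# Toy usage: five random ~2%-sparse word maps stand in for real fingerprints.
rng = np.random.default_rng(0)
word_maps = (rng.random((5, 128, 128)) < 0.02).astype(np.uint8)
doc_fp = document_fingerprint(word_maps)
print(doc_fp.sum(), "of", doc_fp.size, "points active")
```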

How semantic folding and fingerprinting works, by cortical.io.

The possible applications of these fingerprints are plentiful. Essentially, everything that natural language processing is trying to do might be made possible with semantic fingerprints. Check out these demos at cortical.io to see a few applications in action.
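Most of those applications boil down to comparing fingerprints. A natural similarity measure for such binary maps is the overlap of active points; here is a minimal sketch (the Jaccard normalization is my choice, not taken from the paper):

```python
import numpy as np

def overlap(fp_a, fp_b):
    """Raw similarity: how many map points are active in both fingerprints."""
    return int(np.logical_and(fp_a, fp_b).sum())

def similarity(fp_a, fp_b):
    """Jaccard-normalized overlap in [0, 1]."""
    inter = np.logical_and(fp_a, fp_b).sum()
    union = np.logical_or(fp_a, fp_b).sum()
    return inter / union if union else 0.0

# Toy usage: two fingerprints sharing 5 of their 15 distinct active points.
a = np.zeros((128, 128), dtype=np.uint8); a[0, :10] = 1
b = np.zeros((128, 128), dtype=np.uint8); b[0, 5:15] = 1
print(overlap(a, b), similarity(a, b))  # 5 0.333...
```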

Opinion, and what I have learned

This is another one of those ideas that blew my mind. I am baffled that semantic fingerprinting does not even appear on the Wikipedia page about natural language processing.

Sparse distributed representations could just as well be used to encode visual or audio data, but to my knowledge this has not been explored yet.

Since I have just spent time studying neural processes (NPs) [2, 3], it seems clear to me that there is a close relation between NPs and the theory presented here. I wonder whether the performance of NPs could be improved by redesigning the encoder(s) to generate two-dimensional sparse distributed representations.

References

1. Webber, F. D. S. Semantic Folding Theory And its Application in Semantic Fingerprinting. arXiv:1511.08855 [cs, q-bio] (2015).
2. Garnelo, M. et al. Conditional Neural Processes. arXiv:1807.01613 [cs, stat] (2018).
3. Garnelo, M. et al. Neural Processes. arXiv:1807.01622 [cs, stat] (2018).