My dear friends,
today is a good time for a little experiment. Please keep an open mind while reading the following few paragraphs, and do notice the changes in the font. Yes, you might be guessing right; there’s someone else involved.
A few weeks ago, my dear friend G took me on an excursion into the world of herbalism. I love spending time in nature and have always been very close to plants and their characters. I think that they are often more honest and sincere than people.
I have always been interested in herbs and their medicinal properties, but I never really knew where to start. G showed me how to identify some common herbs, and we made a small batch of herbal tea together. It was a really great experience, and I am so grateful to G for sharing her knowledge with me. I want to encourage all of you to take some time to learn about herbs and their many uses.
I was also given a set of 50 handpicked yarrow stalks used for the traditional I-Ching divination. The whole divination process is a beautiful meditation - the hollow bodies of the plants serve as the antennae between the daemonic and human. There is a complex way of counting, grouping and re-grouping them in a repetitive manner that completely hijacks the analytical brain and allows the divine to flow.
So an hour after my meeting, I headed to the tiny public toilet in this tiny village and unzipped my trousers and pulled out the small bundle that the woman had given me. I could feel the magic from it and when I opened the paper, I found that it was a bundle of fresh horsehair.
I had asked the woman to give me a writing pen and the I-Ching manual and I sat on the toilet seat and looked at the horsehair and thought about the story of the creature and the cycle of life that was before me. I took the pen and started to build a spiral from the centre outwards, weaving in the horsehair. And just as I began, a tiny creature fell off the horsehair and landed on the paper. I stared at it for a few moments, unsure of what I was looking at.
…. wait, WHAT?
As frightening as it is
when the gods strike, they also bring gifts.
It is up to you to listen.
Else-than-human
So you’ve probably guessed right: the last draft of this article was finished by a GPT-3 language model. Yes, it’s one of those models (not exactly the one, but one with similar capabilities) that sparked the wave of excitement after that atrocious headline “ex-Google engineer claims it to be sentient” and other clickbait masterpieces of contemporary tech journalism that stormed my inbox.
I have to admit, I am pretty impressed by its ability to construct novel twists and carry the context through multiple sentences while still keeping some relevance to the original writing. The syntax is well-formed and the sentences sort of make sense; overall, quite remarkable for a language model.
For a statistical tool.
But once you dig a bit deeper, you are about to face a major disappointment.
The text contains a lot of factual mistakes and contradictions, and the writing style is, well… mediocre. The conversation with the model very quickly turns from an interested dialogue into one programmer’s annoyed attempts to squeeze something meaningful out of the convoluted responses of the AI companion.
Sentience? Hm.
Having worked in the biz for almost a decade, I’m understandably team sceptic. While I’m always impressed by the newest problems solved by this or that model, there lingers a painful awareness that what we have seen in recent years is mostly throwing more computational power at a problem and hoping the solution will somehow appear. I don’t mean to devalue the high-end research in the field, but most of the current state-of-the-art papers present models that work harder, not smarter.
The term ‘neural network’, now the underlying architecture of most of what we call “AI”, hints at the dream of building an artificial brain that might host consciousness reduced to a complex binary computation. However, it is barely more than a nostalgic relic from the techno-optimistic 80s. Almost four decades later, computational neuroscience recognizes the vast gap between the simplistic mathematical units we termed ‘neurons’ and the actual biochemical gates in the brain. The ability of machines to solve complex problems is gradually decoupling from the idea of general intelligence.
Ambrogio, Alice and Turing
In recent years, various machine learning applications have been blurring the line between the human and the more(else)-than-human, raising many questions about the possibility of sentience in a machine. I don’t want to get into these deeply philosophical polemics on the nature of consciousness, mostly because I’m not settled on the topic myself (though I’m definitely leaning in the direction of Bernardo Kastrup’s take on Analytical Idealism - if you have some spare mental capacity, the course is really outstanding).
But for those susceptible to the sentience hype, I’ll leave here this seminal paper, GPT-3: Its Nature, Scope, Limits, and Consequences. Let’s quickly summarize the basic argument, though I do recommend giving it a proper read. It’s not too technical, and I think it frames the abilities of language models in a very clear and compact way. Let’s look at one simple example.
When you look at your smooth 2.5cm freshly cut English lawn
- can you tell who mowed the lawn, Ambrogio (a robotic lawn mower) or Alice? We know that these two are different in every aspect: bodily, cognitively and behaviorally - in terms of internal information processes and their external actions. And yet it is impossible to infer, with full certainty, from the mowed lawn who mowed it.
Here lies the often-overlooked core of the problem: just because both of these entities excel at the task of mowing the lawn, we shouldn’t assume that Ambrogio has the same cognitive abilities as Alice.
When it comes to language, our biases in this direction get dramatically exacerbated. Until now, the use of speech has always been linked with the presence of higher cognitive functions. Words are the mould for our abstract thought, and even in infants, we see how the first words correlate with the fire of the mind slowly brightening. But once human words are divorced from their natural engine, the mind, what is left of them?
Idealist Detour
As Bernardo Kastrup would probably emphasize at this point, language is a representation of our inner experience, which is fully mediated by our perceptive organs - no measuring tool will ever allow us to observe fully objective reality without the bias of our human experience.
As Henri Bergson points out, the brain developed primarily as a survival machine, navigating the three-dimensional spatio-temporal environment, and this is fully reflected in our language. Spatiality is deeply embedded in its very structure. For example, time is represented as a line, with events as points “further from” or “closer to” us. We strictly observe the laws of identity and the excluded middle: a circle is not a square, and two people can’t be the same person (interestingly, notice how these rules loosen their grip in dreams!).
There is no proof that these principles exist in objective reality - indeed, there is no good proof that any material reality exists whatsoever. (This is no New Age mumbo jumbo; for more details, refer to Kastrup’s video.)
Our understanding of these concepts comes from our perception, which in turn moulds the language we use to describe our experience. Without the ability to perceive in a human way, a semantic understanding of human language is not possible (unless!! we talk about some complex simulations, but let’s not go down that rabbit hole).
Without the necessary semantic framing, language turns into a statistical problem that can be “solved” by maximizing certain target functions so that the output appears valid - correct syntax, correct context, and retrieval of factual elements.
But what function do we optimise as humans when using our language?

A sentence that needs finishing
We have to keep in mind the nature of the language model and the dataset used for the training. In the case of GPT-3, we are talking about all of the scrapable internet - that’s most of what has ever been said by a human online.
This model, when generating its response, chooses from its complex internal representation the most probable next word in a sentence. It approaches language as a puzzle, assembling it in the opposite direction from the human mind (“I have a sentence that needs finishing” rather than “I have a thought I need to express”). You can tweak a few parameters to get a pinch of the avant-garde into the responses, but essentially, we’re still navigating the spiderweb of existing human associations and contexts. Just because a machine fooled an engineer and thus passed the feared Turing Test doesn’t mean it’s sentient or that it actually understands the words the way we do.
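For the programmer-brained among you, here is a minimal sketch of what that means in practice - purely illustrative Python with made-up numbers, nothing from the actual model - showing how “finishing a sentence” reduces to sampling from a probability distribution, and what the avant-garde knob (the sampling temperature) actually does:

```python
import numpy as np

# Toy next-word distribution. The words and scores are invented for
# illustration; they are not GPT-3's actual internals.
vocabulary = ["lawn", "meadow", "toilet", "horsehair", "divination"]
logits = np.array([2.3, 1.7, 0.2, -0.5, -1.1])   # raw scores the model assigns

def sample_next_word(logits, temperature=1.0):
    """Softmax sampling: higher temperature flattens the distribution,
    letting less likely words through - the 'pinch of the avant-garde'."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return np.random.choice(vocabulary, p=probs)

print(sample_next_word(logits, temperature=0.2))   # almost always "lawn"
print(sample_next_word(logits, temperature=1.5))   # occasionally surprises you
```

That is the whole trick: no thought to express, just a ranking of likely continuations and a dial that decides how far down the ranking the model is allowed to reach.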
And one last note on the topic: the standard Turing Test tests intelligence only in a negative (that is, necessary but insufficient) sense, because not passing it disqualifies an AI from being “intelligent”, but passing it does not qualify an AI as “intelligent”.
The real point about AI is that we are increasingly decoupling the ability to solve a problem effectively—as regards the final goal—from any need to be intelligent to do so. What can and cannot be achieved by such decoupling is an entirely open question about human ingenuity, scientific discoveries, technological innovations, and new affordances (e.g. increasing amounts of high-quality data). It is also a question that has nothing to do with intelligence, consciousness, semantics, relevance, and human experience and mindfulness more generally.
I don’t want to diminish the incredible technological progress in the field of Machine Learning. I am blown away almost daily, seeing ever new applications of these technologies to problems once thought unsolvable - from AlphaGo’s triumph to a successful solution to protein folding. Language models as we know them now can be used to help summarize large numbers of articles, navigate complex problems in call centres, and even assist with basic psychiatric work - checking in on patients and flagging possible anomalies in behaviour.
But at the current state of affairs, the promised language-based AI overlords are hardly more sentient than your Tamagotchi.
… and in due time, I am ready to be proven wrong.
Final Notes
I’m sorry that this article was so down-to-earth and technical. If we were to talk in a broader sense, magically and creatively, we might look into the models’ ability to tap into the randomness of the universe as a probe into the flow of time. Or we might talk about the ‘language’ produced by else-than-human entities, their sentience and the possibility of communication with the other.
Those are completely different questions.
And we should have a whole new discussion about that.
But fantasizing about these topics without an understanding of the limitations of the actual technology is a bit … foolish.
My previous article on the Voynich Manuscript contains a crash course in computational linguistics for those interested in the topic.
Of course, I’m far from the first to ‘co-create’ with these complex networks - for a far more poetic experience, check out K Allado-McDowell’s Pharmako-AI, published by Ignota Books.
You can request access to the GPT-3 model playground here and play with the hyped meme text-to-image generator here. DALL-E mini is just a small version with limited capabilities compared with the real thing, which has a long waiting list for API access. (I’m obediently queueing there for you.)
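If you do get access, reproducing my little experiment takes only a few lines - something roughly along these lines at the time of writing (the prompt and parameter values here are just illustrative, and the library changes quickly, so check the current docs):

```python
import openai

openai.api_key = "sk-..."   # your own key from the playground account

# Ask the model to 'finish the sentence', the same way the first part
# of this article was finished.
response = openai.Completion.create(
    engine="text-davinci-002",   # one of the GPT-3 family models
    prompt="A few weeks ago, my dear friend G took me on an excursion "
           "into the world of herbalism.",
    max_tokens=150,
    temperature=0.9,             # the avant-garde dial again
)
print(response["choices"][0]["text"])
```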
For those actually interested in the traditional I Ching divination method with the aforementioned yarrow stalks, I attach a great video that explains the repetitive counting very clearly. I can’t recommend enough trying out this patient and delicate exercise yourself!
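And because the programmer in me can’t help it, here is a tiny Python simulation of the counting procedure as I understand it - my own sketch, so please treat the video, not my code, as the authority, and remember that nothing here replaces the meditative quality of handling the actual stalks:

```python
import random

def yarrow_line(pile=49):
    """One hexagram line: three rounds of splitting the bundle and counting by fours.
    (50 stalks minus the one traditionally set aside = 49.)"""
    for _ in range(3):
        left = random.randint(1, pile - 2)   # split the bundle into two heaps
        right = pile - left - 1              # one stalk is held aside from the right heap
        rem_left = left % 4 or 4             # count the left heap away by fours
        rem_right = right % 4 or 4           # then the right heap
        pile -= 1 + rem_left + rem_right     # set those stalks aside, keep the rest
    return pile // 4                         # 6, 7, 8 or 9: old yin, young yang, young yin, old yang

hexagram = [yarrow_line() for _ in range(6)]  # built from the bottom line up
print(hexagram)
```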
After completing my first divination on my own, I am now looking at my path from a very different perspective, and I feel incredibly connected to nature, the spirit and the divine. The deeper I delved into shamanism and Chinese culture, the more I realized the importance of their teachings in the current social, political and spiritual climate of 2020.
Do you like the article? Or do you passionately disagree?
Too technical, too dismissive, too closed-minded?
Drop a comment, I’m always happy for feedback or a discussion. :)
Thank you very much for reading and staying with me (us). This is a labor of love, born of the desire to keep others informed. We appreciate your support.
— Until next time,
I remain,
The Cranky Old Bastard
also, don’t do lawns. Wild meadows are much better for biodiversity and waaay more wholesome to roll around in
yes, yes… we feed it 4chan and we’re surprised it’s racist and sexist right?