Comprehending AI - How Cyborg History and Performative Sciences Change the Human Experience
How do today's LLMs, like ChatGPT, compare to previous cyborg objects like GPS and search engines, and what can we learn from the history of cyborg objects? Could LLMs be the future of smart contract development? Let's find out.
For the past six months, the word on everyone's lips has been AI, specifically Large Language Models (LLMs). While proponents praise this as a milestone of human ingenuity and the herald of a new tomorrow, opponents condemn it as the death of human creativity and a driving force of societal upheaval. While these two positions may seem mutually exclusive, the truth is probably somewhere in the middle.
Even though the concept of LLMs can seem both futuristic and alien, it's worth remembering that we've seen objects like this before. We don't need to reinvent the wheel to grasp LLMs as a cultural phenomenon and make sense of the impact they may have on the human experience and the societies we live in.
In that sense, it's prudent to demystify the concept and to find a suitable framework for addressing technology as a cultural phenomenon, in the hope of giving meaning to its effect on humanity. AI is not magical, nor is it fundamentally different from other technological advances we've seen previously. Rather, it's the latest in a long line of cyborg objects that break down the barrier between man and machine, externalize human capabilities, and fundamentally change the way we interact with the world we find ourselves in.
A brief look at Cyborg History
Andrew Pickering coined the term cyborg objects in 1995 as a conceptual tool for understanding the advancements made by the post-WW2 military-industrial complex. With the advent of the performative sciences, fueled by the scientists employed by the Western military-industrial complex, the scene was set to use science as a performative tool to solve all sorts of problems. The results are visible today in the myriad of technological advancements that have forever changed the way we interact with the world.
Broadly understood, cyborg objects are objects that destabilize the border between human and machine via a feedback loop: they externalize human capabilities and prompt human action in return. The human prompts the machine, the machine takes an action, and the result prompts the human to act again. In this understanding, we're dealing with a reciprocal, interactive relationship between human and machine, in which both entities exert influence on, and prompt action from, each other.
One of the first and simplest cyborg objects to come out of the military-industrial complex was early radar technology. While radar may seem banal today, it has the same ontological structure as AI: it externalizes a human capability (aiming) to a machine, which feeds coordinates back to a human operator, who is then prompted to shoot at those coordinates. In the same sense, a power-steering servo's ability to "read" the human operator's steering and correct it via position feedback externalizes the human capability of steering and prompts the operator to drive their car in a specific way.
Take a modern example, the smartphone. The smartphone may be the cyborg object to end all cyborg objects, by virtue of the sheer number of human capabilities we can externalize through it. The smartphone externalizes everything from navigation (GPS) and recognizing awesome music (Shazam) to doing research (search engines) and remembering when your flight leaves (calendar), and on and on. My smartphone will even give me a push notification when it's time to leave my house and provide me with suitable routes to take, so I won't miss my plane, thereby externalizing planning, navigation, and research (of transport routes) all at once.
Given that AI follows the same ontological structure as smartphones, servos, and radar, we can use these as rough conceptual indicators of how we can expect humans and society to evolve with AI. Because humanity does indeed evolve with cyborg objects, by virtue of the reciprocal relationship at the center of the concept and as a direct result of externalizing capabilities that were previously internal. Take the GPS in your smartphone, for example; it's no secret that humans are considerably worse at navigating without GPS today than we were before. At the same time, we're much worse at navigating the Dewey Decimal System than we were before we had near-limitless knowledge at our fingertips. With this in mind, it's safe to say that the objects we choose to have a reciprocal relationship with change us over time and can make us dependent on them.
How Cyborg Objects affect human capabilities
While LLMs can be used for a myriad of different purposes, the basic ontological structure remains the same. A human operator prompts the LLM, which returns a message. The returned message prompts the human operator to take action. In some cases, the human operator may be dissatisfied with the result and prompt the LLM again to try and achieve a different one. In other cases, the human operator may be satisfied (or even impressed) with the result and put it to use. As the technology progresses, the latter is more and more likely to be the case. After all, I'm very rarely disappointed in my GPS these days.
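The reciprocal loop described above can be sketched in a few lines of Python. This is a conceptual illustration only: `ask_llm` and `satisfied` are hypothetical stand-ins for a real LLM API call and the human operator's judgment, not part of any actual library.

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return f"Draft response to: {prompt}"

def satisfied(response: str) -> bool:
    # Placeholder for the human operator's judgment of the result.
    return "Draft response" in response

def cyborg_loop(prompt: str, max_rounds: int = 3) -> str:
    """Prompt, inspect, and re-prompt until the human accepts the output."""
    response = ask_llm(prompt)
    rounds = 1
    while not satisfied(response) and rounds < max_rounds:
        # The unsatisfying response itself prompts the next human action:
        # a refined prompt sent back to the machine.
        prompt = f"{prompt} (revision {rounds})"
        response = ask_llm(prompt)
        rounds += 1
    return response
```

The point of the sketch is the shape of the interaction, not the stubs: machine output feeds human input, which feeds machine output again.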
The exact nature of the externalized capability, of course, depends on the purpose the LLM is used for. If I use it to write a blog article, I’m externalizing the capabilities of research, writing, spell-checking, etc. If I use it to code a smart contract, I’m externalizing the capabilities of writing good code.
So are LLMs the herald of a new tomorrow and a testament to human ingenuity? Or are they the death of human creativity and a driving force of societal upheaval? Well, it might be both, depending on how you look at it. The advent of LLMs will change the world (and us) much like the smartphone did, and it is indeed a technological marvel.
As we've seen before, when we externalize human capabilities, we become worse at exercising them. In that sense, we can probably expect humans to become worse at doing their own research, writing, spell-checking, coding, and so on, much like we've become worse at using a library or navigating with a map or printed directions from the internet. It's not all bad, though. Externalizing capabilities that are better handled by machines also serves to improve human productivity in ways that can truly change the world. Even though we may have become worse at aiming without radar, humans today can aim better (with machines) than we ever could (without). In the same manner, we're better at navigating the world (with GPS) today than we ever were (without).
In that sense, we can expect to become worse at the capabilities we externalize to LLMs, while we can expect to achieve better results by doing so. So while we may become worse coders ourselves, the code that is written will probably become better. While we may become worse at writing or spell-checking, the writing itself may very well improve.
Coin Operated Agents as a Cyborg Object
An interesting use case for LLMs, covered at the end of this blog article, is Emin Gün Sirer's proposed Coin Operated Agents (COAs). In this concept, we externalize the human capability of translating intent into code. Specifically, we specify the intent of a smart contract or transaction in human language (e.g., “A lending contract that…”, “This transaction pays $5000 to the fundraising effort to resurrect the Firefly series, but only if the director can raise a sum total of $30m”).
Interestingly, humans are already pretty bad at translating intent into code and vice versa. The massive amount of funds lost to malicious contracts over the years is a testament to this. In that sense, we probably don't have much to lose capability-wise (well, coders do, but they're usually not the ones getting scammed by malicious contracts) by externalizing code-writing to an LLM. On the other hand, we may very well have the world to gain if COAs can actually rid the world (or the subnet) of malicious contracts forever. So what does this mean for human creativity? That depends on where we understand creativity to be situated. Humanity will probably become worse at writing creative code, while we may very well become better at formulating creative intentions.