We’re all dealing with encounters with AI these days; even if you’re actively trying to avoid it, you’re going to be subjected to it one way or another. An AI image here. A piece of AI slop on the internet. A quick research question via ChatGPT. If you’re still in the workplace, you’re probably being subjected to an almost religious crusade to foster AI use, which is pretty weird.
While watching the Hannah Fry documentary on BBC iPlayer, I thought I’d reflect on my encounters with AI.
This post uses the term AI to refer to large language models. I realise that’s very reductionist, but it seems to be the world we live in. The banner image for this post is AI-generated, since I figured I’d test the madness with something I’ve avoided stepping into.
Where do I sit?
I’m not someone who’s gone all in on AI, though many of these things are relative. When I say I’m not someone who’s gone all in, I’m comparing myself to those who’ve become a sort of religious cult leader on the subject.
I’d describe myself as an inquisitive, rigorous contrarian, which I should probably explain. I have serious concerns about the view that AI should strip away what essentially makes us human. For example, I’ve got little patience for replacing the process with the product. A glib example: you don’t write a novel; you get AI to do it. Why does this matter? Because there are many situations in which the process of getting to a product is as valuable to us as the end product itself, or even more so.
I guess, whether naive or not, I see AI as a way to enhance human engagement with the process rather than just giving me the product. A way to make the process richer or more interesting, or to get me to a point of decision or sound judgement, maybe a bit quicker.
The exploration of ideas
As a metacognitive person, I sometimes find myself exploring ideas that don’t necessarily have any end product. For example, I sometimes fall down a rabbit hole exploring a tabletop role-playing game campaign idea. I did this recently with a fantasy Avengers idea, something that’s been sitting around in my head for quite a while.
As I was thinking about this idea, I started talking to ChatGPT about it. I’ll admit it was an enthralling process. I don’t mean enthralling at a Richard Dawkins level of AI delusion, but simply the power of it, considering it had harvested the whole of the internet like the Dalek in that Doctor Who episode. It was capable of making leaps around my idea, actually enhancing it and allowing me to think about it more deeply and in different ways. It wasn’t just functional depth, but leaps on esoteric elements like themes. It also allowed me to discuss and work through some of my challenges with the idea around narrative conceits.
I guess it was like having access to a group of fellow gamers having a pretty good discussion around the idea and working it out. Some people might find this weird, but I don’t. There was no psychosis involved. I wasn’t falling into the trap of thinking it was a person. It was just a sort of search engine experience that was really cool, with more of the framework of a high-energy, engaging discussion, and interesting ideas emerging from the process. It was also a harmless topic; it’s not like I was discussing my psychological state.
But where does this cross the line? I was using it to hash out an idea for a role-playing game table. I wasn’t using it to construct something I was going to ask people to spend money on or form commercial contracts around. Is it still acceptable to use this sort of process to help write your novel? Where does using AI as a surrogate for hashing out ideas with a fellow novelist end, and where does writing your novel begin?
I’d say that’s a question people will have to individually answer.
Making expertise portable
I’ve developed a minor obsession with making expertise portable. People are going to be horrified by that statement, but I like to think I’m approaching it from the point of view that portable doesn’t mean replacement. This is another area where people will make their own judgement.
As a metacognitive person, I have a thread of consultancy running through my career history; even when that wasn’t my actual job, I’d approach things from that perspective. This means I tend to value processes that help human beings reach good decisions and make good judgements. I also tend to think this is the thing you don’t want to remove from humans, but you can enhance the process around it or make it more portable.
I experimented with YouTube first. I created a practice document around my content and the audience it was intended for, so I could assess scripts and uploaded videos against it. I could then ask Gemini to consider these things from the perspective of an experienced YouTube consultant with the context for my content and audience. Yes, I even asked it to assess my practice document based on whether people would find the content useful and engaging, and whether there was an audience for it. These things can often be circular.
I’ve since experimented with it at work, creating templates and associated practice documents for requirements, solution approaches, designs, etc. It’s been interesting, as these documents work on several levels: as a way for a more experienced person to show a less experienced one what good looks like, as something to read, and as good instruction to the AI. The AI’s ability to assess the human-produced product against that measure of what good looks like, so it can be improved, has been very useful.
It’s not replacing the process with the product, but supporting the process and helping consider the end product.
Uploading a skillset
We can’t instantly upload a skillset two minutes before needing to fly a helicopter, as they do in The Matrix, but we’re getting closer. Thanks to AI, I can now do certain things myself that I’d previously have needed to reach out to someone else for.
I might quickly research something I’d normally consult a marketing strategist about. I might work out how to write a database query that I’d normally need to wait for a database developer to do. And that’s just off the top of my head. Look, there is no way what I’m doing here replaces that expertise; I’m using AI to address the low-hanging fruit. A broad understanding of something. An SQL query that would be simple for the database developer.
But the power of not having to reach out for what the expert would consider simple, with the impact on their flow, workload, and extended elapsed time, is pretty powerful. It makes me more efficient by allowing me to progress.
I know people would say I’m not achieving anything I couldn’t get to by looking up the information on the internet through a conventional search. This is true. The key point is the phrase “could get to”. AI reduces the number of steps to get the answer. I don’t have to look up and work out how to extend my query to do something different; it gives me the answer.
Is this breaking my rule against replacing the process with the product? It probably is, and I didn’t consider that until I wrote that question just now. I guess I’m fine with replacing the process with the product when the process itself is not valuable. Is this relative to the person and the complexity? Probably.
Do I just talk to it?
Admittedly, I do. I’m not forming relationships with it or constructing some sort of AI girlfriend or anything, though I sometimes think of pushing the role-playing angle to absurd levels, like getting it to respond to a highly technical topic as a 19th-century poet or something. I also don’t do this a lot, but it does happen.
In a way, it’s more conversational searching than it is an actual conversation in the truest sense.
I recently went down a rabbit hole about Nightwish’s Wacken Open Air 2013 concert, which is both spectacular and enthralling because of the story leading up to it. It’s a bit like Queen’s Live Aid moment, but for a symphonic rock band from Finland. I’ll admit, I explored this in a Gemini conversation, which was a mix of search and a flowing chat with someone who knew more about the band, the moment, and the history than I did.
I don’t have a problem with this process as long as you keep your critical focus and maintain a clear separation: don’t anthropomorphise what you’re talking to.
My thoughts on the future
“All of this has happened before, and it will all happen again”
— JM Barrie (or Battlestar Galactica for a pop culture reference)
Look, I don’t think it’s going to be great. That isn’t an anti-technology view; it’s just common sense about how we handle things as human beings. We will drive things to a negative crash before the true advantage is cemented, because that’s what’s happened before. We will realise just as many negatives as positives, possibly more. We will let profit-seeking override any other concerns or measures.
The Industrial Revolution did not improve people’s lives for many years. Social media has proven to be both a positive and a negative for society, and that’s probably a glass-half-full view. The views of the people directing this technology seem positively anti-human at times, fostering a sort of single-winner narrative, both economic and personal. If there’s going to be a positive result at all, it’s going to be negative first.
Is this a realistic view or a pessimistic one? I’ll let you decide. It could create the Star Trek future, but do you trust us to get there? Even in Star Trek, various things drove humanity to the brink of destruction before the world we see in the show.
