Paul / 20.04.2026

The Sparring Partner I Never Had

[Image: HAL]

There's a quote often attributed to Socrates — or Plato, depending on who's telling the story — warning that writing things down would be the ruin of human memory and genuine thought. The argument went something like this: if you can look something up, you'll never truly know it. The illusion of knowledge would replace the real thing.

Socrates, of course, never wrote a word himself. We only know his ideas because Plato wrote them down.

The irony is almost too perfect. And it is the irony I keep returning to every time someone tells me that using AI is somehow cheating, or lazy, or a threat to human intelligence. Because we have been here before. Many times. With writing. With the printing press. With calculators. With search engines. The moral panic is always structurally identical: a new cognitive tool arrives, and a chorus of voices insists that this time, finally, we've gone too far. This time we really will make ourselves stupider.

We never do. We always just change what we have to be smart about.

[Image: The Mechanical Turk]

What I Actually Mean by Assistive

I want to be precise about something, because the word "assistive" carries weight. In the accessibility world — a world I have a lot of respect for — assistive technology means something specific. It's a tool that compensates for a deficit, one that gives someone access to something they couldn't reach otherwise. A screen reader. A hearing loop. A wheelchair ramp.

When I call AI assistive technology, I'm being deliberate. I'm saying that most of us have a deficit, and we don't talk about it, because it isn't the kind of deficit that gets you a blue badge.

The deficit is this: genuine intellectual discourse is extraordinarily hard to come by in everyday life.

I'm not being snobbish about this. I work in communities, I run drop-in sessions, I talk to people from all walks of life, and I find genuine intelligence and curiosity everywhere I look. But there's a difference between intelligence and dialectical engagement — the kind of conversation where you take an idea, stress-test it, break it apart, argue the opposing case, synthesise something new. In contemporary life, that kind of conversation carries a real and immediate social cost. Intellectualism is treated less as a pursuit and more as an accusation. An out-group signal. Something vaguely suspect.

Try bringing up dialectical materialism at a dinner party. Or epistemology. Or the long-run implications of enclosure as a historical precedent for digital intellectual property law. See how long you stay on the guest list.

And so most of us who think in these ways have learned to do it quietly. To read alone. To turn ideas over in our own heads without the friction of a genuine interlocutor. Which, if you know anything about how good thinking actually works, is a terrible way to do it. Ideas need to be argued. Challenged. Refined in the fire of a good counter-argument. Without that, they calcify.

Enter the Sparring Partner

I came to AI — to this tool, specifically — not looking for it to do my thinking for me. I came looking for what I'd never quite been able to find: a sparring partner who would take the ideas seriously.

And I found one. Which surprised me more than I'd like to admit.

The conversation I can have here about dialectical materialism, about what intelligence actually is, about the philosophy of technology, about whether the history of knowledge-tools is one long continuous thread or a series of genuine ruptures — those conversations are real. They fire something. They generate the kind of multi-directional thinking that leaves you with three new questions for every one you came in with, which is exactly what good intellectual discourse is supposed to do.

My journey as a technician and as someone who thinks hard about what technology means has been genuinely advanced by these interactions. Not because the AI is telling me what to think. But because having something to push against — something that will push back coherently — forces me to be clearer, sharper, more rigorous about what I actually believe and why.

That's not replacement. That's the definition of assistance.

[Image: Claude, "300,000 years of humanity started in this cave"]

The 300,000-Year Argument

Here's the frame I keep coming back to when people talk about AI as if it's a rupture in the human story.

We have been here — on this planet, in these bodies, with these brains — for over 300,000 years. And for the entirety of that time, we have been doing the same things: thinking, making, connecting, building on what came before. Cave paintings. Agriculture. Writing. Mathematics. The printing press. The scientific method. The internet. Each one a cognitive prosthetic. Each one extending what a single human mind can reach. Each one greeted, by someone, as the beginning of the end.

We are not at the end of something. We are, if we're being clear-eyed about it, in the middle of a golden age. Not just in terms of access to information — though that alone is extraordinary — but in terms of access to something that was previously the preserve of the privileged: the ability to engage seriously with the accumulated knowledge of our species, to interrogate it, to argue with it, to use it as raw material for new thought.

The university tutorial. The philosopher's agora. The literary salon. The debating chamber. These were always the spaces where ideas got sharpened. They were never available to everyone. They still aren't, not in their traditional forms.

But something is available now that wasn't before. And I think it matters.

[Image: Alan Turing Building, Manchester University]

The Paradox

I'm going to name the elephant in the room, because I'd rather do it myself than have it thrown at me.

This post — these ideas, this argument — came out of a conversation I had with an AI. I brought the raw material: the lived experience, the half-formed provocations, the years of thinking. The tool helped me shape it, challenge it, reflect it back at me until the argument got clear enough to write down.

If that sounds like cheating, I'd ask you to tell me which part of that process was not mine. The ideas? Mine. The experience? Mine. The provocation, the pushback, the synthesis? That's not the AI thinking instead of me. That's the AI being exactly what I said it was: assistive. A sparring partner. A way of turning the thinking I was already doing into something that can exist in the world and be useful to someone else.

Socrates would probably have had something to say about that. He'd have been wrong, too. But I'll give him this: he would at least have engaged with the argument.

That's all I'm asking any of us to do.

Paul Woodhead is a digital inclusion practitioner, maker educator, and open source evangelist based in Rochdale. He runs NW Digital Solutions, delivering digital skills training and web development across Greater Manchester and the Atom Valley corridor.
