Socrates on ChatGPT
Here’s a quote by the ancient Greek philosopher Socrates, circa 410 BC:
Their trust in [ChatGPT], produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant…
Okay, maybe it wasn’t about ChatGPT. Depending on the era and domain, it could just as easily have been interpreted as being about smartphones, YouTube, social media, Stack Overflow, Google Search, Wikipedia, the Internet, GUIs over CLIs, textbooks, encyclopedias, sources of news or propaganda, or perhaps any source of “external” knowledge that wasn’t self-taught and “earned”. In truth, Socrates said this (as recorded in Plato’s Phaedrus) about the then-controversial concept of “writing”.
Now, while I’ll dare to say he might’ve been mistaken, as “writing” turned out to be quite a good thing (citation needed), I don’t think that means the larger principle (“the appearance of wisdom, not true wisdom”) is wrong for all the other technologies and concepts to which it might be applied.
In general, I think most of the above technologies turned out to be more beneficial than critics initially feared, with the main exceptions being those whose incentive structures reward “engagement” or attention rather than accurately collecting and distributing knowledge (for example, social media sites with recommender systems that algorithmically reinforce echo chambers and extreme views). To some degree, any ad-supported site will have financial incentives that could skew the method or content of the knowledge it provides.
As for ChatGPT and other Large Language Models (LLMs), while I haven’t dived in with both feet just yet, I’m definitely dipping a toe into the water. I’m actually cautiously optimistic to see how a future plays out where every person has a “personal AI research assistant”, although I’m sure many people will continually need to relearn that the current state of the art is only “intern-level”: you always have to double-check and critically evaluate its work and answers. Maybe such mistakes will actually help teach some people to evaluate external knowledge sources more critically, although of course the moment the answers become “good enough”, people are wont to stop investing the effort to keep double-checking them.