We’ve probably all encountered AI by now. Some large language model (LLM) AI programs are among the fastest and most comprehensive information tools on the Internet, and, arguably, the most “stupid.” Have you ever been harassed by an AI-powered telephone service whose programmer neglected to include the concept of wrong numbers? Or been fed incorrect political information by an AI program that did not know which party or Prime Minister was in power? Yet if you want to check something like medieval canon law, to ensure that a character in the novel you’re writing accurately reflects the attitudes of the times, it can take seconds with ChatGPT. Everything has to be fact-checked and sources verified, but tools like ChatGPT, Gemini, and Claude remain remarkable, and they’ll improve as the glitches are addressed.
Cultures persist because of their confidence in themselves. Common agreement binds them together, and they endure for as long as their collective understanding is based on reality. But what happens if the assumptions they make are faulty? The incongruity between the beliefs motivating their behaviour and the actual reality in which they live instigates increasing conflicts until reality asserts itself with a dispassionate shrug, and the culture experiences the discomfort of a minor reset or the trauma of a major one.
There has been a lot of discussion in the media and plenty of articles online about artificial intelligence (AI) writing tools over the last couple of months, sparked in large part by OpenAI’s release of ChatGPT on November 30, 2022. OpenAI is an AI research company based in San Francisco, California. Its latest offering, ChatGPT, is an AI tool that interacts with the user in a conversational way, meaning users can ask questions and make requests in regular, everyday English (and other languages) rather than having to use special commands.