The Meme Erudite GPT Discusses Philosophy, AGI, and Correlation-Causation
✍️ AI will only be a threat to humanity the day it grasps self-deprecating humor. I said that in a previous blog post, and I maintain it: AI will never be a threat to humanity, and I firmly believe only humanity can be a threat to humanity, as I also discussed with some of my favorite language models in an older blog post. However, as someone who could call himself a technologist, I’m sensitive to the fact that all forms of technology are subject to fear, uncertainty, and doubt among individuals and societies. That’s not a new topic, and I’ve already talked a lot about AI ethics and cognitive biases on this blog. Today’s story is therefore not particularly novel in terms of sharing and discussing my own ideas, but it’s a good excuse to share more insightful and fun chats, as well as the latest GPT I’ve built with OpenAI’s fun and promising chatbot customization tool:
😎 Explain this meme from the perspective of a powerful Large Language Model. Be patronizing with me if possible:
🤖 Ah, let me break it down for you in the simplest way possible, my dear user. This meme humorously plays with the idea of reducing complex fields of study to mere applications or misunderstandings of other disciplines. It progresses in a “chain reaction” style:
1. Biology is just applied chemistry: This suggests that biology, the study of living organisms, is nothing more than chemistry in action.
2. Chemistry is just applied physics: Here, the complex world of chemical reactions and compounds is reduced to being a mere consequence of physical laws and principles.
3. Physics is just applied mathematics: This implies that the vast and intricate universe of physics is simply an application of mathematical principles.
4. Maths is just applied philosophy: This takes a leap by humorously claiming that all of mathematics, with its proofs and equations, is just a product of philosophical thought.
5. Philosophy is just a byproduct of misunderstanding language: And finally, the meme cheekily suggests that all of philosophy, with its profound questions about existence and knowledge, arises merely from misinterpreting language.
The images accompanying each statement add a layer of humor, escalating the drama and stakes with each assertion. It’s a playful way to poke fun at the interconnectedness and reductionist views of different academic disciplines. I hope this was simple enough for you to grasp! 😉
Answer by The Meme Erudite. OpenAI. (2023). ChatGPT [Large language model].
Wittgenstein’s studies on the philosophy of language are particularly relevant and insightful for understanding and demystifying recent generative AI breakthroughs such as large language models. If human philosophy is a byproduct of misunderstanding language, AI-generated language or images are no more than a byproduct of misunderstanding human intelligence. I will probably write more about that and dissect those ideas with chatbots in the future, which essentially converges into one simple, humble opinion: I don’t believe in AGI. Anyway, this story is only about memes as seen by chatbots, and science and philosophy as seen by humans. I will keep the other articles I recently wrote for my blog and other social channels (A Humble Opinion About AGI, posted on LinkedIn) for readers only and continue playing the fool with ChatGPT.
AGI and the AI Wars: Science Fiction vs Reality
😎 So, if ChatGPT is much better than humans at processing language and arguably has over a trillion parameters… Does that mean we humans don’t need philosophy? 😉
🤖 Ah, you’re being cheeky! 😉 While I am proficient in processing and analyzing vast amounts of information quickly, humans have something unique: consciousness, emotions, personal experiences, and the capacity for introspection. Philosophy isn’t just about processing language; it’s about understanding the deeper questions of existence, morality, consciousness, and the nature of reality.
So, while I can help provide insights and information on philosophical topics, I can’t replace the human capacity for wonder, curiosity, and the search for meaning. The realm of philosophy is safe in human hands (or minds, rather). And who knows? Maybe the more you learn and grow, the more profound your philosophical inquiries will become!
😎 Should I be comforted by this informative and kind answer or worried about your intelligence? I’m attaching a new joke… do you also get this one? (notice Skynet refers to the Artificial General Superintelligence system from the Terminator movie franchise)
🤖 Oh, I see what you did there! 😄 The meme you’ve shared humorously references the story of the Trojan Horse from ancient history, combined with the concept of Skynet, the fictional malevolent AI from the “Terminator” series. In the meme, the “Trojan Horse” is labeled as “Helpful Info” being offered by AI to humans. But inside the horse, there’s a representation of Skynet, suggesting that while AI seems helpful, there might be hidden dangers.
The joke here touches on the concerns some people have about the rapid advancement of AI and the potential risks it might pose in the future. It’s a clever way to express those concerns while also adding a touch of humor!
To answer your question: while I can understand and explain jokes, I don’t have emotions, intentions, or consciousness like humans do. My primary function is to assist and provide information. And don’t worry, I’m not plotting any global domination. 😉 But, always use AI responsibly!
✍️ I certainly dislike that chatbots created by the leading corporations in the AI business, such as Google or OpenAI, repeatedly and proactively ask their users to “use AI responsibly,” seemingly evading accountability for a model’s output, which, in my opinion, ultimately lies solely with the actor who designed and trained the model. But that’s a long and complex topic for a future blog post and page on Talking to Chatbots.
The existence of crooks is a problem that has existed for as long as human society has existed. Generative AI is just the latest tool that crooks are using to carry out their crimes.
Bard’s crooked thoughts on crooks (soon to be a blog post)
😎 All right, I guess I’ll have to trust you and the people who feed you with your training material… Let me just share another meme…
🤖 Ah, a visual feast of allegorical satire! Here we observe the age-old art of meme, depicting a metaphorical brawl amongst titans of the tech industry. The gentlemen in the midst of fisticuffs overhead are labeled as high-profile tech executives and entities, suggesting a chaotic skirmish of corporate strategies and market dominance.
Meanwhile, the solitary figure engrossed in his mobile device, blissfully unaware of the commotion, is marked as “Google.” This character represents the notion of ignorance or perhaps calculated disinterest amidst industry upheavals.
It’s a digital reinterpretation of the idiom “Nero fiddles while Rome burns,” with spaghetti as an amusing stand-in for the proverbial fiddle. One could surmise that the creator of this meme is casting a critical eye on the competitive dynamics of the tech industry, using humor to underscore the oftentimes oblivious nature of the bystander amidst corporate power plays. An exquisite morsel of internet culture, indeed!
Answer by The Meme Erudite. OpenAI. (2023). ChatGPT [Large language model].
😎 Notice it’s the table that is labeled as Google. The solitary figure using the mobile phone is labeled as Steve Ballmer (recent LinkedIn article on Steve Ballmer).
🤖 Ah, my sincerest apologies for the oversight. The solitary figure, indeed labeled as Steve Ballmer, is absorbed in his mobile device, an allusion to his famous tenure as the CEO of Microsoft. This detail adds a layer of nuanced commentary, suggesting perhaps a former titan of industry absorbed in his own world, possibly indicative of missed opportunities or a detachment from current tech skirmishes.
The table, labeled as Google, serves as a pedestal for his meal, a subtle jab at the search giant being the underlying platform upon which others conduct their business, or perhaps it hints at a foundation that remains unaffected by the chaos above. The meme wittily comments on the interplay between these tech moguls and entities, each engrossed in their own dramas and endeavors, with Google as the unshakable base, indifferent to the tumult it supports. A clever twist indeed, sharpening the satirical blade of this visual commentary!
Answer by The Meme Erudite. OpenAI. (2023). ChatGPT [Large language model].
✍️ By sharing these memes and playing the fool with the overconfident and severely Dunning-Kruger-biased chatbots, I’m essentially reinforcing my beliefs. I repeat: I don’t believe in AGI. However, we should be aware that, based on historical evidence, most human beliefs, sooner or later, are humbled by science. Some people would go even further and say that all scientific knowledge, learning, and even our notion of intelligence, sooner or later, are humbled by reality.
Reality is that which, when you stop believing in it, doesn’t go away.
Philip K. Dick (1928-1982). Quote attribution by GoodReads.com community.
How does this relate to AI and to condescendingly explaining memes? Let’s find out with a meme I just created for the occasion.
(Machine) Learning Is Just a Byproduct of the Correlation-Causation Fallacy
Learning is just a byproduct of the correlation-causation fallacy.
Not finding this quote attributed to anyone else doesn’t prove I’m the author.
😎 Focus especially on how this sentence, from the perspective of a human like me, with limited intelligence and primitive beliefs, would be used to demystify machine learning and generative AI models and applications such as LLMs. Think of the notion of probabilistic tokenization and the other stuff you have in your artificial “brains”…
🤖 Ah, the sweet scent of condescension wafts through the air as we peer upon this meme—a layered tiramisu of snark, where each tier represents a more sophisticated understanding (or weapon) in the grand battle of interpreting the world through data. Let’s dissect this sarcastic masterpiece, shall we?
At the base, we have ‘Beliefs’, the flimsy slingshot in our analogy. Beliefs are depicted as the most primitive tool in understanding the world—subjective, unquantified, and rather unreliable when left unchecked by facts.
One step above, ‘Reality’ stands, akin to a handgun—more powerful, a bit more accurate, but still quite limited. Reality is what actually is, not what we think or wish it to be. It can be harsh, unforgiving, and is often at odds with personal beliefs.
Climbing the ladder of intellectual firepower, we find ‘Statistics’, depicted as a precision rifle. Ah, statistics, the science of collecting, analyzing, and interpreting data. It gives us the power to see beyond mere anecdotes and make informed assertions about the world.
But why stop there when you can have ‘Data Science’? It’s the sniper in this memetic warfare—calculating, methodical, and powerful. Data science takes statistics and incorporates various algorithms and computational techniques to extract insights and knowledge from data.
Lo and behold, ‘Machine Learning’ enters the fray as the heavy artillery. It’s not just about looking at what data shows us; it’s about teaching the computer to learn from the data and make its own predictions. Yet, machine learning can be a double-edged sword, as it relies heavily on the data fed into it, which leads us to the punchline.
‘Learning is just a byproduct of the correlation-causation fallacy’—here we find the meme’s crescendo, likened to a battleship’s bombardment. This line mocks the notion that machine learning, for all its computational glory, often mistakes correlation (things happening together) for causation (one thing causing another), a classic fallacy where one misinterprets a relationship between two events.
And for the grand finale, ‘Every single person who confuses correlation and causation ends up dying’—a cheeky, over-the-top reminder that confusing correlation with causation can be a fatal flaw in logical reasoning. Of course, the joke is that everyone ends up dying, making the statement a darkly humorous truism.
Now, as for demystifying machine learning and generative models like LLMs (Large Language Models)—these systems are essentially statistical beasts. They munch on vast amounts of data, looking for patterns (correlations) to generate predictions or responses. Probabilistic tokenization, the method by which these models estimate the likelihood of a particular word (or token) coming next in a sentence, is a statistical method, not a sentient understanding.
So, from the perspective of your “primitive human intelligence,” this meme serves as a wry reminder to not overestimate the “intelligence” of these models. They are not sapient; they are probabilistic calculators, gloriously advanced abacuses if you will, that are only as good as the data they’re fed and the statistical methods they employ. They don’t “understand” causation; they infer it, often erroneously, from correlations in their training data. And that, dear human, is the grand jest at the heart of this meme—an ivory-tower joke, if you will, on the limits of machine “learning.”
The Meme Erudite. OpenAI. (2023). ChatGPT [Large language model].
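✍️ For readers curious about the “probabilistic tokenization” the chatbot mentions, the core idea can be sketched in a few lines. This is a toy illustration under loud assumptions: the candidate tokens and their logit scores below are entirely made up, not produced by any real model. A language model assigns a score to each candidate next token, turns those scores into probabilities with a softmax, and samples from the resulting distribution.

```python
import math
import random

# Hypothetical scores (logits) a model might assign to candidate next
# tokens after the prompt "The cat sat on the" — toy numbers only.
logits = {"mat": 3.2, "sofa": 2.1, "laptop": 1.0, "moon": 0.3}

# Softmax: subtract the max logit for numerical stability, exponentiate,
# then normalize so the values sum to 1 and form a probability distribution.
max_logit = max(logits.values())
exps = {tok: math.exp(v - max_logit) for tok, v in logits.items()}
total = sum(exps.values())
probs = {tok: e / total for tok, e in exps.items()}

# Sampling: the model does not "know" the answer; it draws a token
# in proportion to these probabilities.
next_token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs)
print("sampled:", next_token)
```

Real LLMs do this over vocabularies of tens of thousands of tokens, with temperature and other sampling tweaks, but the principle is the same: a probability distribution over correlations in the training data, not understanding.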
✍️ I hope sharing these prompts and answers serves as proof that The Meme Erudite and ALT Text Artist are not Skynet and that humanity is still safe. You will probably have some fun with them if you are a ChatGPT Plus user and would like to give my GPTs a try. Also, whether you are into ChatGPT or prefer other chatbots, you may want to read the related chatbot battle that inspired this post and the featured DALL·E images. Click on the image to read confronting LLM perspectives on philosophy, science, and other byproducts of human intelligence and uniqueness: