Is AI a useful tool or a threat to our humanness?
I recently read two books about AI that offer differing but equally interesting perspectives.
- Co-Intelligence: Living and Working with AI by Ethan Mollick
- More than Words: How to Think About Writing in the Age of AI by John Warner
One take is more optimistic, while the other is more pessimistic - but both offer viewpoints that are worth considering.
Here are some of my biggest takeaways from both.
Co-Intelligence by Ethan Mollick
Ethan Mollick is a professor at the Wharton School whose work focuses on entrepreneurship, innovation, and AI. This 2024 book isn't technical, but he does a good job of detailing what current LLMs are and aren't useful for. He also publishes a regular Substack newsletter about AI (a recent post compared the pros and cons of the most popular LLM models).
A key concept that Ethan Mollick introduces is his "4 Rules of AI":
- Always invite AI to the table - experiment; understand its abilities, nuances, and limitations; use it as an assistive tool, not a crutch
- Be the human in the loop - always check the work and be aware of potential hallucinations and misstatements ("AIs can focus on 'making you happy' rather than 'being accurate'")
- Treat AI like a person (but tell it what kind of person it is) - learn to give AI guidance and direction so it generates output that matches your expectations
- Assume this is the worst AI you will ever use - AI solutions are developing so rapidly that today's limitations are unlikely to last
These are good guidelines for treating AI tools as what his title suggests: a "Co-Intelligence" that works with you. Mollick is optimistic about the future of AI while acknowledging its limitations and weaknesses.
Key quote:
To get the AI to do unique things, you need to understand parts of the culture more deeply than everyone else using the same AI systems. So now, in many ways, humanities majors can produce some of the most interesting "code." Writers are often the best at prompting AI for written material because they are skilled at describing the effects they want prose to create.
I am curious to see how that particular idea will play out.
More than Words: How to Think About Writing in the Age of AI by John Warner
John Warner is a writer and writing teacher. His perspective, focused particularly on the role of AI in creating text, is much more pessimistic, surveying the real-world impact he sees on the process and craft of writing. He argues that LLMs are really just advanced automation, not any kind of intelligence.
Warner focuses several chapters on the potential impact of AI in education, and here, especially, I think his concerns and criticisms are well-founded. A machine can help us learn, but it can’t learn for us.
He has his own framework for dealing with AI:
- Resist the pull of efficiency and the accompanying urge to turn over inherently human activities (like writing) to AI.
- Renew your sense of taste in the content you consume. Taste, yours and others', is a uniquely human trait, grown out of lived experience, and not something to be outsourced. AI, by contrast, gravitates toward the "average" or the most "common."
- Explore - Warner acknowledges that AI in some form is here to stay, and it is therefore key to continue to develop an understanding of it. He advocates finding good “guides,” people who can use their expertise to shed light on both the good and the bad of AI.
Key quote:
… writing is a fully embodied _experience_. When we do it, we are thinking and feeling. We are bringing our unique intelligences to the table and attempting to demonstrate them to the world, even when our intelligences don’t seem too intelligent.
ChatGPT is the opposite, a literal averaging of intelligences, a featureless landscape of pattern-derived text.
My Take
Too often in tech we get caught up in the utopian vision of any new technology. Getting different perspectives on how a transformative technology like AI is impacting our culture and our world, both positively and negatively, is essential.
I find a lot of value in using AI as a “co-intelligence”: as a research assistant, a brainstorming tool, and a way to find better information more quickly. I often rely on it heavily throughout the course of a workday.
I’ll use it for feedback on a piece of writing, but I generally don’t like what gets generated whole cloth based on a prompt. For me, writing is, as Warner states, critical to my “thinking.” My ideas about something tend to end up in unexpected places as I work through the process of writing about it.
For now, I’ll continue to embrace it and “explore” its capabilities, using it as a tool where I find it useful, while trying to stay informed about AI's challenges and shortcomings.
Regular newsletters and other resources I'm currently finding useful:
- Ethan Mollick's One Useful Thing
- Every.to
- Stratechery by Ben Thompson
- Marc Watkins' Rhetorica
- Cal Newport's Deep Questions Podcast and other content (while generally focused on "productivity," I always find his AI deep-dive segments very insightful - e.g., "AI and Work (some predictions)")
In what ways are you optimistic and/or pessimistic about AI?
What resources do you count on to learn more about AI and to stay on top of the latest?
Bonus: Service Model by Adrian Tchaikovsky
"Service Model" is a recent Science Fiction novel by Adrien Tchaikovsky that follows a high-end valet robot who accidentally kills his master and ends up on the run in a dystopian post-apocalyptic world. The book has a dark humor throughout as Charles/Uncharles navigates increasingly bizarre situations before confronting the AI that is behind the chaos. (My favorite part is when the robot “librarians” are found to have “sorted” all their binary data in the effort to be more efficient - turning the sum of their collected human knowledge into a long string of 0’s followed by a long string of 1’s).
A fun book that nevertheless touches on the potential worst-case scenarios that both Mollick and Warner consider possible.