Predictions for generative AI & LLMs
tl;dr — short term is super-powered search, templates, and autocomplete; the long term is a fundamental shift of knowledge work towards “management” instead of “execution.”
Today’s goal is to document my short and long term predictions for generative AI and LLMs.
I want to create products that make lives better. I’m listening to Fly Me to the Moon by Shoby today.
Disclaimer: the opinions stated here are my own, not those of Google or the teams and employees within the company.
I work in the space of generative AI and large language models (LLMs). We’ve seen the rapid improvements of this nascent technology from its early variants to the now-famous GPT-3/4 and the productized version in ChatGPT. There have been plenty of opinions about how trustworthy these models are; I will focus instead on predictions for where we will be 2 years and 10 years out.
What does history say?
When you think of recent radically transformative technologies, what do you think of? The internet? Social media? The smartphone? AI?
How long do you think it took each of those technologies to reach mass adoption, even just in the United States? The quick answer: probably longer than you expected.
- Internet: ~23 years to reach >80% adoption in 2016 (GeoCities first debuted in 1994).
- Social media: ~14 years to reach >80% adoption in 2017 (MySpace first debuted in 2003).
- Smartphones: ~8 years to reach >60% adoption in 2015 (iPhone debuted in 2007).
Some patterns to note:
1. Even when a technology “hits the acceleration point” in its S-curve, it can still take over half a decade to reach mass adoption.
2. The time it takes for the masses to adopt new technologies keeps getting shorter. I believe a large factor in this is new technologies compounding on their predecessors.
The internet was made more easily available via preexisting telecom networks. Social media achieved wide reach thanks to the internet. Smartphones won mass appeal among teens and adults through a powerful combination of the internet, social media, and apps. AI is made possible by the vast amounts of data streaming through the services people use via their phones/apps/networks/internet.
In the short term, genAI and LLMs will likely take shape in tools that act like super-powered templates, search, and autocomplete.
I suspect that in the next 2–5 years the unit economics of LLMs will gradually stabilize, and the cost of training and calling these models will drop to commoditized levels.
I expect these will be the years of formative regulation as the market wrestles with what these models can do, what data they use, and how data privacy and security should be handled. There will be efforts to authenticate and trace ownership of work, and possibly new revenue models that attribute and share revenue back to the sources.
That all said, in these next 2 years I expect that the way most of us end up using genAI and LLMs, at work or at home, will mostly be baked into existing tools (and some new entrants) as a means to get the job started. It’ll likely take the form of super-powered “templates,” “search,” and “autocomplete” — where genAI or LLMs get you 80% of the way there with a first stab and help you fill in some gaps as you go, but you’ll eventually need to take over and finish the job.
For example, Framer.com will let you build a site by just typing out what you want it to do. It’s an amazing way to scaffold the site and get it looking 80–90% there, but there’s a good chance you’ll end up tweaking and cleaning up the last 10% manually. ChatGPT can give you a bunch of great suggestions for travel, things to do, or how to write a love letter, but you’ll likely need to refine and double-check things afterward. This will likely happen across many industries, and we’re already seeing it in search, software development, art/media, and legal.
One thing worth calling out here is that for many industries, genAI and LLMs don’t actually change the user problems and jobs-to-be-done; they just make it easier and faster to solve for those use cases. I’d argue that this technology will not be a true market disruptor [as defined in The Innovator’s Dilemma], but rather a sustaining innovation that will become table stakes for many industries in the long run (i.e. for most markets, it will not serve or open up a whole new customer segment and become a surprise disruptor; rather, it’ll serve existing customers and become the status quo and an expected feature/service).
In the long term, genAI and LLMs will fundamentally transform how people discover, create, and work.
So how long will it take LLMs and generative AI to achieve mass adoption? I don’t think it’ll happen in 2 years, but I believe it will within a decade.
By that time, I think we will have solved enough kinks in the technology that the promise will become real: using these large models, we will be able to reliably and accurately bottle an approximation of the knowledge, expertise, and reasoning of a set of people (or a business? a job role? a community? a culture?), which we can leverage to radically change how we live our lives.
Keep in mind, behavior change is hard, and even with all the buzz today, there are a lot of folks who don’t understand what ChatGPT is or why it is meaningful. The simplest analogy I have today for how genAI and LLMs may help is to imagine you had a college intern who was working remotely and was really good at googling things for you. They’re not going to be great at everything and may get many things wrong, but they can definitely take a load off by doing the grunt work of researching, answering questions, and taking the first stab at tasks like putting together designs, documents, and wording.
This is roughly the current state of things today. But just like how a real college intern will eventually gain more experience and get better, these models will too. Eventually we’ll have models that approximate experts in certain fields, with the ability to break down problems while providing human-like creativity.
Imagine having an expert tax accountant/financial analyst, doctor, coach, business executive, marketing director, and data analyst all at your fingertips, just a button or query away. What could you do to run your business better? How could you simplify work and reduce the boring steps of gathering all the information you need to make decisions?
I expect the way we do knowledge work will fundamentally change. People will shift from the mundane “IC work” of executing every task themselves to learning “management skills” as they figure out how best to coax these tools and models into doing the job for them. In the short term these tools will be optional, quirky new ways of working that some folks try; in the long term I believe they will take over, and you will be left behind if you don’t learn how to use them to your advantage.
It’s like how at some point it became a “skill” to know how to search Google effectively to find what you want. In the same way that Google search started saving you a trip to the library, these AI tools and models will save you from digging through dozens of articles to find what you want, or from drafting multiple revisions of your documents and designs. You’ll “write less” and “review more.” Ironically, this is what happens as you move into senior management at work anyway.
What if…?
Some last parting thoughts and questions to ponder:
- What if this is the eventual answer to the company conundrum of everyone trying to get promoted and not having enough people to do the ground work?
- What if the fear people feel around these models is similar to that of the fresh waves of college graduates coming in with more skills, expertise, and world knowledge than you ever had before?
- What if… change is coming, whether we like it or not? How do we live with it and make it work for the better of all of us?