How to Prepare for the Future of Knowledge Work

Large Language Models are here to stay, and they might change everything. How can we prepare for the future of knowledge work?

Into the unknown

In this article, I want to discuss the ways in which I think knowledge work will evolve in the coming years, based on the current and upcoming evolutions of AI. More importantly, I want to share some ideas about how to prepare yourself to ride that wave and stay on top of it.

Disclaimer: I'm not an AI expert, so please, if you are, bear with me ;-)

Introduction

With the increasing popularity of Large Language Models (LLMs) such as ChatGPT, Google Bard, Meta's LLaMA, Bing Chat, Claude, etc., knowledge work is bound to change dramatically in the coming months and years. Without a doubt, millions of jobs will be impacted. These models have the potential to completely change the way we approach, think about, and solve problems. As a matter of fact, the impact can already be felt!

Let's explore the "why", the "how", and ways to best prepare for the future of knowledge work.

A simple way to think about LLMs

Before going any further, let's create a shared mental model for Large Language Models (LLMs). This will be a really superficial and incorrect/incomplete one, but it should be just about enough for our discussion.

Large Language Models are AI neural networks with billions of parameters/weights, trained on mind-boggling quantities of text. They're basically "engines" trained on the largest collections of text their authors could collect/acquire. During training, those engines "learn" associations between words and, by extension, ideas (sort of). Internally, those associations are represented in a way that even the creators of those models don't fully comprehend. What matters is that those associations enable Large Language Models to "make sense" of the text they are given (again, sort of), and to generate natural language text in response. The relationships/associations that Large Language Models make between words/sentences/paragraphs are mostly statistical in nature. They associate probabilities with sequences of words and have a probabilistic view of what should come next.

In addition, LLMs are improved using huge amounts of human feedback. While training LLMs, their creators feed them specific prompts, generate two different answers, and ask a human to choose the better version. That human feedback is then used to make the LLMs' answers better and better over time. That process is called Reinforcement Learning from Human Feedback (RLHF).
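To make the RLHF idea a bit more concrete, here is a minimal, purely illustrative sketch of the pairwise-preference step: a reward model scores two candidate answers, and the training loss rewards it for ranking the human-preferred answer above the rejected one. This is my own simplification in PyTorch, not the exact recipe any particular vendor uses.

```python
# Conceptual sketch of the pairwise-preference step of RLHF (not a full pipeline).
# A reward model scores two candidate answers; the loss pushes the score of the
# answer the human preferred above the score of the rejected one.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style loss: -log(sigmoid(r_chosen - r_rejected))
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy scores a reward model might produce for a batch of (chosen, rejected) answer pairs
r_chosen = torch.tensor([1.2, 0.4, 2.0])
r_rejected = torch.tensor([0.3, 0.9, 1.1])
print(preference_loss(r_chosen, r_rejected))  # lower when preferred answers score higher
```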

The above description is somewhat accurate (if you're a data scientist, please forgive me :p), but it needs further simplification. To me, the simplest way to explain what LLMs do is as follows: given a sequence of words, LLMs determine the most probable sequence of words that should follow. If you give the sentence "My tailor is" to an LLM that has been trained on texts found on the Web, its most likely answer should be "rich". LLMs associate prompts (i.e., "questions") and answers through probabilities. That being said, modern LLMs go further than this. They take the context (i.e., the prompt) into account and assign different weights to different parts of that context, leading to much better and more relevant answers.
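If you want to see this "most probable next word" idea in action, here's a small sketch using the freely available GPT-2 model via the Hugging Face transformers library. GPT-2 is a much smaller and older model than the ones behind ChatGPT, used here purely for illustration; it won't necessarily answer "rich".

```python
# A minimal sketch of "what comes next": the model assigns a probability to every
# possible next token, and we print the five most likely ones.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "My tailor is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocabulary_size)

# Probabilities over the vocabulary for the *next* token
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>12}  {prob.item():.3f}")
```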

The why is pretty logical. Having "read" billions of lines of text, LLMs have "observed" billions of associations of words. Some come up way more often than others. Those associations give the AI a very solid idea about what is most likely to come next, given any input. This ability to link series of words with probabilities about what should come next is what enables LLMs to generate responses that are well constructed (from a linguistics point of view!) and that seem to make sense. "Seem" being the key word here!

Current LLMs are powerful, but unreliable

A word of warning before we explore some use cases and ideas about how to leverage LLMs in today's and tomorrow's world.

As we've seen, Large Language Models are (huge) probabilistic "machines". They associate the text they receive with the text that is most likely to appear next, based on what they've "seen" before (i.e., their training and the resulting parameters/weights). This doesn't mean that LLMs have a profound understanding of anything. All they "know" is the relationship between words and sentences. They don't really know anything about the ideas/concepts/principles behind the text they receive or generate. Of course, there is a relationship between those probabilities and the underlying ideas, but that layer of meaning is mostly inaccessible to the current generation of LLMs. That's why LLMs are comically bad at logic and math (among other things!). Yes, LLMs can generate text that looks serious/correct, but that doesn't mean the output really makes sense or is valid. If you don't believe me, just ask GPT-4 to sort a list, to count items, or to repeat a letter n times and then count how many there are. Always be wary, and use LLMs for what they are good at.
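The flip side is that this kind of claim costs nothing to verify with ordinary code, which is a good habit to build whenever an LLM hands you anything countable or sortable. A trivial, self-evident Python example:

```python
# LLM answers about counting or sorting are cheap to double-check with plain code.
text = "bookkeeper"
print(text.count("e"))   # 3 — whatever a chat model might claim

numbers = [42, 7, 19, 3]
print(sorted(numbers))   # [3, 7, 19, 42]
```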

Moreover, LLMs are fed with a mix of real, fake, valid, unbiased, and biased information. They're a mirror of the Web and of our society. People write all sorts of things. Facts, fake news, subjective or objective articles, fiction, non-fiction, jokes, etc. LLMs are thus no better than we are as humans and as a society. They're as biased and as (un)reliable as we are. In fact, given that they don't actually understand ideas and "simply" establish probabilistic relationships between words/sentences, they basically make things up all the time. We're just lucky that they are being fed more "useful" information than garbage (in part thanks to RLHF, the human feedback loop). There's no silver bullet. It's the usual: "Garbage in, garbage out".

That's why the output of LLMs must always be taken with a grain of salt, depending on what you are using it for. If you're writing fiction, then you mostly don't care (unless you want to root your stories in our actual reality/timeline). But for serious and non-fiction work, you will always need to double-check everything LLMs generate for you. You can't blindly trust the output of an LLM, no matter what data set was used to train it. Bias and hallucinations are there and will probably remain.

Last but not least, LLMs live in the past, way more than we humans generally do. LLMs literally take months to train, and by the time they "go live", their training data is outdated.

Now that you have a sense of what LLMs can be good or even great at, as well as what they're certainly not good for, let's move on.

LLMs and the pace of innovation

Albeit far from perfect, LLMs have incredible potential to profoundly change our society. Even when they're only roughly right, they can still provide huge improvements over the status quo. As a child, I witnessed the rise of the World Wide Web and how it changed the world. I also witnessed the evolutions brought by smartphones. And now, the next wave is upon us. And it might be much bigger still!

First, by providing a natural language interface and being able to "interpret" what we ask, LLMs offer a wonderfully simple and powerful user experience. Natural language is, obviously, the interface that feels most intuitive to us. And because LLMs generate natural language as well, their output is immediately useful and usable in various contexts.

Thanks to their ease of use, LLMs are accessible and usable by a huge number of humans. Moreover, adding more input and output modalities (e.g., Text-To-Speech and Speech-To-Text) will vastly increase their accessibility, as it will lower the barrier to entry for people who cannot or struggle to read or write.

The fact that so many people can already use LLMs means that the "AI revolution" will happen very quickly. This is not the same story as the innovations that the Internet and the Web enabled. Those took a lot of time to happen because the general population needed time to grasp their potential, and because software developers were needed to make things happen. This time around, people all around the planet are already changing their processes/workflows/habits. Software developers will also be able to quickly enhance existing and upcoming systems to leverage the power of LLMs. Companies around the world are already exploring ways in which they can leverage this technology.

Let's look at some use cases together...

Use cases for LLMs

Since I started using and relying on LLMs a few months ago, I've kept discovering new ways to leverage their power to improve my workflows. There are countless use cases, many of which are yet to be discovered, and creativity is key to finding more of them across many industries. LLMs can help us identify patterns and make sense of vast amounts of data, and that capability will keep increasing as they gain in maturity.

They will simplify everything revolving around natural language processing (e.g., ideating, summarizing, explaining, learning, translating, generating text, autocompleting words and sentences, writing articles/essays/books, copywriting, editing, etc). Tech giants and many tools already include support for these. This alone will impact day-to-day work for many knowledge workers all around the planet. But this will also heavily impact education.

Personal, domain-specific and industry-specific LLMs trained on specific data sets/data sources (e.g., medical research, legal texts, personal data, etc.) will also arise. Those will enable the creation of specialized "agents" (i.e., sorts of robots) able to provide insights/advice/guidance that currently require experts. That's not to say that specialists won't be necessary in the future; on the contrary. But this will certainly lower costs, enable more people to benefit from insights previously inaccessible to them, and help them make better/more informed decisions. LLMs trained on personal data could yield a landslide of innovation. Imagine having your very own Jarvis: a personal AI coach. AIs that know what you know, know what you want, know what you need, and can help you achieve your goals and get where you want in life.

LLMs can also help us automate tasks, and reduce/eliminate boring or tedious work. They can help us better research/document/explain/search, etc. It will certainly become easier and easier to create complex workflows that impact the real world and remove the need for human intervention.
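As a concrete (and deliberately simple) illustration of what such automation can look like today, here's a sketch of a summarization step that could sit inside a larger workflow. It assumes the v0.x OpenAI Python client; API details and model names evolve quickly, so treat it as illustrative rather than definitive.

```python
# A minimal sketch of wiring an LLM into a workflow: summarizing a block of text.
# Assumes the v0.x OpenAI Python client (openai.ChatCompletion); newer client
# versions expose a different interface.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize(text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You summarize text in three bullet points."},
            {"role": "user", "content": text},
        ],
        temperature=0.2,  # keep the output focused rather than creative
    )
    return response.choices[0].message.content

print(summarize("Large Language Models are neural networks trained on huge text corpora..."))
```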

Knowledge work is going to change. It's going to change a lot, and very quickly. That's why it's key to anticipate what can be, and to prepare adequately.

As a software engineer, I can already see how my main field of work is evolving. What required a lot of know-how and expertise yesterday (i.e., what was very valuable) will become more and more of a commodity. I can already rely on LLMs to perform many tasks that were previously tedious, such as generating utility functions, calling APIs, converting between data interchange formats, generating tests, triaging issues, etc. Current LLMs like ChatGPT have been trained on vast amounts of publicly available code from platforms like GitHub, so it's no wonder that they "understand" code 😂. The results are far from perfect, but as I've mentioned before, it's still a huge gain in productivity. This transformation of the "value chain" of software development means that many more people will be able to code in the future, which is really awesome. This evolution might somewhat close the gap between NoCode and software development for people with less experience/expertise.
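To make that concrete: the snippet below is the kind of small, boring utility (converting CSV to JSON) that I now happily let an LLM draft, together with the quick test I write, or ask it to write, before trusting the result. The helper and its test are hypothetical examples of mine, not output from any particular model.

```python
# Whatever an LLM generates, keep the habit of pinning it down with a quick test.
import csv
import io
import json

def csv_to_json(csv_text: str) -> str:
    # Parse CSV text (first row = headers) and return it as pretty-printed JSON.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

def test_csv_to_json():
    result = json.loads(csv_to_json("name,age\nAda,36\nAlan,41"))
    assert result == [{"name": "Ada", "age": "36"}, {"name": "Alan", "age": "41"}]

test_csv_to_json()
print("ok")
```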

Overall, the rise of Large Language Models presents both opportunities and challenges for knowledge work. As with any new technology, it will require careful consideration to ensure it is used effectively and responsibly. We will soon all be able to chat with our documents, ask AIs questions about articles and research papers, discuss with famous authors (including ones that are long gone), etc. Finding information, exploring, and connecting & sharing ideas will become much easier. Basically, everything that revolves around natural language is going to accelerate.

And natural language is really only the first step. Audio and video are coming as well, just like image generation came with technology like DALL-E, Stable Diffusion or MidJourney. Interactions with various industries will evolve. Call centers will be transformed/replaced, tourism will evolve, and so will shopping, travel, finance, home automation, transportation, etc.

Larger-scale transformations and more advanced use cases will take more time to become a reality, but they will probably happen. OpenAI has recently introduced plugins for ChatGPT. As the "plugins marketplace" grows, so will the capabilities of that LLM. Others will follow suit, and maybe there'll be an "AI plugin war" (like the browser extension and app store ones). That trajectory, combined with the introduction of additional modalities, could bring LLMs literally everywhere. LLMs from various companies/vendors will communicate with each other, with natural language as a transport mechanism. Machine-to-Machine communication will gain a whole new meaning.

Of course there are major (and justified!) privacy concerns, but I bet it won't take long before AIs answer the phone for us, call us, listen to us through our phones, at home, at work, etc. Alexa, Siri and the like are done for. They're bound to be replaced in a jiffy.

Some might say that LLMs are overhyped, just like chatbots were a few years ago. But nobody really knows how things will unfold. It's actually hard to anticipate all the ways in which LLMs might impact our society, especially if we think about all the possible second-order effects (e.g., what happens when the training data of the next generation of LLMs includes LLM-generated content?). We need to be prepared. And there's not much time to do so!

Challenges for us, poor humans

LLMs present huge opportunities, but also big challenges for us, mere mortals. We humans need to simplify everything. We are computationally limited. Our brains can only process so much information at a time. When we look at the world, our brains take shortcuts all the time. That's the only way for us to cope with reality. There's just too much information out there. LLMs don't have the same limitations. They can generate answers faster than we can express the questions. They have access to vastly more information than we do.

The first challenge for us in the future will be to even realize when we are faced with AI-generated content. Along with that, we will need to be far more wary about the information we are presented with. Fake news was a joke compared to what's coming next. Remaining sane will require a lot of mental energy and logical, critical thinking. But we will surely be fooled, more and more as the technology progresses, just as we no longer notice special effects in movies.

Another challenge will be to adapt to the changing environment, as the change will be very swift. Jobs that exist today might disappear tomorrow.

The scary part is that the bad guys are also going to have a lot of fun. Phishing campaigns and AI-driven scams are going to do a lot of damage to companies and individuals across the globe. Prompt injection, "cross-site prompt forgery" and similar attacks will surely arise in the coming years.

On the bright side, it's important to realize that these transformations present incredible opportunities. Our biggest advantage (for now) is that we can actually think while machines still can't. So, what should we do to prepare?

Preparing for a world "driven" by LLMs

My first piece of advice when it comes to change in general, is to embrace it. There's no point in fighting against progress. My dad never wanted to hear about computers and the Internet. He managed to live his life without adapting to that change, but he missed many opportunities.

My second piece of advice is that you should take time to learn more about AI and LLMs. Right now. It does matter. When a technology is bound to become so important in our daily lives, we need to understand it. Not necessarily in depth, but enough to be able to grasp what it is, how it works and what makes it tick. Just like we all learned to use smartphones, app stores and apps, we now need to understand LLMs and the ways in which they will impact our society.

Given that natural language is really at the core of LLMs, I strongly believe that writing skills are of the utmost importance. In a world driven by LLMs, the ability to express ourselves clearly will provide a big advantage. Those who struggle to write will struggle to control LLMs. So, my third piece of advice is to invest in your writing skills. The sooner you do, the better.

Combined with writing skills, prompt engineering is clearly a key skill to acquire and master. There are now tons of online courses about this topic, and there will be more and more, and for good reason. Prompts are how we ask LLMs questions. The better the prompts, the better the results. That's why we all need to better understand how to write good prompts and how to construct useful contexts for LLMs to operate on. Start collecting good prompts; those will serve you well.
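To give a feel for what "collecting good prompts" can look like in practice, here's a small, purely illustrative template of my own: an explicit role, some context, a clear task, and output constraints. The wording is an assumption on my part, not a canonical recipe.

```python
# A reusable prompt template worth keeping around: role, context, task, constraints.
PROMPT_TEMPLATE = """You are an experienced {role}.

Context:
{context}

Task: {task}

Constraints:
- Answer in at most {max_words} words.
- If you are unsure, say so explicitly instead of guessing.
"""

prompt = PROMPT_TEMPLATE.format(
    role="technical editor",
    context="A draft blog post about preparing for the future of knowledge work.",
    task="List the three weakest arguments in the draft and suggest improvements.",
    max_words=200,
)
print(prompt)
```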

I also strongly believe that Personal Knowledge Management (PKM) is going to become more and more relevant and important. Both personally and professionally. In a world where there's just too much information and where it becomes truly hard to distinguish between what is real and what isn't, having a safe place where you store what you know and trust does matter. PKM is not only about capturing information from the outside. It's also about helping you filter through the noise, identify what matters, think more clearly and explore. Explore what you find, but also what you think. If you're curious about PKM, then please check out my other posts on this subject.

Another point I have in mind is that generalists will, I think, be better prepared for what's coming. Of course, deep expertise in specific domains will still be valuable, but experts will benefit much less from LLMs than generalists will. Where an expert might gain a few percent in their single domain thanks to LLMs, a generalist can multiply smaller gains across every domain they touch. The numbers are made up, but the idea holds: LLMs will be much more valuable in the hands of someone who is knowledgeable about various domains/topics.

One more recommendation I have is to follow the news about AI. Whenever I want to learn about something new, my first step is to find relevant websites/RSS feeds to track. Exposing yourself to this domain will help you quickly learn more about what's happening, who does what, and what matters. You'll discover ideas, concepts, etc. The more you learn, the better off you'll be.

Last but not least, practice. Exposing yourself to information is not enough. You need to act. Learn to use LLMs. Use those regularly. Find ways to leverage their power to ease your life. Don't focus on a single one. Try different ones. Notice their differences and similarities. Find what each is better at. Ask them all the same questions and see which one helps you the most.

Conclusion

In this article, I've shared some ideas about LLMs and how they might impact all of us, as well as a few ways you can prepare to ride the wave instead of being left behind.

Whether LLMs are overhyped or not, they're here to stay, so we'll all be better off if we learn to take advantage of those early on. There are many opportunities to seize, many yet to be discovered.

That's it for today! ✨