DeveloPassion's Newsletter #148 - Local Large Language Models (Local LLMs)

Edition 148 of my newsletter, discussing Knowledge Management, Knowledge Work, Zen Productivity, Personal Organization, and more!

Welcome

Another week, another newsletter! I hope that you all had a great one 🤩

Only a few weeks left before the end of 2023. What a year! This week, I've had a lot of trouble focusing. I felt tired and couldn't find much energy to get to work, which led me to procrastinate time and time again. Writing this, I realize I've felt this way for weeks. Honesty is important, and the reality is that sometimes, even the best systems fail. And there's no problem with that. It's okay to rest. I don't want to make myself feel bad for feeling tired and bored. I needed to clear my head and recharge.

Alright, let's gooooo 🚀

The lab 🧪

This week I've spent time trying out solutions to run Large Language Models (LLMs) such as Mistral 7B locally. Being able to run such powerful models on a desktop computer feels like magic. As a colleague mentioned: how can so much knowledge be compressed into a 7GB file?! It's mind-blowing.
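A rough arithmetic sketch helps explain that 7GB figure: on-disk size is roughly the parameter count times the bytes stored per parameter, and an 8-bit quantization of a 7-billion-parameter model lands at about 7GB (real quantized files add metadata and mix precisions, so this is only a back-of-envelope estimate):

```python
# Back-of-envelope: model file size ≈ parameter count × bytes per parameter.
# (Rough sketch; real quantized model files add metadata and mixed precisions.)

PARAMS = 7_000_000_000  # Mistral 7B

def model_size_gb(params: int, bytes_per_param: float) -> float:
    """Approximate on-disk size in gigabytes."""
    return params * bytes_per_param / 1e9

for label, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"{label}: ~{model_size_gb(PARAMS, bpp):.1f} GB")
```

So the "knowledge" isn't compressed into 7GB by magic: 7 billion parameters at one byte each is exactly 7GB, and aggressive 4-bit quantization halves that again.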

New articles

No new articles this week

Quotes of the week

  • If you can think and speak and write, you are absolutely deadly. Nothing can get in your way. Writing is the most powerful weapon you can possibly provide someone with — Jordan Peterson
  • My definition of wisdom is knowing the long-term consequences of your actions — Naval Ravikant
  • Pursuing your curiosity is the transition from a life of doing what others tell you to doing what you enjoy — Dan Koe

Running Large Language Models (LLMs) locally

There are more and more ways to run Large Language Models on desktop computers. This week, I've tried the following ones, with a lot of success:

  • 👾 LM Studio: discover, download, and experiment with local LLMs
  • llamafile (Mozilla Hacks): "We're thrilled to announce the first release of llamafile, inviting the open source community to join this groundbreaking project."
  • GPT4All: free, local and privacy-aware chatbots
  • Ollama: get up and running with large language models, locally

LM Studio felt the easiest and most powerful. It provides a desktop application through which you can download various models from Hugging Face and chat with them locally. It also exposes the model through an OpenAI-compatible API, making it a breeze to write code that leverages the model. That gives me a ton of ideas (e.g., an Obsidian plugin interacting with a local LLM to augment my thinking in a privacy-preserving way).
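To give an idea of how simple that is, here's a minimal sketch of talking to a local OpenAI-compatible server using only the Python standard library. The URL and model name are assumptions; check what your LM Studio instance actually exposes (the port and loaded model will vary with your setup):

```python
import json
import urllib.request

# Assumed local endpoint; adjust the port to match your LM Studio server.
API_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_llm(prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires a model loaded in LM Studio with its local server running.
    print(ask_local_llm("Summarize the idea of a second brain in one sentence."))
```

Because the API mimics OpenAI's, any existing OpenAI client library should also work by just pointing its base URL at the local server. Everything stays on your machine.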

The fact that such powerful LLMs exist and are free and open source is mind-blowing. It's a big win for the world, and it is clearly driving a ton of innovation forward, while giving powerful tools to all of us. I think it would be a shame for all those models to be proprietary and hidden away behind paywalls.

There's also been awesome progress on the ability to let LLMs run code locally:

The Open Interpreter Project
Open Interpreter is a free, open-source code interpreter.

This kind of project paves the way for much more powerful uses of LLMs: interacting with agents that can actually do things for you.
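The core loop behind such tools can be sketched in a few lines: the model proposes code, a human approves it, and only then does the host execute it. This is a toy illustration of the pattern, not Open Interpreter's actual implementation; `propose_code` is a hypothetical stand-in for a real model call:

```python
# Toy sketch of the confirm-then-execute loop that tools like Open Interpreter
# build on: the LLM proposes code, the user approves, the host runs it.
# (Illustrative only; propose_code stands in for a real model call.)

def propose_code(task: str) -> str:
    """Stand-in for asking a local LLM to write code for a task."""
    return f"result = len({task!r})"  # trivial placeholder "program"

def run_with_confirmation(task: str, approve) -> object:
    """Ask for code, then run it only if the approve callback agrees."""
    code = propose_code(task)
    if not approve(code):
        return None  # user rejected the proposed code
    namespace = {}
    exec(code, namespace)  # never run unreviewed model output for real
    return namespace.get("result")

# Example: auto-approve for the demo (a real tool would show the code first).
print(run_with_confirmation("count these characters", lambda code: True))
```

The confirmation step is the whole point: letting an LLM execute arbitrary code on your machine is powerful, but you want a human (or at least a sandbox) in the loop.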

I'm going to keep exploring the possibilities during my spare time, as I strongly believe they will lead me towards better thinking and more productivity. And it will help me be ready for what's yet to come. LLMs just can't be ignored. They represent a big shift for humanity.

Thinking and learning

This week, I've also given SDXL Turbo a try. It's an awesome evolution of text-to-image generation by Stability AI. It's now possible to generate images almost as fast as we type. I type fast, so it doesn't always keep up with me, but it's so much more powerful than the initial version. Before, generating images took many iterations; now it's MUCH faster.

Introducing SDXL Turbo: A Real-Time Text-to-Image Generation Model — Stability AI
SDXL Turbo is a new text-to-image model based on a novel distillation technique called Adversarial Diffusion Distillation (ADD), enabling the model to create image outputs in a single step and generate real-time text-to-image outputs while maintaining high sampling fidelity.

I found this cool introduction to LLMs for Hackers:

I've also been looking at Tana for a while. I think it's a super interesting and powerful tool, but it's just not right for me. I don't want to put all my knowledge in a locked box. I want to keep full ownership and control over my knowledge network. But maybe Tana is right for you?

Is Tana Right For You?
Tana is an exciting option when it comes to Archetypes. It sits somewhere between an Architect and a Gardener. It is a great option for an…

Do you spend enough time learning new things? If not, you're probably missing out:

5-Hour Rule: If you’re not spending 5 hours per week learning, you’re being irresponsible
“In my whole life, I have known no wise people (over a broad subject matter area) who didn’t read all the time — none. Zero.” — Charlie…

I agree with Prakash Joshi Pax. Stop trying to create a perfect system. Create and use the simplest possible system, and make it evolve over time, if and when needed:

Stop Trying to Create a Perfect Knowledge Management System
Blindly following someone else’s system won’t lead you anywhere

Check out this great reading list about the neuroscience of reading:

A Reading List About the Neuroscience of Reading - Longreads
“Reading makes us human.”

Are you faking it? Probably not!

Why Everyone Feels Like They’re Faking It
The concept of Impostor Syndrome has become ubiquitous. Critics, and even the idea’s originators, question its value.