How to Run Large Language Models Locally on a Windows Machine Using WSL and Ollama

koji
1 min read · Dec 3, 2023


Prerequisites

Make sure curl is installed in your WSL (Ubuntu) environment:

sudo apt install curl
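This article also assumes that WSL with an Ubuntu distribution is already set up on your Windows machine. If it is not, a minimal sketch of the setup, run from an administrator PowerShell prompt on Windows, looks like this (the Ubuntu distribution name is just one option):

# run in an elevated PowerShell on Windows, then restart when prompted
wsl --install -d Ubuntu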

What is Ollama?

Ollama is a tool that lets you get up and running with large language models locally.

Install Ollama via curl

curl https://ollama.ai/install.sh | sh
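Once the script finishes, you can confirm that the CLI is available before going further (the exact version string will depend on your install):

ollama --version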

Run Ollama

In this case, we will try to run Mistral 7B.
If you want to try another model, you can pick one from the Ollama model library:
https://ollama.ai/library

ollama serve

Then open another terminal tab and run the following command. It pulls the model (if it is not already on your machine) and starts an interactive chat session.

ollama run mistral
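If you would rather download the model first, or check which models are already on disk, the pull and list subcommands can be run separately; a quick sketch:

ollama pull mistral
ollama list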

If everything works properly, you will see an interactive prompt where you can start chatting with the model.

My machine has an RTX 3070 GPU, so Ollama uses the GPU automatically.
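ollama serve also exposes a local REST API on port 11434, so you can send prompts without the interactive session. A minimal sketch with curl (the prompt text is just an example; setting stream to false returns a single JSON response):

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

While a request is running, you can watch GPU utilization with nvidia-smi in another terminal.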

Terminate Ollama

If you want to exit the interactive session, type the following at the prompt (Ctrl + d also works).

/bye

Then press Ctrl + c in the terminal where you ran ollama serve to stop the server.
