OLLAMA — Your Local LLM Friend: Installation Tutorial 🖥️🚀

Gurneet Singh
5 min read · Aug 10, 2024


Large Language Models, or LLMs, have transformed many industries with their scale and capabilities. But because of their size and substantial hardware requirements, they are typically built by tech giants and offered for a fee (or free only for a limited time). Still, many researchers and hobbyists want to run these powerful models on their own machines, even on a CPU that is ten years old, such as a Core 2 Duo or a Bulldozer! 💭💻

Setting up an LLM for personal use can be daunting and potentially budget-breaking. But here’s where OLLAMA comes into the picture. 🌟 This is something everyone needs and deserves.

Why OLLAMA?🤔

Ollama lets users run different LLMs locally without the hassle of complicated configurations. It works on AMD GPUs (via ROCm), NVIDIA GPUs, and even your dependable old CPU, making it quite flexible. 🖥️✨

This post discusses Ollama and offers a step-by-step installation guide for your local computer. To help you get started, we’ve also included some Python code snippets, so be sure to read to the end! 🐖🐍

Introduction📜

With Ollama, which is open-source (+1), major LLMs can be run locally (+1). Consider it an LLM version of Docker. 🐳 It packs the data, weights, and model configuration into a single file called a Modelfile. This means that with just one command, you can run an entire LLM locally. In contrast to closed-source models like ChatGPT, Ollama provides transparency (+1) and customization (+1), making it a useful tool for both developers and enthusiasts. 🔓🛠️
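
To make that concrete, here is a tiny sketch of what a Modelfile might contain, based on Ollama’s documented directives (the base model and values here are just placeholders):

FROM phi3
PARAMETER temperature 0.8
SYSTEM You are a friendly assistant who keeps answers short.

You could then build and run your customized model with ollama create my-phi3 -f Modelfile followed by ollama run my-phi3.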

Windows Installation🪟

The Ollama developers have provided one of the easiest setups of all time. Kudos to the team! 🎉 Head over to the download link to get the Ollama installer.

Download Here

Installer Screen

Open the installer as soon as it has finished downloading. The installer’s final screen is also your initial greeting (😂).

Select “Install,” and that’s it. A refreshingly easy installation; you can’t find enough of these nowadays. 🚀✨

Ollama finishes installing and launches itself in the background automatically. Don’t worry, the LLMs are still asleep for now. It’s time to wake them up. 😴⏰

Waking Up Your LLMs 🌟🔧

Ollama is now installed and running in the background, ready to help. You can find the adorable little Ollama icon among the taskbar’s hidden icons. 🦙✨

Ollama Icon (Left Most)

If nothing opens when you click the icon, right-click it and choose “View Logs”. This launches a command line interface. It’s time to rouse those LLMs to consciousness and prepare them for action! ⏰

Command Prompt
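
If you’d rather confirm things from a terminal of your own, the standard Ollama CLI commands below will show the installed version and list the models you have pulled so far (none yet at this point):

ollama --version
ollama list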

Now, head over to the Models Library on the Ollama website and browse through all the supported models. Once you’ve found the perfect model, you’re ready to bring it to life! For this article, we’ll install phi3, a lightweight model that’s perfect for running on a CPU. 🖥️⚡

ollama run phi3

If this is your first time running the model, Ollama will download and set it up first, so expect a brief wait until the magic happens. 🧙♂️✨

Model Installation

Once the model is installed, Ollama automatically starts it on whatever device is available. 🚀

Note: Running LLMs on a CPU is not advisable, since it can make your other tasks lag. NVIDIA GPUs are the best choice because of their strong support. AMD GPU users need ROCm to get going, so good luck with that! 🍀 That said, you can use whatever your budget allows. 💸

The LLM itself will greet you when the installation is finished. A command line interface will launch, and you can immediately begin speaking with it. 🗨️💬 There are some GUI interfaces available as well on the Ollama GitHub repo, so head over there for more info.
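
Under the hood, Ollama also exposes a local HTTP API (on http://localhost:11434 by default), which is what those GUI interfaces talk to. As a rough sketch, assuming the default port and that you have the requests package installed, you could query it straight from Python:

import requests

# ask the locally running Ollama server for a one-shot completion
reply = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "phi3", "prompt": "Say hi in one line.", "stream": False},
)
print(reply.json()["response"])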

Post Installation

Send a few messages to your personal LLM buddy and have a chat while we move on to the Python implementation. 🗣️🗣️

Sample Ollama Output

Python Implementation 🐍💻

Now that your LLM is up and running, it’s time to have some fun with Python! 🎉🐍 Here’s how you can integrate Ollama with your Python projects.

Install the Ollama Python package using pip:

pip install ollama

Then import ollama in your code. Here is some sample code to get you started:

import ollama as ol

model = "phi3"
message = [
    {
        'role': 'user',
        'content': 'Crack a Joke on LLM',
    },
]

# send a chat request to the locally running model
response = ol.chat(model=model, messages=message)
print(response)

You will get a fairly long dictionary back as the response. To extract just the generated text, use:

print(response['message']['content'])

And it will give you exactly the response you were after (aside from a few escape sequences).

Sample Jupyter Notebook
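
If you’d rather watch the reply appear word by word instead of waiting for the whole dictionary, the same chat call accepts a stream flag. Here is a small sketch based on the library’s streaming interface:

import ollama as ol

# stream=True yields the reply in chunks as the model generates it
stream = ol.chat(
    model="phi3",
    messages=[{'role': 'user', 'content': 'Crack a Joke on LLM'}],
    stream=True,
)
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)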

What’s Next? 🌟🚀

Seems pretty straightforward, doesn’t it? Now that Ollama is up and running, you’re ready to explore even more options! If you’re itching for more Python implementations, visit the Ollama Python Docs for a wealth of ideas and examples.
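
For instance, besides chat, the package also offers a single-prompt generate call (sketched below from the library’s docs; no chat history involved):

import ollama as ol

# one-shot completion without any chat history
result = ol.generate(model="phi3", prompt="Explain Ollama in one sentence.")
print(result['response'])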

🐍But there’s still more! We’ll explore the wonders of Pandas AI and demonstrate how to work with Ollama locally in our upcoming article. 📊🤖

The next one is going to be amazing, so stay tuned! 🔥✨

Originally published at https://www.linkedin.com.


Written by Gurneet Singh

Passionate about Data Science, Computer Vision, and IoT | Sharing insights, projects, and stories on the intersection of technology and innovation.
