Create Your Own Powerful AI Chatbot in Minutes!

Are you ready to run your very own AI chatbot, similar to ChatGPT, right on your computer? In this guide, I’ll walk you through how to install and run Ollama locally, giving you access to an intuitive user interface and the ability to run multiple AI models simultaneously. Best of all, you’ll be able to set this up on both Windows and Linux!

In this step-by-step guide, we’ll cover everything from the basic setup to installing multiple models. I’ll even throw in some bonus tips, like how to analyze documents and photos using your locally hosted AI chatbot.

Why Install Ollama Locally?

Running AI models locally has several benefits:

  • Privacy: Your data stays on your machine, which is great for sensitive information.
  • Flexibility: You can run different models and customize them as needed.
  • Cost Savings: Avoid recurring subscription fees associated with online AI services.

Now, let’s dive into the installation process. The commands below are the same ones used in the video above; follow along with it to see each step in action.

Setting Up Ollama Locally

For Windows users, we’ll leverage the Windows Subsystem for Linux (WSL) to install and run Docker, which is required for hosting Ollama. Linux users can skip the WSL step and run everything that follows from a regular terminal.

Step 1: Install WSL

wsl --install

Run this in an elevated PowerShell or Command Prompt window, then reboot when prompted.
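Once WSL has finished installing and you have rebooted, you can confirm it is ready before moving on. These are standard WSL CLI commands, run from PowerShell on Windows:

```shell
wsl --status   # shows the default distribution and the WSL version in use
wsl -l -v      # lists installed distributions and whether they run on WSL 2
```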

Installing Docker

Both Windows (working inside WSL) and Linux users need Docker. Run the following commands to install it from the command line:

First, update your existing list of packages:

sudo apt update

Next, install a few prerequisite packages that let apt fetch repositories over HTTPS:

sudo apt install apt-transport-https ca-certificates curl software-properties-common

Add the GPG key for the official Docker repository:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Add the Docker repository to your APT sources:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
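To see what that pipeline actually writes to `/etc/apt/sources.list.d/docker.list`, here is a sketch that assembles the same repository line with the two shell substitutions hardcoded. The values `amd64` and `jammy` are assumptions for a 64-bit Ubuntu 22.04 system; the real command fills them in at run time via `dpkg --print-architecture` and `lsb_release -cs`:

```shell
#!/bin/sh
# Illustration only: shows the single line the real command writes,
# without touching /etc/apt. "amd64" and "jammy" are assumed values.
arch="amd64"
codename="jammy"
printf 'deb [arch=%s signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu %s stable\n' "$arch" "$codename"
```

The `signed-by=` option ties this repository to the GPG key you downloaded in the previous step, so apt will only accept packages signed with Docker’s key.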

Update your package list again so apt picks up the newly added repository:

sudo apt update

Install Docker:

sudo apt install docker-ce
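Before moving on, it’s worth confirming that the Docker daemon is actually running. A quick sanity check, assuming a systemd-based distribution:

```shell
docker --version                  # prints the installed client version
sudo systemctl status docker      # the daemon should show "active (running)"
sudo docker run --rm hello-world  # pulls and runs Docker's small test image
```

On some WSL setups systemd is not enabled; in that case `sudo service docker start` starts the daemon instead.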

Installing Open WebUI and Ollama

Now that Docker is installed, you can set up Ollama with Open WebUI. Depending on your hardware, choose one of the two commands below (NVIDIA GPU or CPU-only) to complete the installation.

For NVIDIA GPU Systems:

If you want to utilize GPU resources for more efficient AI processing, follow these steps:

Step 1: Install NVIDIA Container Toolkit

Add the NVIDIA container toolkit repository to your sources list:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

Update your package list:

sudo apt update

Then, install the NVIDIA container toolkit:

sudo apt install -y nvidia-container-toolkit

Reboot your system to apply the changes:

sudo reboot
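After the reboot, you can sanity-check that Docker can see the GPU before starting Open WebUI. A minimal check, assuming the NVIDIA driver is already installed on the host (the CUDA image tag below is just an example; any recent `nvidia/cuda` base image works):

```shell
nvidia-smi   # host-side driver check: should list your GPU
# Run nvidia-smi inside a container to confirm the toolkit passes the GPU through:
sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If the second command prints the same GPU table as the first, the container toolkit is working.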

Step 2: Run Ollama with GPU Support

After rebooting, use the following command to run the Docker container with GPU support:

sudo docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

For CPU-Only Systems:

If you’re not using a GPU, use this command:

sudo docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

This command pulls a Docker image that bundles Open WebUI with Ollama, allowing for a streamlined setup in a single step.
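Once the container is up, a quick way to confirm everything started cleanly (the container name comes from the `--name open-webui` flag above):

```shell
sudo docker ps --filter name=open-webui   # STATUS column should read "Up ..."
sudo docker logs -f open-webui            # follow the startup logs; Ctrl+C to exit
```

The web interface is then available at http://localhost:3000, which maps to port 8080 inside the container via the `-p 3000:8080` flag.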

Resources

For further information, check out these resources: