Running Ollama in GitHub Codespaces

BlackTechX
3 min read

Learn how to efficiently run Ollama in GitHub Codespaces for free.

What is a Codespace?

A codespace is a cloud-hosted development environment tailored for coding. GitHub Codespaces allows you to customize your project by committing configuration files to your repository, creating a consistent and repeatable environment for all users. For more details, refer to the Introduction to dev containers.
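One way to commit that configuration is a dev container definition that installs Ollama automatically when the codespace is created. A minimal sketch, assuming the stock Ubuntu dev container image; the name and memory values are illustrative and should be adjusted to your project:

```shell
# Sketch: generate a dev container config that installs Ollama on creation.
# The image, name, and hostRequirements values here are assumptions.
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "ollama-codespace",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "hostRequirements": { "memory": "8gb" },
  "postCreateCommand": "curl -fsSL https://ollama.com/install.sh | sh"
}
EOF
```

With this file committed, every new codespace for the repository runs the Ollama install script as part of setup, so the manual install step below becomes optional.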

What is Ollama?

Ollama is an open-source project designed to simplify running Large Language Models (LLMs) on local machines. It provides a user-friendly interface and functionality to make advanced AI accessible and customizable.

Setting Up Ollama in GitHub Codespaces

Follow these steps to set up and run Ollama in a GitHub Codespace:

1. Open a Codespace

  • Navigate to your repository on GitHub.

  • Click the Code button and select the Codespaces tab.

  • If you don't have an existing codespace, create a new one.

2. Install Ollama

  • Open the terminal in your codespace.

  • Run the following command to download and install Ollama:

      curl -fsSL https://ollama.com/install.sh | sh
    

3. Verify the Installation

  • Run ollama with no arguments to verify the installation:

      ollama
    
  • If the installation is successful, you will see a list of available Ollama commands.

4. Start Ollama

  • Run the following command to start the Ollama server (it runs in the foreground, so keep this terminal open and use a second terminal for the next step):

      ollama serve
    

5. Run and Chat with Llama 3

  • To download, run, and interact with the Llama 3 model, use the following command (the first run downloads roughly 4.7 GB):

      ollama run llama3
    
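The steps above can be scripted, and the one subtle part is waiting for ollama serve to come up before running a model. A minimal sketch, assuming Ollama's default port 11434 and its /api/version endpoint:

```shell
# Sketch: poll the Ollama server until it accepts requests, or give up.
# Assumes the default port 11434; pass a different port as the first argument.
wait_for_ollama() {
  port="${1:-11434}"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -s "http://localhost:$port/api/version" >/dev/null 2>&1; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out"
  return 1
}
```

Typical usage after starting the server in the background: wait_for_ollama && ollama run llama3.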

Ollama Model Library

Ollama provides a variety of models that you can download and use. Visit the Ollama model library for a complete list.

Example Models

Here are some example models available for use:

Model                Parameters   Size    Command
Llama 3              8B           4.7GB   ollama run llama3
Llama 3              70B          40GB    ollama run llama3:70b
Phi 3 Mini           3.8B         2.3GB   ollama run phi3
Phi 3 Medium         14B          7.9GB   ollama run phi3:medium
Gemma                2B           1.4GB   ollama run gemma:2b
Gemma                7B           4.8GB   ollama run gemma:7b
Mistral              7B           4.1GB   ollama run mistral
Moondream 2          1.4B         829MB   ollama run moondream
Neural Chat          7B           4.1GB   ollama run neural-chat
Starling             7B           4.1GB   ollama run starling-lm
Code Llama           7B           3.8GB   ollama run codellama
Llama 2 Uncensored   7B           3.8GB   ollama run llama2-uncensored
LLaVA                7B           4.5GB   ollama run llava
Solar                10.7B        6.1GB   ollama run solar

Important: Ensure that your system meets the following RAM requirements:

  • At least 8 GB of RAM for 7B models

  • At least 16 GB of RAM for 13B models

  • At least 32 GB of RAM for 33B models
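These thresholds can be checked before pulling a model. A minimal sketch that reads total RAM from /proc/meminfo (Linux, as in a codespace); the helper name is hypothetical, and a kB value can be passed explicitly to override the lookup:

```shell
# Sketch: check whether this machine meets a model's RAM requirement.
# First argument: required GB; optional second argument: total RAM in kB
# (defaults to the MemTotal line of /proc/meminfo).
ram_ok_for() {
  needed_gb="$1"
  total_kb="${2:-$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)}"
  total_gb=$((total_kb / 1024 / 1024))
  if [ "$total_gb" -ge "$needed_gb" ]; then
    echo "ok: ${total_gb}GB available, ${needed_gb}GB required"
  else
    echo "insufficient: ${total_gb}GB available, ${needed_gb}GB required"
    return 1
  fi
}
```

For example, ram_ok_for 8 && ollama run llama3 only starts the 8B model when at least 8 GB of RAM is present.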

By following these steps, you can set up and run Ollama efficiently within GitHub Codespaces, leveraging its cloud-based environment to explore and utilize powerful LLMs.

Open on GitHub

https://github.com/BlackTechX011/Ollama-in-GitHub-Codespaces
