Local LLMs on WSL2


The example below demonstrates running Ollama and Open-WebUI locally in a containerized environment on WSL2.
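As a quick sanity check once the containers are up, the Ollama API can be queried directly from inside WSL2. The sketch below assumes Ollama's default port 11434 is published to localhost; the `/api/tags` endpoint lists the models that have already been pulled.

```python
import json
import urllib.request

# Assumption: the Ollama container publishes its default port 11434 on localhost.
OLLAMA_URL = "http://localhost:11434"

def list_local_models() -> list[str]:
    """Return the names of models already pulled into the local Ollama instance."""
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
        payload = json.load(resp)
    return [m["name"] for m in payload.get("models", [])]

if __name__ == "__main__":
    print("Models available locally:", list_local_models())
```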

- The model Ollama is running in this example is Qwen2.5-Coder:0.5b (a short query sketch follows after this list).

- It currently runs on the CPU, since my Intel Iris Xe iGPU does not appear to be natively supported by Ollama at the moment.

- One potential way to get Ollama working with the Intel Iris Xe iGPU is to leverage Intel's ipex-llm acceleration library.
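For reference, here is a minimal sketch of prompting the Qwen2.5-Coder:0.5b model through Ollama's `/api/generate` endpoint, again assuming the default `localhost:11434` port mapping; Open-WebUI offers the same interaction through the browser.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # assumed default port mapping
MODEL = "qwen2.5-coder:0.5b"

def generate(prompt: str) -> str:
    """Send a single non-streaming prompt to the local Ollama instance."""
    body = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    # Even on CPU, a 0.5b model should answer a short prompt within a few seconds.
    print(generate("Write a Python one-liner that reverses a string."))
```

The 0.5b coder variant is small enough to stay responsive on CPU alone, which makes it a reasonable smoke test while GPU offload via ipex-llm is still being explored.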
