Local LLMs on WSL2


The example below demonstrates running Ollama and Open-WebUI locally in a containerized WSL2 environment.
- The model Ollama runs in this example is qwen2.5-coder:0.5b (a quick way to query it is sketched after this list).
- It presently runs on the CPU, since my Intel Iris Xe iGPU does not appear to be natively supported by Ollama at the time of writing.
- Also, one potential route to getting Ollama working with the Intel Iris Xe iGPU is Intel's ipex-llm acceleration library.
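
As a quick sanity check for this setup, here is a minimal Python sketch (standard library only) that sends a prompt to the local Ollama REST API and then inspects the `/api/ps` endpoint to see whether the loaded model has any VRAM allocated, i.e. whether it is running on CPU or GPU. It assumes Ollama's default port 11434 is published from the container to the WSL2 host and that `qwen2.5-coder:0.5b` has already been pulled.

```python
import json
import urllib.request

# Assumptions: the Ollama container publishes its default port 11434,
# and `ollama pull qwen2.5-coder:0.5b` has already been run.
OLLAMA_URL = "http://localhost:11434"
MODEL = "qwen2.5-coder:0.5b"


def generate(prompt: str) -> str:
    """Send a single non-streaming completion request to Ollama's /api/generate."""
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]


def report_processor() -> None:
    """List loaded models via /api/ps; size_vram == 0 implies CPU-only execution."""
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/ps") as resp:
        for m in json.load(resp).get("models", []):
            where = "CPU" if m.get("size_vram", 0) == 0 else "GPU (VRAM allocated)"
            print(f"{m['name']}: running on {where}")


if __name__ == "__main__":
    print(generate("Write a Python one-liner that reverses a string."))
    report_processor()
```

On the CPU-only setup described above, the second check should report `size_vram` as zero for qwen2.5-coder:0.5b; if the ipex-llm route works out, the same check is a convenient way to confirm the iGPU is actually being used.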