How to Run Llama-3.1🦙 Locally Using Python🐍 and Hugging Face 🤗
Introduction
The latest Llama🦙 (Large Language Model Meta AI) 3.1 is a powerful AI model developed by Meta AI that has gained significant attention in the natural language processing (NLP) community. It is one of the most capable open-source LLMs to date. In this blog, I will guide you through the process of cloning the Llama 3.1 model from Hugging Face🤗 and running it on your local machine using Python. You can then integrate it into any AI project.
Prerequisites
- Python 3.8 or higher installed on your local machine
- The Hugging Face Transformers library installed (pip install transformers)
- Git (with Git LFS) installed on your local machine
- A Hugging Face account
Step 1: Get access to the model
- Open https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct, the official Hugging Face repository of Meta's Llama-3.1-8B-Instruct (the other Llama 3.1 models work the same way).
- At the top of the model page you will see a gated-access notice.
- Submit the form to request access to the model.
- Once you see "You have been granted access to this model", you are good to go.
Step 2: Create an ACCESS_TOKEN
- Go to "Settings" from your profile menu.
- Go to "Access Tokens" and click "Create new token".
- Give the token read and write permissions and select the repository.
- Copy the token and store it somewhere safe and secure, as it will be needed later. (Note: the token is displayed only once; if you lose it, you will have to create a new one.)
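One safe place for the token is an environment variable. Here is a minimal sketch (the variable name HF_TOKEN is the convention the Hugging Face CLI also uses; the helper function below is just for illustration, and the token value shown is obviously fake):

```python
import os

def get_hf_token():
    """Read the Hugging Face access token from the HF_TOKEN environment variable."""
    token = os.environ.get("HF_TOKEN")
    if token is None:
        raise RuntimeError("Set HF_TOKEN to your Hugging Face access token first")
    return token

# For illustration only: simulate a configured environment with a fake token.
os.environ["HF_TOKEN"] = "hf_example_not_a_real_token"
print(get_hf_token())
```

This keeps the token out of your scripts and shell history, so you can share or commit your code without leaking credentials.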
Step 3: Clone the LLaMA 3.1 Model
Now run the following command in your favorite terminal. The ACCESS_TOKEN is the one you copied, and <huggingface-user-name> is the username of your Hugging Face account. Since the model weights are stored with Git LFS, make sure it is installed and initialized (git lfs install) before cloning.
git clone https://<huggingface-user-name>:<ACCESS_TOKEN>@huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
This can take a long time depending on your internet speed.
Step 4: Install Required Libraries
Once the cloning is done, go to the cloned folder and install all the dependencies from the requirements.txt. (You can create a virtual environment first, using conda (recommended) or virtualenv.) You can find the requirements file in my GitHub repository, linked in the resources section below.
Using conda:
cd Meta-Llama-3.1-8B-Instruct
conda install --yes --file requirements.txt
Using pip:
cd Meta-Llama-3.1-8B-Instruct
pip install -r requirements.txt
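If you just want a rough idea of what requirements.txt contains, the core dependencies for this setup would look roughly like the following (this is my assumption, not the exact pins from the repository; accelerate is needed for the device_map="auto" option used in the next step):

```
transformers
torch
accelerate
```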
Step 5: Run the Llama 3.1 Model
Create a new Python file (e.g., test.py) and set model_id to the location of the model repository you just cloned (such as "D:\\Codes\\NLP\\Meta-Llama-3.1-8B-Instruct"). Here is an example:
import transformers
import torch

# Here you paste your cloned repo's location
model_id = "D:\\Codes\\NLP\\Meta-Llama-3.1-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)

# The last message in generated_text is the assistant's reply
print(outputs[0]["generated_text"][-1])
device_map="auto" automatically uses the GPU when one is available; you can also set device_map="cuda" explicitly if you want to force GPU usage.
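Since the messages list is plain Python data, you can also steer the model by adding a system message before the user turn (the prompt text below is just an example):

```python
# Chat-style input: a list of {"role": ..., "content": ...} dicts.
# A "system" message sets the assistant's overall behavior.
messages = [
    {"role": "system", "content": "You are a concise assistant that answers in one sentence."},
    {"role": "user", "content": "Who are you?"},
]

# The pipeline call stays the same; generation settings such as
# do_sample/temperature can be tuned as well, e.g.:
# outputs = pipeline(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
print(messages[0]["role"])
```

This is the same structure the pipeline already accepts, so no other changes to the script are needed.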
Step 6: Run the Python Script
python test.py
Output
Issues you can face
OSError: [WinError 126] Error loading fbgemm.dll
To solve this error, make sure you have Visual Studio installed. If you don't have it, install it from the official Microsoft site, then restart your computer.
If you still face errors with PyTorch versions, use Anaconda or Miniconda to configure a new environment with a suitable Python version and dependencies.
If you are facing any other issue or error feel free to comment below.
Resources
- For more details on Llama 3.1, check out: https://ai.meta.com/blog/meta-llama-3-1/
- My implementation: https://github.com/Debapriya-source/llama-3.1-8B-Instruct.git
Conclusion
In this blog, we have successfully cloned the Llama-3.1-8B-Instruct model from Hugging Face and run it on our local machine using Python. You can now experiment with the model by modifying the prompt, adjusting hyperparameters, or integrating it into your upcoming projects. Happy coding!
Written by
Debapriya Das