DeepSeek R1 in OutSystems Developer Cloud with Amazon Bedrock


The release of DeepSeek R1 has gained a lot of attention. It shows reasoning abilities that match, and sometimes surpass, OpenAI's o1 model, while requiring only a fraction of the training resources. At the time of writing, OpenAI charges $15 for 1 million input tokens and $60 for 1 million output tokens through their API, while DeepSeek charges $0.55 and $2.19 for 1 million input and output tokens, respectively. The best part is that the DeepSeek models are open source, and you can run them on a runtime of your choice, such as Ollama, vLLM, LM Studio, Azure AI, Amazon Bedrock, and many more.
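To put the pricing gap in perspective, here is a quick back-of-the-envelope calculation using the API prices quoted above (USD, subject to change):

```python
def api_cost(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Cost in USD for a given token volume at per-million-token prices."""
    return (input_tokens / 1e6) * in_price_per_m + (output_tokens / 1e6) * out_price_per_m

# 1M input + 1M output tokens at each provider's quoted rates
openai_o1 = api_cost(1_000_000, 1_000_000, 15.00, 60.00)
deepseek_r1 = api_cost(1_000_000, 1_000_000, 0.55, 2.19)
print(f"OpenAI o1:   ${openai_o1:.2f}")    # $75.00
print(f"DeepSeek R1: ${deepseek_r1:.2f}")  # $2.74
```

Roughly a 27x difference for the same token volume, before factoring in self-hosting costs.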
In this tutorial, we will walk through the steps to set up DeepSeek R1 on Amazon Bedrock as an Imported Model and use it in OutSystems Developer Cloud applications.
Alternatively, DeepSeek R1 is also available for marketplace deployment, which is more suitable for production use. Marketplace deployments are set up on SageMaker, offering more configuration options for scalability and security. However, deploying DeepSeek through the marketplace requires a service quota for p5 compute instances, which must be requested first (a deliberate safeguard against accidental high costs). That's why we use the Imported Models feature of Bedrock, which doesn't have this restriction but does come with some limitations; see the Important Notes section of this article. For a development or trial environment, it is good enough.
Amazon Bedrock Imported Models
Bedrock is an easy-to-use, fully managed AI service by AWS. It provides access to a wide range of foundation models from its model catalog through a unified API. In addition to these catalog models, Bedrock also allows you to import your own models.
Imported models must follow specific predefined architectures to be supported by Bedrock. As of this writing, the supported model architectures include:
Mistral
Mixtral
Llama 2, Llama 3, Llama 3.1, Llama 3.2, and Llama 3.3
Flan
For a complete list of supported architectures, see Supported Architectures in the Amazon Bedrock documentation.
Fortunately, DeepSeek R1 is available as a distilled Llama model, which lets us import it into Amazon Bedrock. Distillation means that a teacher model (DeepSeek R1) transfers its "knowledge" to a student model (Llama 3.1). DeepSeek R1 and its distilled models are available on Hugging Face for download.
For our walkthrough we will use deepseek-ai/DeepSeek-R1-Distill-Llama-8B.
Steps Outline
The steps are straightforward, but they may take some time because of the model's size.
Upload the model to an S3 bucket
Create a model import job in the Bedrock console
Use the model in ODC
Prerequisites
Before we begin, ensure you meet all the necessary prerequisites.
Environment
AWS Account - You need access to an AWS account with permissions for Amazon Bedrock and Amazon Simple Storage Service (S3).
Git - On your development workstation, you need the git command line tools and git lfs installed.
AWS Credentials - Access Key and Secret Access Key for an IAM user with permissions to use custom models in Amazon Bedrock.
S3 Bucket - An empty S3 bucket in the us-east-1 region (Bedrock only allows model imports from an S3 bucket).
AWS CLI - Optional. Can be used to upload model files to S3 as an alternative to the console.
OutSystems Developer Cloud
To follow along, you will need to install two assets from Forge into your OutSystems Developer Cloud development stage.
AWSBedrockRuntime - provides an action to execute the Amazon Bedrock InvokeModel API.
AWS Bedrock Model Invocations - server actions to create a model-specific prompt for model invocation.
With all the prerequisites completed, let's start by uploading the model to an S3 bucket.
Upload Model to S3
Unfortunately, there isn't a direct way to copy a model from Hugging Face into an S3 bucket, or at least I haven't found one. First, we download the model from Hugging Face to our local computer, and then we upload it to S3.
Run the following command in a terminal window:
git clone --depth 1 https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B
This command clones the Hugging Face repository into a folder named DeepSeek-R1-Distill-Llama-8B. Models are quite large, so the download may take some time.
After the download is complete, upload the entire folder to the S3 bucket you created. You can use either the AWS S3 console or the AWS command line interface if it is installed and set up.
aws s3 cp DeepSeek-R1-Distill-Llama-8B s3://<bucket>/ --recursive
Import Model to Amazon Bedrock
Next, switch to the AWS Bedrock console. Make sure to select the us-east-1 region. In the console's menu, select Imported models and click Import Model.
In the dialog:
Model Details - Model name: DeepSeek-R1-Distill-Llama-8B.
Import job name - Name: Import-DeepSeek-R1-Distill-Llama-8B.
Model import settings - Model import source: Amazon S3 bucket.
Model import settings - S3 location: Choose the DeepSeek-R1-Distill-Llama-8B folder you uploaded to your S3 bucket.
Service access - Choose a method to authorize Bedrock: Create and use a new service role.
Service access - Service role name: AWSBedrockModelImportRole. This role will have permission to access your S3 bucket, and you can reuse it later for other model imports to Bedrock.
Click Import Model to start the job. It will take several minutes to complete, and you can check the status in the Jobs tab.
After the job completes successfully, an entry will appear in the model tab. Click on the entry and copy the model ARN (Amazon Resource Name).
At this stage, you can also open the model in the Playground and interact with it. If you encounter a "Model not Ready" exception, please refer to the Important Notes section of this article.
You will need the model ARN, along with AWS credentials, to use the model from an OutSystems application.
Using the Model
Using the model in an ODC app is straightforward. The AWSBedrockRuntime Forge component wraps the official Amazon SDK for Bedrock and provides an action called InvokeModel for direct model use. This action expects the request payload as binary data. Here's how to create a payload for the DeepSeek R1 model we just imported.
Open the AWS Bedrock Model Invocations library in ODC Studio.
Double-click the server action Deepseek_Llama_Invoke in Logic - Server Actions - Invocation.
Check the Request input parameter, which has an array of Messages with a Role and Content attribute. The Role can be either user or assistant, and the Content is any text. This structure somewhat follows the OpenAI API standard for conversational message prompts.
However, a model does not understand this structure directly. Instead, it uses specialized tokens to distinguish between a system prompt, a user message, or an assistant message. Therefore, the first step is to create a prompt string from this structure, which is done in the GenerateLlamaInvokePayload action.
GenerateLlamaInvokePayload performs the following steps:
Creates a StringBuilder object.
If the request has a system prompt, appends
"<|begin_of_text|><|start_header_id|>system<|end_header_id|>" + NewLine() +
Request.System + "<|eot_id|>" + NewLine()
Iterates over all Message items of the Messages array and appends
"<|start_header_id|>" + Request.Messages.Current.Role + "<|end_header_id|>" + NewLine() +
Request.Messages.Current.Content + "<|eot_id|>" + NewLine()
Executes the StringBuilder_ToString action to get the complete prompt string.
Assigns global request parameters, such as MaximumLength, and the prompt string to a local variable.
Serializes the local variable.
Converts the serialized payload to binary data.
Returns the binary data payload.
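The steps above can be sketched in Python. This is an illustrative rough equivalent of the ODC action, not its actual implementation; the function names and the trailing assistant header (commonly appended so the model continues as the assistant) are my own additions.

```python
import json

def build_llama_prompt(system: str, messages: list[dict]) -> str:
    """Builds a Llama 3 chat-template prompt string from a message list."""
    parts = []
    if system:
        parts.append("<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
                     + system + "<|eot_id|>\n")
    for msg in messages:  # each msg has a "role" ("user"/"assistant") and "content"
        parts.append("<|start_header_id|>" + msg["role"] + "<|end_header_id|>\n"
                     + msg["content"] + "<|eot_id|>\n")
    # Cue the model to respond as the assistant.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n")
    return "".join(parts)

def build_payload(prompt: str, max_gen_len: int = 512) -> bytes:
    """Serializes the request into the binary payload InvokeModel expects."""
    return json.dumps({"prompt": prompt, "max_gen_len": max_gen_len}).encode("utf-8")

prompt = build_llama_prompt("You are a helpful assistant.",
                            [{"role": "user", "content": "Hello!"}])
payload = build_payload(prompt)
```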
In the Deepseek_Llama_Invoke action, this payload is used along with your AWS credentials to invoke the model.
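Outside of ODC, the same invocation can be sketched with boto3, the AWS SDK for Python. This is an illustrative equivalent of what the Forge component does internally, not its actual implementation; the "generation" field is the Llama-style response format that imported Llama models return.

```python
import json

def parse_generation(raw: bytes) -> str:
    """Extracts the generated text from a Llama-style response body."""
    return json.loads(raw)["generation"]

def invoke_imported_model(model_arn: str, payload: bytes, region: str = "us-east-1") -> str:
    """Calls the Bedrock InvokeModel API with the binary payload."""
    import boto3  # imported lazily; requires configured AWS credentials
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(modelId=model_arn, body=payload)
    return parse_generation(response["body"].read())

# Example usage (needs credentials with Bedrock permissions and your model ARN):
# text = invoke_imported_model("arn:aws:bedrock:us-east-1:...:imported-model/...",
#                              json.dumps({"prompt": "Hello", "max_gen_len": 64}).encode())
```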
Now it's up to you to build something on top of it.
Important Notes
Imported models in Amazon Bedrock are removed from memory when they are not used for a couple of minutes, and a cold start takes up to 10 seconds, depending on the model. This causes additional latency and may even lead to a ModelNotReady exception. You can read more in the documentation.
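One way to absorb this cold-start latency is to retry the invocation with exponential backoff. A minimal, illustrative sketch (invoke_fn would wrap your actual InvokeModel call and raise on a ModelNotReady error):

```python
import time

def invoke_with_retry(invoke_fn, max_attempts: int = 5, base_delay: float = 2.0):
    """Calls invoke_fn, retrying with exponential backoff on failure."""
    for attempt in range(max_attempts):
        try:
            return invoke_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))  # waits 2s, 4s, 8s, ...
```

The AWS SDKs can also be configured to retry automatically, which achieves the same effect without hand-rolled logic.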
Take some time to review Bedrock pricing for imported models.
Summary
In this tutorial, we imported a distilled DeepSeek R1 model into Amazon Bedrock. We also explored how to create a prompt string from a request structure in ODC to perform a direct model invocation.
I hope you enjoyed it. Feel free to leave a comment with your questions, or even better, tell us what you have built. We greatly appreciate any feedback.
Written by

Stefan Weber
As a seasoned Senior Director at Telelink Business Services EAD, a leading IT full-service provider headquartered in Sofia, Bulgaria, I lead the charge in our Application Services Practice. In this role, I spearhead the development of tailored software solutions using no-code/low-code platforms and cutting-edge cloud-ready/cloud-native solutions based on the Microsoft .NET stack. Throughout my diverse career, I've accumulated a wealth of experience in various capacities, both technically and personally. The constant desire to create innovative software solutions led me to the world of Low-Code and the OutSystems platform. I remain captivated by how closely OutSystems aligns with traditional software development, offering a seamless experience devoid of limitations. While my managerial responsibilities primarily revolve around leading and inspiring my teams, my passion for solution development with OutSystems remains unwavering. My personal focus extends to integrating our solutions with leading technologies such as Amazon Web Services, Microsoft 365, Azure, and more. In 2023, I earned recognition as an OutSystems Most Valuable Professional, one of only 80 worldwide, and concurrently became an AWS Community Builder.