ODC with AWS S3 - Browser to S3
This article explains how to upload and download files directly from a client's browser to an Amazon Simple Storage Service (S3) bucket, bypassing the backend of an OutSystems Developer Cloud (ODC) application.
By default, an S3 bucket is private and not accessible to the public without authorization. To let a frontend application store or retrieve a file, the easiest and safest way is to generate pre-signed S3 URLs. These URLs offer limited access and are perfect for use in a frontend application.
This article is part of a series that explores ways to interact with Amazon S3 buckets. Be sure to read the introductory article to understand why S3 can be a valuable addition to your ODC applications and the challenges you might encounter.
Pre-signed URLs
AWS S3 pre-signed URLs offer a way to temporarily access a private object in an Amazon S3 bucket. These URLs include authentication information in the query string, allowing a user to perform a specific action, like reading or writing to an S3 object, without needing to authenticate.
One benefit is that the S3 object targeted by the pre-signed URL doesn't need to exist beforehand. This means you can create a pre-signed URL for a brand new object, allowing you to upload new objects to an S3 bucket using the URL.
When creating a pre-signed URL, you specify how the object will be accessed. This includes at least the request type (GET, PUT, etc.) and can also include additional request parameters like headers and metadata. This request is then signed with your AWS credentials, and the signature is added to the pre-signed URL along with additional parameters.
When using a pre-signed URL to request the object, it's important to execute the request with the same set of parameters that were specified when the URL was signed. The request must be of the exact type and include the specified headers and other parameters; otherwise, the signature validation, and therefore the object authorization, will fail.
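For example, a URL pre-signed for a PUT request can only be consumed with a PUT request. Here is a minimal, hypothetical sketch using the browser's fetch API (presignedUrl and file are placeholders):

// presignedUrl was generated for a PUT request with a
// content-disposition header. The request method, and every header
// that was included in the signature, must match what was specified
// at signing time, or S3 rejects the request with a 403 response.
const response = await fetch(presignedUrl, {
    method: 'PUT',
    headers: {
        'content-disposition': 'attachment; filename="report.pdf"',
    },
    body: file, // e.g. a File object from an <input type="file">
});
if (!response.ok) {
    console.error('Signature validation failed', response.status);
}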
Prerequisites
To try out the reference application, you will need the following:
- An S3 bucket
- AWS credentials with a policy that allows storing and retrieving objects:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::<yourbucketname>/*"
        }
    ]
}
- A CORS policy configured on the bucket (you might want to restrict it further):
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
- The settings of the reference application configured in ODC Studio.
With all prerequisites completed, let's begin with the upload process.
Upload
Uploading a file from the frontend application requires some JavaScript coding. The reference application includes a complete example that we will use to walk through the process:
1. Generate a pre-signed URL for a PUT request to a new object in the backend, using the AWSSimpleStorage external logic connector.
2. Use the pre-signed URL in the frontend to perform the upload with JavaScript's XMLHttpRequest class.
Open the reference application in ODC Studio. On the Interface tab, select the AWSS3Upload block.
Open the AWSS3Upload widget tree. Inside you will find an HTML input element of type file named UploadElement.
You will also find a progress element, which is used to display the upload progress.
The widget's CSS uses some design tokens (CSS variables) that can be overridden in the screen or in your application theme. Note that the HTML input element is hidden and that all styles apply to the wrapping label element.
OnReady and OnDestroy Event Handlers
The OnReady event handler contains a single JavaScript element that registers a change event listener, invoking the OnFileUploadChange client action.
const uploadElement = document.getElementById($parameters.UploadFileWidgetId);
uploadElement.addEventListener('change', $actions.OnFileUploadChange);
console.debug('Change listener added');
Likewise, in the OnDestroy event handler the change listener is removed.
const fileElement = document.getElementById($parameters.UploadFileWidgetId);
fileElement.removeEventListener('change', $actions.OnFileUploadChange);
console.debug('Change listener removed');
OnFileUploadChange Event Handler
This event handler is executed when the user selects a file to upload. The Event object contains information about the selected file.
The JavaScript inside the event handler first checks whether a file was selected. If not, the script returns early.
It then calls another client action, GetPresignedPutUrl (located outside the widget, in the Logic tab), to get a pre-signed URL for a given prefix (a folder) and the filename of the selected file. With the pre-signed URL, it executes the widget's UploadFile client action.
if ($parameters.Event.target.files.length === 0) {
    return;
}

const file = $parameters.Event.target.files[0];

$actions.GetPresignedPutUrl($parameters.Prefix, file.name)
    .then((result) => {
        $actions.UploadFile(
            result.PreSignedUrl,
            file.name,
            file.type,
            file.size,
            file
        );
    });
GetPresignedPutUrl Client Action
In the Logic tab you will find the GetPresignedPutUrl client action. It executes the Client_GetUploadUrl server action, located directly under Server Actions, and returns the generated pre-signed URL.
Client_GetUploadUrl calls the GetPreSignedUrl action from the AWSSimpleStorage Forge component, using credentials stored as application settings.
It creates a pre-signed URL for a PUT request to the specified key. The generated pre-signed URL is valid for 2 hours, as indicated by the Expires attribute.
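For readers curious about what happens under the hood, here is a minimal sketch of the equivalent pre-signed URL generation using the AWS SDK for JavaScript v3 instead of the Forge component (bucket, key, and region are illustrative):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const client = new S3Client({ region: "eu-central-1" });

// The targeted key doesn't need to exist yet, which is what makes
// uploading new objects through a pre-signed URL possible.
const command = new PutObjectCommand({
    Bucket: "yourbucketname",
    Key: "uploads/report.pdf",
});

// expiresIn is given in seconds; 7200 matches the two-hour validity
// used by the reference application.
const presignedUrl = await getSignedUrl(client, command, { expiresIn: 7200 });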
UploadFile Client Action
The UploadFile client action first sets values for the block’s CurrentFile local variable. This structure contains information about the file and is used later to track the upload progress.
The JavaScript element creates a new XMLHttpRequest instance for a PUT request to the given pre-signed URL.
let xhr = new XMLHttpRequest();
xhr.open('PUT', $parameters.Url, true);
xhr.setRequestHeader("content-disposition", `attachment; filename="${$parameters.FileName}"`);
/*
 * Register event handlers for progress tracking and state changes
 */
xhr.upload.onprogress = (evt) => $actions.OnTransferProgress(evt.loaded);
xhr.upload.onloadstart = () => $actions.OnTransferState('start');
xhr.upload.onload = () => $actions.OnTransferState('success');
xhr.upload.onloadend = () => $actions.OnTransferEnd();
xhr.upload.onerror = () => $actions.OnTransferState('error');
xhr.upload.ontimeout = () => $actions.OnTransferState('timeout');
xhr.upload.onabort = () => $actions.OnTransferState('abort');
/*
 * Send the binary data
 */
xhr.send($parameters.File);
OnTransferState Event Handler
This client action runs whenever an upload has:
- encountered an error
- been aborted
- timed out
- succeeded
It then updates the State variable in the CurrentFile structure accordingly.
OnTransferEnd Event Handler
After the upload is finished, whether successful or not, this client action is executed. It triggers the OnFileUploaded event with the uploaded file details as the payload and resets the CurrentFile local variable.
Saving a File Record on Upload Completion
The ClientSide screen manages the OnFileUploaded event from the widget and runs the Client_AddItem server action, which creates a new record in the File entity for the uploaded file.
Summary
Uploading an object from the browser directly to an S3 bucket involves two steps. First, create a pre-signed URL for a PUT request to a new object in the backend. Then, use JavaScript to execute the PUT request with the object's binary data.
The reference implementation is a simple example using XMLHttpRequest. There are many more advanced JavaScript file upload libraries available that you might want to consider, especially for uploading very large files.
Download
Downloading an object using a pre-signed URL directly from the browser is simple: we generate a link that redirects to the pre-signed URL of our object. However, there is one detail that must be handled for this to work properly.
When uploading a file using the method described above, we include an extra header called content-disposition with the value attachment; filename="<filename>". This header and its value are stored as metadata attached to our S3 object.
When we request that object with a GET request, the header is sent from S3 to our browser, instructing the browser to treat the content as a downloadable file rather than displaying it inline.
On the ClientSide screen in the Interface tab, double-click the GetFiles data action.
GetFiles Data Action
The GetFiles data action creates a list of results by first querying the File entity for all stored files. It then goes through each result and generates a pre-signed URL for every file record using the Client_GetDownloadUrl action.
The resulting list is shown as a table on the ClientSide screen, including a Link widget that redirects to the pre-signed URL. Because of the content-disposition header sent from S3, the browser downloads the file instead of navigating to the file location.
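As with uploads, here is a minimal sketch of what generating such a download URL looks like with the AWS SDK for JavaScript v3 (the Forge component performs the equivalent; bucket and key are illustrative). The content-disposition header stored at upload time is returned automatically; alternatively, the ResponseContentDisposition parameter can override it per request:

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const client = new S3Client({ region: "eu-central-1" });

// GET request for an existing object; the content-disposition
// metadata stored at upload time is sent back with the response.
const command = new GetObjectCommand({
    Bucket: "yourbucketname",
    Key: "uploads/report.pdf",
});

const downloadUrl = await getSignedUrl(client, command, { expiresIn: 3600 });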
Streaming
The download URL can be used not only for downloading objects but also as a source for media streaming, like in a video or audio player component. However, be cautious, as a pre-signed URL might only offer partial streaming support. This is because some media player components first make a HEAD request to the URL and then a GET request. Since a pre-signed URL is limited to one request type, this setup will not work.
Summary
Uploading and downloading files to and from an S3 bucket using pre-signed URLs is straightforward. This method avoids some of the challenges mentioned in the introductory article:
- We don't use the external logic connector actions for S3 GetObject and PutObject, which have a request or response payload limit of 5.5MB.
- We bypass the 28MB request limit from the browser to an ODC application by uploading directly to S3, allowing us to upload even larger files.
- The maximum application request timeout is not an issue because we aren't using a server or server action to perform uploads or downloads.
- Objects we want to store or retrieve are not passed through our application container, so the container's memory isn't filled with binary data.
However, the described approach has some downsides as well:
- This client-side method doesn't work when we need an object stored in S3 for an asynchronous process, like an in-app event handler or a workflow.
- The reference application relies on saving an entry to the database after the OnFileUploaded client event is triggered, which can fail. In a production environment, additional steps are necessary to ensure that File entity records in the database match all objects stored in the bucket. S3 Object events, combined with EventBridge and HttpEndpoint targets (webhooks), are a good way to achieve this, but they are beyond the scope of this article and reference application.
- When using list operations, such as creating an image gallery, generating pre-signed URLs for each item can cause some unwanted delay.
- Using pre-signed URLs provides only partial streaming support for media and documents. This is because most mature media players and document viewers first make a HEAD request to the resource before the actual GET request. A pre-signed URL is limited to one request type, so you cannot create a pre-signed URL that supports both a HEAD and a GET request.
I hope you enjoyed reading this article and that I explained the important parts clearly. If not, please let me know by leaving a comment. I invite you to read the other articles in the series about different patterns for storing and retrieving S3 objects.
Written by
Stefan Weber
As a seasoned Senior Director at Telelink Business Services EAD, a leading IT full-service provider headquartered in Sofia, Bulgaria, I lead the charge in our Application Services Practice. In this role, I spearhead the development of tailored software solutions using no-code/low-code platforms and cutting-edge cloud-ready/cloud-native solutions based on the Microsoft .NET stack. Throughout my diverse career, I've accumulated a wealth of experience in various capacities, both technically and personally. The constant desire to create innovative software solutions led me to the world of Low-Code and the OutSystems platform. I remain captivated by how closely OutSystems aligns with traditional software development, offering a seamless experience devoid of limitations. While my managerial responsibilities primarily revolve around leading and inspiring my teams, my passion for solution development with OutSystems remains unwavering. My personal focus extends to integrating our solutions with leading technologies such as Amazon Web Services, Microsoft 365, Azure, and more. In 2023, I earned recognition as an OutSystems Most Valuable Professional, one of only 80 worldwide, and concurrently became an AWS Community Builder.