AWS Boto3 Concepts Essentials

Inception

Hello everyone! This article is part of The Python Series, and it doesn't depend on any previous articles. I use this series to publish Python + Boto3 projects and knowledge.


Overview

Boto3 is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python, which allows Python developers to write software that makes use of services like Amazon S3 and Amazon EC2. You can find the latest documentation, including a list of supported services, at the official doc site.

Boto3 is maintained and published by Amazon Web Services.

boto3 · PyPI

Boto3 is a software development kit that enables you to interact with AWS service APIs in order to create, delete, modify, and fetch resource attributes, among other actions. In other words, it enables you to write code that precisely meets your needs.

Today’s article discusses the essential concepts required to get started with the Boto3 SDK by navigating the Boto3 documentation.

Get ready by installing boto3 locally, or use a Google Colab notebook.


Documentation overview

When you open the Boto3 documentation, you will notice there are multiple sections further down. However, we are interested here in figuring out how to work with the AWS service APIs.

You will realize that there are plenty of services listed, as seen in the screenshot below; each service has multiple call types, e.g. client, resource, paginator, and waiter.

Scroll down to discover S3 Service Calls below:

💡
Call concepts are the same for all services.
  • Client
    Provides low-level access to the API offered by a specific service (S3 in our case).
    By using the Client call, you make a direct API request to that service.

  • Resources
    Provides higher-level, object-oriented access to the API offered by a specific service (S3 in our case).
    When you work with resources, you can use predefined methods and attributes that make it easier to manage and manipulate these entities without dealing directly with the underlying API calls.

  • Paginators
    Some API operations return large result sets split across multiple pages. A paginator automatically makes the subsequent API calls needed to retrieve all of the data.

  • Waiters
    Waiters in Boto3 are used to wait for a certain condition or state to be met in an AWS resource before proceeding with further actions.

We will discover them one by one next.

Which one should you choose?

  • You can almost always achieve what you want with a client() call and then extra custom code.

  • Consider trying the higher-abstraction resource() call first before going all the way down to the client.

  • Different AWS services lend themselves to Paginators or Waiters, depending on the context and the task at hand.


S3 Client call

eraki Code Sample

Let’s delve deeper into the Client call for the S3 service. Navigate to the Boto3 docs, then the S3 service.

The S3 service documentation page is divided into multiple sections covering all the calls for that service, e.g. Client, Paginator, Waiters, and Resources.

💡
It’s the same format for any other AWS service.
  • The first section is the Client section. The docs provide a brief overview of the concept, then show how to use it by calling client('s3').

  • At this point, you have only determined the AWS service and the type of call you're going to use to interact with it.

  • Scroll down to discover the Client methods available to you. As you can see below, there are plenty of them.

Let’s Discover Client call in practice:

  • Open up your code editor and start calling the S3 client.
💡
If you are working locally, as I am, be sure to configure your AWS credentials profile. If you're using the default profile, you can create the S3 client with boto3.client('s3'). If you're using a named profile, follow the steps below.
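For reference, named profiles live in the ~/.aws/credentials file; the profile name and key values below are placeholders, not real credentials:

```ini
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

[profile_name]
aws_access_key_id = AKIA...
aws_secret_access_key = ...
```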
  • Configure AWS profile session.
import boto3

# configure AWS profile session
profile_session = boto3.Session(profile_name='profile_name')
  • Configure Client call
# Configure client
s3_client = profile_session.client('s3')
  • Now we are ready to interact with the S3 service using the Client call. Navigating to the S3 Client documentation, there's a create_bucket method; let's discover it.

  • Opening the create_bucket method page gives you plenty of information on how to use it, along with notes and instructions.

  • Scroll down to the Request Syntax section to see how to get started with this method.

The Request Syntax provides you with all the Parameters you can use with this method. Some of them are required, and the others are not.

  • Scroll down to Parameters to discover what is required for this method. You will find that the only required parameter is Bucket, which makes sense.

  • Great, we've discovered the minimum requirements to use the create_bucket method; let's create a bucket.
bucket_name = 'eraki-us1-dummy-1001'

# Create a bucket
s3_client.create_bucket(
    Bucket=bucket_name,
    ObjectLockEnabledForBucket=True
)
Passing additional parameters after the bucket has been created will not take effect; you should specify all parameters during creation.
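One extra detail worth knowing about create_bucket: outside us-east-1, it also needs a CreateBucketConfiguration with a LocationConstraint. A small sketch of building the parameters conditionally (the bucket name and region below are illustrative):

```python
def build_create_bucket_params(bucket_name, region):
    """Build create_bucket kwargs; us-east-1 must omit LocationConstraint."""
    params = {'Bucket': bucket_name}
    if region != 'us-east-1':
        params['CreateBucketConfiguration'] = {'LocationConstraint': region}
    return params

params = build_create_bucket_params('eraki-us1-dummy-1001', 'eu-west-1')
print(params)
# With a configured client you would then call:
# s3_client.create_bucket(**params)
```

us-east-1 is the one region where the parameter must be omitted, which is why the helper special-cases it.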
  • Let’s create and upload a file there.
# create a txt file
with open('upload.file', 'w') as file:
    file.write('This is upload file content')
  • Get back to the S3 Client methods. There's a method called upload_file; let's check how to use it.

  • Its usage is pretty handy; let's write our code:
# s3 upload file
s3_client.upload_file(
    Filename='upload.file',  # local file name
    Bucket=bucket_name, 
    Key='uploaded.file'  # file name in the bucket
)
💡
Creating a bucket and uploading to it in the same script run may not work, because the bucket may not be ready yet. Separate them, run the lines one by one, or use waiters.

Awesome! So far, we have discovered how to use the Client call and some of its methods, e.g. create_bucket and upload_file. If you want more, check out: eraki code sample - S3 client call, and list_bucket.


S3 Resource call

Boto3 Resources allow you to interact with AWS using an object-oriented programming approach.

Whereas a client exposes a large number of low-level methods, a resource does not. Think of a resource as a set of predefined methods and attributes that make your life easier when dealing with AWS resource APIs, in our case, S3.

You can achieve everything you want with the Client call. However, Resources provide a high-level, easier way to handle some tasks. I prefer to try the Resource call first; if it's not capable of what you want, then go with client().

Let’s discover how to use the Resource call:

  • Go to s3 / Resources section / service resources / look for collections.

  • Open buckets collections to see the available collections.

  • Use the code sample below to list buckets using resource

import boto3

session_profile = boto3.Session(profile_name='eraki')
s3_resource = session_profile.resource('s3')

buckets_list = s3_resource.buckets.all()
print(buckets_list)  # the result is a lazy collection; wrap it in list() to print its items
#s3.bucketsCollection(s3.ServiceResource(), s3.Bucket)
# ^ buckets collection
#                     ^ using the service resource
#                                               ^ retrieves s3.Bucket objects
print(list(buckets_list))
💡
Store the collection in a variable (buckets_list above) to be able to list and print it.
  • Use dir to list all resource attributes.
print(dir(s3_resource.buckets))
  • Use the code below to iterate over the bucket names

buckets_list = s3_resource.buckets.all()
print(buckets_list)
print(list(buckets_list))
for bucket in buckets_list:
    print(f'Bucket Name: {bucket.name}')
  • Use the code below to upload a file.
# upload a file
with open('upload.file', 'w') as file:
    file.write('This is upload file content')
bucket_file_store = s3_resource.Bucket(name='bucketname')
print(bucket_file_store) 
bucket_file_store.upload_file(Filename='upload.file', Key='uploaded.file')
💡
To list the attributes of the Bucket object (the bucket_file_store variable), use the following command:
# list all the bucket_file_store attributes 
print(f'\n## var attributes: {dir(bucket_file_store)}', end="\n")
  • Download a file

# Download a file
bucket_file_store.download_file(Filename='downloaded.file', Key='uploaded.file')
  • Filter the output
# list files / with filters
print(dir(bucket_file_store.objects))  # list all available attributes
print(list(bucket_file_store.objects.all()))
print(list(bucket_file_store.objects.filter(Prefix="upload")))  # start with upload

S3 Paginator call

Some API operations return large result sets split across multiple pages. A paginator automatically makes the subsequent API calls needed to retrieve all of the data.

For example, paginators can list objects. We have already seen other ways to do that; a paginator, however, returns the full set of results, including each object's properties.

Note that paginators are built on top of the Client call.
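To make the mechanics concrete, here is a rough pure-Python sketch of what a paginator does internally; fake_list_objects is an invented stand-in for a real list_objects_v2 call:

```python
# A paginator keeps calling the underlying API operation, passing each
# response's continuation token back in, until no token is returned.
def fake_list_objects(ContinuationToken=None):
    # Invented stand-in for s3_client.list_objects_v2 (two pages of keys).
    pages = {
        None: {'Contents': ['a.txt', 'b.txt'], 'NextContinuationToken': 't1'},
        't1': {'Contents': ['c.txt']},
    }
    return pages[ContinuationToken]

def paginate(api_call):
    token = None
    while True:
        page = api_call(ContinuationToken=token)
        yield page
        token = page.get('NextContinuationToken')
        if token is None:
            return

all_keys = [key for page in paginate(fake_list_objects) for key in page['Contents']]
print(all_keys)  # ['a.txt', 'b.txt', 'c.txt']
```

The real paginate() call in Boto3 runs the same loop for you and yields each page as a dictionary.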

Code explanation

  • Configure Paginator call by selecting a list_objects_v2.
import boto3

session_profile = boto3.Session(profile_name='eraki')
s3_paginator = session_profile.client('s3').get_paginator('list_objects_v2')
  • Result:
print(s3_paginator)
<botocore.client.S3.Paginator.ListObjectsV2 object at 0x7fc732cfd8b0>
                    ^ it is already the paginator

The returned object is already the paginator, so there's no need to build it again as a separate step the way the docs show (client first, then paginator).

  • List iteration
# list iteration
paginate_result = s3_paginator.paginate(
    Bucket='bucketname'
)

print(list(paginate_result))
  • Check the entire script below:
import boto3
import json

session_profile = boto3.Session(profile_name='eraki')
s3_paginator = session_profile.client('s3').get_paginator('list_objects_v2')

# list iteration
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/paginator/ListObjectsV2.html
paginate_result = s3_paginator.paginate(
    Bucket='bucketname'
)

print(list(paginate_result))
print(f"\n\n{json.dumps(list(paginate_result), indent=4, default=str)}")  # format the result with the json lib

S3 Waiters call

Waiters in Boto3 are used to wait for a certain condition or state to be met in an AWS resource before proceeding with further actions.

The concept is already clear from the definition, so let's write our script directly.

import boto3

session_profile = boto3.Session(profile_name='eraki')
s3_waiter = session_profile.client('s3').get_waiter('bucket_exists')

bucket_name = 'thisbucketdoesnotexistyet1001'

print(f'Waiting for bucket: {bucket_name}')
s3_waiter.wait(
    Bucket=bucket_name,
    WaiterConfig={
        'Delay': 10,  # 10 Sec
        'MaxAttempts': 40  # 40 Attempts
    }
)
print(f'Bucket {bucket_name} has been made.')  # waits until the bucket exists

This script will wait for up to Delay × MaxAttempts seconds (10 s × 40 attempts here). While it is waiting, go and create a bucket with that name; the script will then print that the bucket has been made.
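Under the hood, a waiter is essentially a polling loop: call the API, check the state, sleep for Delay seconds, and repeat up to MaxAttempts times. A rough pure-Python sketch (the bucket_exists check below is simulated, not a real AWS call):

```python
import time

def wait_until(check, delay=0.01, max_attempts=5):
    """Poll `check` every `delay` seconds; give up after `max_attempts` tries."""
    for attempt in range(1, max_attempts + 1):
        if check():
            return attempt  # condition met on this attempt
        time.sleep(delay)
    raise TimeoutError('condition was not met in time')

# Simulate a bucket that only "exists" from the third check onwards.
state = {'checks_left': 3}
def bucket_exists():
    state['checks_left'] -= 1
    return state['checks_left'] <= 0

print(wait_until(bucket_exists))  # succeeds on the 3rd attempt
```

Boto3's waiters wrap exactly this pattern, with the check implemented as a real API call such as head_bucket.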


S3 Advanced call

Let’s try out some advanced call examples below

  • First, configure the profile session.
import boto3
import json

session_profile = boto3.Session(profile_name='eraki')
advance_call = session_profile.client('s3')
  • List all attributes.
# list all attributes
print(dir(advance_call), end="\n\n")
  • List lifecycle attributes
print(dir(advance_call.get_bucket_lifecycle), end="\n\n")
  • get bucket lifecycle
# get bucket lifecycle
print(json.dumps(advance_call.get_bucket_lifecycle
                     (Bucket='bucketname'), 
                     indent=4, default=str
                 ), 
       end="\n\n")
  • Another example: generate a presigned URL for an object with an expiration set to 2 minutes.

# generate a presigned URL
print('\n Bucket presigned url')
upload_file_url = advance_call.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'bucketname',
        'Key': 'upload.file'
    },
    ExpiresIn=120  # 2 MIN
)
print(upload_file_url)

That's it, very straightforward, very fast 🚀. I hope this article inspired you, and I'd appreciate your feedback. Thank you!


Written by Mohamed El Eraki