In this post, we are going to see how we can use existing AWS infrastructure to create thumbnails for uploaded images. We are going to use an S3 trigger to execute a Lambda function that does whatever we want. In this case, we'll create a thumbnail so that we can save some bandwidth when only a smaller version of an image is required on the frontend.
So let’s see the pre-requisites:
- Some familiarity with S3
- Some familiarity with Lambda
- Some familiarity with the AWS CLI
Although the tutorial can be followed through the AWS Console, I've decided to explore the AWS CLI, as it can get things done quicker in some cases.
This post is crudely divided into 2 sections:
- Test the trigger (you are here)
- Make the thumbnail
While you read this post, take a moment to connect with me on LinkedIn.
As an overview of this section: we'll check whether the trigger is invoked properly when a file is uploaded to a bucket, and we'll do the initial setup needed to achieve that.
Let’s move to the next subsection.
Create a source bucket
This is the only bucket we'll create in the Test the trigger section. Later on, in the Make the thumbnail section, we will create another bucket.
$ aws s3api create-bucket --bucket source-bucket-sntshk --region ap-south-1 --create-bucket-configuration LocationConstraint=ap-south-1
Let's go through the above command's flags one by one:
- `aws` is the command-line interface to the Amazon Web Services API. Each service in the AWS toolbelt is available as a subcommand.
- `s3api` is the subcommand used to, among many other things, create and delete buckets.
- `create-bucket` is the action under `s3api`; the rest of the flags are options to `create-bucket`.
- `--region ap-south-1` sets the region for the bucket. Note: please take note of this region. We'll have to create the Lambda function in the same region for this experiment to work.
- `--create-bucket-configuration LocationConstraint=ap-south-1` is just another required flag in my case. You may not have to pass it if you are creating the bucket in the us-east-1 region.
Once the command finishes executing, it will spit out some output. Ignore this for now.
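For reference, the output is a small JSON document along these lines (the exact `Location` format can differ by region):

```json
{
    "Location": "http://source-bucket-sntshk.s3.amazonaws.com/"
}
```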
Upload a test object to bucket
After we are done with creating the bucket, let’s put a test object in the bucket before proceeding to the Lambda trigger part.
$ aws s3api put-object --bucket source-bucket-sntshk --key TestImage.jpg --body TestImage.jpg
Explanation of the command:
- `put-object` uploads an individual object to the given bucket.
- `--bucket source-bucket-sntshk` provides the name of the bucket.
- `--key TestImage.jpg` is the name the file will have on the S3 side.
- `--body TestImage.jpg` is the path of the file on our local system.
On a successful upload, you will see an "ETag" in the output.
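The output looks roughly like this (your ETag value, a checksum of the object's content, will differ):

```json
{
    "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\""
}
```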
Create the Lambda function
While creating a Lambda function is fairly simple, its prerequisites are not, because we need to give our function a role so that it is able to communicate with other AWS services. This is one case where I refrain from using the AWS CLI; things are quicker with the console here.
Step 1: Go to Lambda console page and push the button called Create function.
Step 2: Choose the option Use a blueprint. This is the reason we are not using the CLI: it does not give us a way to use blueprints. With this radio button checked, search for s3 and choose the Python option (you can choose Node.js if you like, all the best). Now click Configure.
Step 3: Type in a function name in the Function name section. And in Execution role section, choose Create a new role with basic Lambda permission. You may choose Use an existing role if you are following this tutorial for the second time.
What a role does, essentially, is give our Lambda function permission to do certain things on other AWS services. A role can have multiple policies attached to it. The role used in our context does these two things:
- One allows it to read new files that have been uploaded to the S3 bucket.
- The other is for analytical purposes: uploading logs to CloudWatch.
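As a sketch, a minimal policy document covering those two permissions might look like the following. Note that this is illustrative, not the exact policy the console generates; the blueprint-created role will differ in its statement details and resource ARNs:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::source-bucket-sntshk/*"
        }
    ]
}
```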
Step 4: In the next section, we see settings related to S3 triggers. Choose the bucket we created in the earlier section. For event type, I am selecting All object create events. The Prefix and Suffix options we'll leave blank, but they are pretty self-explanatory and you can set them if you like.
Before pushing the button Create function, you can review the sample code that is provided by the blueprint.
Step 5: Press the Create function button.
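For a rough idea of what the blueprint's sample code does, here is a simplified sketch (not a verbatim copy of the generated code): it pulls the bucket name and object key out of the incoming S3 event. Object keys arrive URL-encoded in the event payload, hence the `unquote_plus` call.

```python
import urllib.parse

def lambda_handler(event, context):
    # The S3 trigger delivers one or more records; each describes
    # the bucket and the object that caused the event.
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    # Object keys are URL-encoded in the event payload (e.g. "test%2Fkey").
    key = urllib.parse.unquote_plus(record['s3']['object']['key'])
    print(f"New object in {bucket}: {key}")
    return {'bucket': bucket, 'key': key}
```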
Test the function manually
Time to test if our function is really working before we proceed to the next step. Follow the steps below:
Note: The GUI might change in the future, but I suppose there will always be an option to "Test" as long as there is an option to view code in the console.
- Navigate to your Lambda function. Scroll down until you see tabs like "Code", "Test", "Monitor", etc. You should be on the "Code" tab by default. That's the tab we need to be in.
- You should see an orange button named "Test" with a caret symbol (a downward-facing arrowhead). On clicking it, you should see an option called Configure test event; click that.
- You will be presented with a dialog. Select Create test event, then in Event template, select s3-put, and then give this test a name.
- Now in the event JSON, replace the occurrence of `example-bucket` with the name of the bucket we created in the Create a source bucket section, and replace the occurrence of `test%2Fkey` with the key of the test object we uploaded in the Upload a test object to bucket section. Then push the orange "Create" button.
Here is a little gif to show steps 2-4.
When tested, you should see Status: Succeeded on the upper right of the execution results.
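For reference, the relevant portion of the s3-put template after the substitutions looks roughly like this (heavily trimmed; the real template has many more fields):

```json
{
    "Records": [
        {
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {
                    "name": "source-bucket-sntshk"
                },
                "object": {
                    "key": "TestImage.jpg"
                }
            }
        }
    ]
}
```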
Test the function by uploading something
Please note: if you got an AccessDenied error in the previous section, skip ahead to the next section.
Now that our manual test passes, we should check whether the function is invoked when a new object is uploaded to the bucket. Let's do that.
$ aws s3api put-object --bucket source-bucket-sntshk --key TestImage.png --body .\Screenshot_20201108_004543.png
This command adds a new object to the bucket.
You can see the function invocation in the "Monitor" tab of our Lambda function. If you don't see it, try adjusting the time window to something like the last 15 minutes.
You can also verify that the function executed by pressing the View logs in CloudWatch button and checking the latest log to see whether it threw an error or executed successfully.
Extra: What the AccessDenied
Question: Hi Santosh, I don't see a success message; instead I see an AccessDenied message in the Execution results. Why?
Answer: Make sure the role which Lambda created has two policies attached to it: one for writing logs to CloudWatch and another for reading from the S3 bucket. As part of the exercise, I deliberately delete the S3 policy and re-attach it in the gif below.
In this post, we checked how we can execute a piece of code when something happens on S3. There is a plethora of services which can be used with Lambda.
In the next post, we will see how we can create a thumbnail based on the work we did in this post.
If you liked this post, please share it with your network. Subscribe to the newsletter below for similar news.