"Unzipped size must be smaller than 105288401 bytes (Service Lambda..." on redeployment


I've been experiencing this issue for a couple of months now. Since I have confirmed it is not an issue on my side, I am taking it to AWS to seek a fix, as it really obstructs our development process.

I am using the Serverless Framework to deploy an application through CloudFormation. In this process the files are packaged and uploaded to AWS. There are certain limits on package sizes for Lambda functions, and exceeding them results in a failed deployment. In my case the deployments fail even though the limits are not exceeded.
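For context, the limit Lambda enforces here is on the *unzipped* size of the deployment package. A quick sanity check on a locally built artifact, before deploying, might look like the following sketch (the zip path is a placeholder for whatever `serverless package` produces in your project):

```python
import zipfile

# Lambda rejects a function whose unzipped size (package plus all layers)
# exceeds 262,144,000 bytes (250 MB). Error messages may report a lower,
# account- or request-specific figure.
UNZIPPED_LIMIT = 262_144_000

def unzipped_size(zip_path):
    """Sum the uncompressed sizes of all entries in a deployment zip."""
    with zipfile.ZipFile(zip_path) as zf:
        return sum(info.file_size for info in zf.infolist())

# Hypothetical artifact path produced by `serverless package`:
# size = unzipped_size(".serverless/my-service.zip")
# print(size, size < UNZIPPED_LIMIT)
```

Note this only measures your own package; layers attached to the function count toward the same quota on the AWS side.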

I regularly make changes to my app and redeploy my stack. On most days it will work without an issue, but on some days it won't. There are certain timeframes during which deploying the exact same stack, without any changes, will not work at all. One of these timeframes is the past 16 hours. 16.5 hours ago I deployed my stack without issues after adding an AWS SNS topic. Half an hour later I tried redeploying with a new Lambda function 'NewLambdaFunction' to feed this topic. The redeployment failed with the error

UPDATE_FAILED: MyLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "Unzipped size must be smaller than 105288401 bytes (Service: Lambda, Status Code: 400, Request ID: ...)" (RequestToken: ..., HandlerErrorCode: InvalidRequest)

As you can see, the update did not fail due to the creation of my 'NewLambdaFunction'; instead it references a previously created Lambda function, 'MyLambdaFunction', as being too big. This is obviously not the issue, as the deployment worked before and no changes were made to 'MyLambdaFunction'. In an attempt to find out where the issue lies, I removed 'NewLambdaFunction' from the CloudFormation template and tried redeploying the exact same stack that had been deployed 30 minutes earlier without issue.

From here on I keep getting the error "Unzipped size must be smaller than ...". Upon commenting out 'MyLambdaFunction' and retrying, the same error gets thrown, but for another function, 'MyOtherLambdaFunction'. The same happens with that function commented out, and then the next. I keep playing this game of "destroying" my stack until at some point it starts working again.

The issue here is that the errors being thrown are obviously incorrect and obstruct the debugging process. The other issue is that this has been happening at random intervals for months now (e.g. the same issue on Sep. 17th). The last time this happened I gave up and tried redeploying the same stack the next day. It then worked without issues.

I am expecting this to work again tomorrow but I am not satisfied with the solution "if it happens you should wait and it goes away".

Is this connected to some maintenance work on AWS? Why does this happen regularly?

Thank you very much in advance.

  • Are you using Lambda Layers as part of your function configuration?

  • Yes, I am using multiple Lambda Layers for each function to reduce deployment size. Today is one of those days where deployment doesn't work. No changes made.

  • Are any of the version numbers of the referenced layers changing between deployments?

2 Answers


Are you using Python in your Lambda?

If yes, I suggest you read this excellent post: https://medium.com/@jolodev/my-pain-with-serverless-and-aws-lambda-52278429ae33

It explains very well how your total size includes not only your code but also the standard packages deployed by the Lambda runtime (some of them as layers). It also explains how to avoid deploying them when you don't need them.

The size of those packages changes as they get updated to new versions. They may get smaller or bigger depending on the goals of the release (optimization vs. new features). So, when they get bigger, the total size may exceed the limit imposed by Lambda, and you get the message that you report.

So, the advice is to configure your Lambda runtime with the minimum set of packages so that you stay away from this limit.
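In Serverless Framework terms, one way to keep the package minimal is to exclude everything by default and opt specific files back in. A sketch of such a configuration (the paths are illustrative, not from the asker's project):

```yaml
# serverless.yml (fragment) -- package only what each function needs
package:
  individually: true
  patterns:
    - '!**'              # exclude everything by default
    - 'src/handler.js'   # then include only the required files
    - 'src/lib/**'
```

With `individually: true`, each function gets its own zip, so one oversized dependency does not inflate every function's package.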



AWS
answered 7 months ago
  • Thanks for the response! This is a good tip but does not explain why deploying the exact same thing normally works and on some days doesn't. I am mainly using Node.js, plus some small Python functions that do not play a role in this issue.


This issue is most likely caused by your use of Lambda Layers. Per the Lambda quotas documentation, the maximum unzipped size of a function, including all layers and/or custom runtimes, is 250 MB. The number of bytes reported in the error message includes not just your ZIP file but also the layers your function is using.

I suspect that whatever layers are being included alongside your function are changing between deployments, and the total size of your function and its layer and runtime dependencies is running close to the maximum size.
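To check whether layer drift is pushing a function past the quota, you can add up the per-artifact unzipped sizes and compare against the limit. A minimal sketch, with made-up numbers (in practice you would measure the unzipped size of each downloaded artifact yourself, since the `CodeSize` fields the Lambda API returns are compressed sizes):

```python
# Lambda's unzipped-size quota for a function plus all of its layers.
TOTAL_UNZIPPED_LIMIT = 262_144_000  # 250 MB

def total_unzipped_size(function_size, layer_sizes):
    """Combined unzipped size of a function package and its layers."""
    return function_size + sum(layer_sizes)

# Hypothetical unzipped sizes for one function and two layers:
total = total_unzipped_size(40_000_000, [120_000_000, 90_000_000])
print(total, total < TOTAL_UNZIPPED_LIMIT)  # 250000000 True -- but barely
```

A function sitting this close to the quota will start failing as soon as any one of its layers grows by a few megabytes, which matches the "works some days, not others" pattern the asker describes.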

answered 6 months ago
