
Setting Up Python Serverless Framework
Installing Serverless Framework
If you haven't already, install the Serverless Framework by running the following command:
npm install -g serverless
This command installs the Serverless Framework globally on your machine.
Initializing a Serverless Project
Create a new directory for your project and navigate into it, then generate a Serverless template by running the following command:
serverless create --template aws-python3 --path my-langchain-project
This command scaffolds a Serverless project with the Python 3 runtime.
Dealing with Lambda Size Constraints
Installing Langchain and Lambda Size Increase
As you develop your Langchain application, you may run into Lambda's size limits, especially after adding the Langchain library. Installing Langchain can significantly increase the deployment package size, and zip-based Lambda deployments are capped at 250 MB unzipped, so this can lead to deployment failures.
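One quick way to see how close a build is to the limit is to measure the unpacked size of the dependencies before deploying. The sketch below is illustrative (the helper name and the measured path are our own choices); point it at your project virtualenv's site-packages directory before zipping:

```python
import os
import sysconfig

def unpacked_size_mb(path):
    """Recursively sum the sizes of all regular files under path, in MB."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                continue  # skip broken symlinks and unreadable files
    return total / (1024 * 1024)

# Point this at the site-packages directory of your project's virtualenv
# before packaging; the current interpreter's purelib path is a stand-in here.
site_packages = sysconfig.get_paths()["purelib"]
print(f"Unpacked dependency size: {unpacked_size_mb(site_packages):.1f} MB")
```

If the total creeps toward 250 MB, a zip-based deployment is likely to fail.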
pip install langchain
Attempting Lambda Layers
A common method for handling large dependencies is Lambda Layers. Unfortunately, in our case, moving Langchain into a Lambda Layer proved ineffective: layer contents count toward the same 250 MB unzipped limit as the deployment package, so the deployments kept failing.
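For reference, the layer-based attempt looked roughly like the fragment below (the layer name and directory are placeholders); even packaged this way, the layer still counts toward the 250 MB unzipped limit:

```yaml
layers:
  langchainDeps:
    path: layer            # directory containing python/lib/python3.11/site-packages
    compatibleRuntimes:
      - python3.11

functions:
  hello:
    handler: handler.hello
    layers:
      - { Ref: LangchainDepsLambdaLayer }
```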
Docker Image to the Rescue
To overcome the Lambda size limitations, we found a workaround using Docker images: Lambda functions packaged as container images can be up to 10 GB, giving us the headroom to handle larger deployment packages.
Beyond the extra headroom, container images encapsulate the function together with its dependencies, so the same environment runs locally and in AWS. This consistency simplifies both development and deployment.
Note: Docker must be installed on your local system to build and deploy the image.
Updated Serverless Framework Configuration
Below is an updated version of the serverless.yml file, incorporating the use of a Docker image:
Code:
service: your_service_name
frameworkVersion: "3"

provider:
  name: aws
  runtime: python3.11
  architecture: arm64
  stage: ${opt:stage, dev}
  region: ${opt:region, self:custom.default.region}
  memorySize: 2048
  timeout: 120
  logs:
    restApi: true
  deploymentBucket:
    name: ${self:service}-${self:provider.stage}
  ecr:
    images:
      your_doc_name:
        path: ./
  environment:
    SERVICE_NAME: ${self:service}
    STAGE: ${self:provider.stage}

custom:
  default: ${file(path_to_custom_config_file_for_different_environments):default}
  stages:
    - dev
    - uat
  pythonRequirements:
    dockerizePip: false
    invalidateCaches: true
    useDownloadCache: false

plugins:
  - serverless-python-requirements
  - serverless-deployment-bucket

functions:
  hello:
    image:
      name: your_doc_name
    events:
      - http:
          path: /hello
          method: GET
Explanation:
- service - Specifies the name of your Serverless service or application
- frameworkVersion - Specifies the version of the Serverless Framework being used
- provider - Configures the cloud provider and runtime settings for your Serverless service. Key configurations:
- name: Specifies the cloud provider (e.g., AWS)
- runtime: Defines the runtime for your Lambda functions (e.g., Python 3.11)
- architecture: Sets the instruction set architecture for your Lambda functions. The default is x86_64; arm64 runs on AWS Graviton processors and is the natural choice when you build the Docker image on an Apple Silicon (arm64) machine
- stage: Specifies the deployment stage (default: dev)
- region: Specifies the AWS region
- memorySize: Sets the memory allocated to Lambda functions
- timeout: Sets the maximum execution time for Lambda functions
- logs: Configures logging settings, enabling logs for the REST API
- deploymentBucket: Defines the S3 bucket used for deployment artifacts
- ecr: The ecr (Elastic Container Registry) configuration specifies Docker images for your Lambda functions. The images section allows you to define different images for various functions
- your_doc_name: Represents the name of your Docker image. You can replace this with a meaningful name for your application
- path: ./: Specifies the path to the directory containing your Dockerfile. In this case, it's set to the current directory (./), indicating that the Dockerfile is present in the same directory as the serverless.yml file
- environment: Defines environment variables accessible to Lambda functions
- custom - Contains custom configurations for your Serverless service. Key configurations:
- default: Loads default configuration from a JSON file based on the deployment stage
- stages: Lists the available deployment stages
- pythonRequirements: Configures options for handling Python requirements
- plugins - Lists the Serverless Framework plugins used in the project
- functions - Defines the Lambda functions in your service. Key configurations:
- Function Name: Represents the name of the Lambda function
- Image Configuration: Specifies the Docker image details for the Lambda function
- Events: Defines events that trigger the Lambda function, such as HTTP events in this example
This serverless.yml file provides a comprehensive configuration for deploying a Serverless Langchain application using AWS Lambda with containerized functions. It incorporates settings for the cloud provider, deployment stages, custom configurations, and function definitions.
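The `hello` function above is backed by a Python handler module inside the Docker image. A minimal sketch of what that module might contain is shown below (the module name `handler` and function name `hello` are assumptions that match the `handler.hello` entry point used in this configuration):

```python
import json

def hello(event, context):
    """Minimal Lambda handler returning an API Gateway-compatible response."""
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello from Lambda!"}),
    }

# Local smoke test: Lambda passes an event dict and a context object;
# neither is used here, so placeholders are fine.
if __name__ == "__main__":
    print(hello({}, None)["statusCode"])
```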
Dockerfile (Updated)
Here is the updated Dockerfile that builds the Lambda container image, overcoming the Lambda size constraints.
Code:
# Pull base lambda python3.11 docker image
FROM public.ecr.aws/lambda/python:3.11

# Copy python requirements file
COPY requirements.txt ${LAMBDA_TASK_ROOT}

# Install the python packages
RUN pip install -r requirements.txt

# Copy function code
COPY src/* ${LAMBDA_TASK_ROOT}

# Set the CMD to your handler
CMD [ "handler.hello" ]
Explanation:
This Dockerfile now includes the requirements.txt file and installs the necessary Python packages, addressing the Lambda size increase caused by the Langchain library.
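For completeness, the requirements.txt copied into the image lists the Python dependencies baked into the container. The exact contents depend on your application; a minimal example might look like the following (versions unpinned here — pin them for reproducible builds):

```text
langchain
boto3
```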
Execution details
For a successful execution, follow these detailed steps:
- Create a Virtual Environment:
python3.11 -m venv .env_name
Running `pip install` outside a virtual environment installs packages system-wide. Creating a virtual environment instead gives the project a clean, isolated space for its dependencies, preventing conflicts and keeping the development environment under control.
- Activate the Virtual Environment:
source .env_name/bin/activate
- Deactivate the Virtual Environment (no need to execute now):
deactivate
- Configure AWS CLI with the Desired Profile:
aws configure list-profiles
export AWS_PROFILE="your_aws_profile_name"
- Install Python Dependencies:
pip install -r requirements.txt
Executing this command installs all packages specified in the `requirements.txt` file into our Python virtual environment. This ensures that our project has the necessary dependencies isolated within the virtual environment, promoting consistency and ease of management.
- Deploy the Serverless Application:
serverless deploy --stage dev --aws-profile your_profile_name
Conclusion
By leveraging the Serverless Framework and Docker images, we successfully addressed Lambda size constraints when using Langchain in our Serverless application. The combination of Serverless Framework and Docker provides a powerful solution for building and deploying scalable and complex applications. Happy coding!