Detailed process of FastAPI deployment on Docker

Docker Learning

https://www.cnblogs.com/poloyy/p/15257059.html

Project Structure

.
├── app
│ ├── __init__.py
│ └── main.py
├── Dockerfile
└── requirements.txt
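The contents of requirements.txt are not shown in the article; for the example application a minimal file only needs FastAPI and Uvicorn (the version pins below are illustrative):

```text
fastapi>=0.68.0,<0.69.0
uvicorn[standard]>=0.15.0,<0.16.0
```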

FastAPI application main.py code

from typing import Optional

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "q": q}
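FastAPI route handlers are plain Python functions, so the JSON bodies the two endpoints return can be sketched without starting a server. The snippet below duplicates the handlers from main.py without the FastAPI decorators, so it runs with no third-party packages installed:

```python
from typing import Optional

# Copies of the route handlers from main.py, minus the @app.get
# decorators, so this file runs standalone.
def read_root():
    return {"Hello": "World"}

def read_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "q": q}

# These dictionaries are serialized to the JSON the endpoints return.
print(read_root())                  # {'Hello': 'World'}
print(read_item(5, q="somequery"))  # {'item_id': 5, 'q': 'somequery'}
```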

Dockerfile

# 1. Start from the official Python base image
FROM python:3.9

# 2. Set the current working directory to /code;
# this is where the requirements.txt file and the app directory go
WORKDIR /code

# 3. Copy the requirements.txt file first. Since this file does not change
# often, Docker can reuse the cached layer for this step, which also keeps
# the cache valid for the next step
COPY ./requirements.txt /code/requirements.txt

# 4. Run pip to install the dependencies
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

# 5. Copy the FastAPI application code
COPY ./app /code/app

# 6. Run the service
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]

Step 4: Run pip command analysis

RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

  • The --no-cache-dir option tells pip not to keep the downloaded packages in a local cache; that cache only helps if pip will install the same packages again later, which is not the case inside a container build
  • --no-cache-dir only affects pip, not Docker or containers
  • The --upgrade option tells pip to upgrade packages that are already installed
  • Because the file copied in the previous step rarely changes, Docker can often serve that step from its cache, which in turn lets it serve this step from the cache as well
  • Reusing the cache here saves a lot of time when the image is rebuilt again and again during development, instead of downloading and installing all dependencies on every build

Docker Cache

There is an important trick in this Dockerfile: copy only the dependency file first, not the FastAPI application code

COPY ./requirements.txt /code/requirements.txt
  • Docker and similar tools build container images incrementally, adding one layer on top of another
  • Starting from the top of the Dockerfile (the first line), each instruction creates a new layer in the image
  • Docker and similar tools also use an internal cache when building images
  • If a file has not changed since the last time the image was built, Docker reuses the layer created last time instead of copying the file again and creating a new layer from scratch
  • Avoiding one file copy does not in itself save much, but because that step hits the cache, the next step can hit the cache too
  • For example, the cache can then be used for the instruction that installs the dependencies

RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

  • requirements.txt does not change often, so by copying only this file first, Docker can use the cache for that step
  • Docker can then also use the cached layer for the step that downloads and installs the dependencies, and this is where most of the time is saved
  • Downloading and installing the dependencies can take several minutes, but with the cache it takes only seconds
  • Since the image is rebuilt again and again during development to check code changes, the accumulated time savings are large
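The caching behaviour described above can be illustrated with a toy model. This is a deliberate simplification (real Docker keys each layer on the parent layer, the instruction text, and checksums of copied files), but it captures the two rules that matter here: an unchanged step is reused, and one cache miss invalidates every step after it:

```python
import hashlib

def build(steps, cache):
    """Toy model of Docker's layer cache.

    steps: list of (instruction, file_bytes) pairs in Dockerfile order.
    A layer is reused only if its key was seen in a previous build AND
    every layer before it was also a cache hit (a miss busts the rest).
    """
    results = []
    cache_valid = True
    for instruction, file_bytes in steps:
        key = hashlib.sha256(instruction.encode() + file_bytes).hexdigest()
        if cache_valid and key in cache:
            results.append("CACHED")
        else:
            cache_valid = False  # everything after a miss is rebuilt
            cache.add(key)
            results.append("BUILT")
    return results

cache = set()

# First build: every layer is built from scratch.
v1 = [("COPY requirements.txt", b"fastapi\nuvicorn\n"),
      ("RUN pip install -r requirements.txt", b""),
      ("COPY ./app", b"app code v1")]
print(build(v1, cache))  # ['BUILT', 'BUILT', 'BUILT']

# Second build: only the app code changed, so the slow pip install
# step is served from the cache.
v2 = [("COPY requirements.txt", b"fastapi\nuvicorn\n"),
      ("RUN pip install -r requirements.txt", b""),
      ("COPY ./app", b"app code v2")]
print(build(v2, cache))  # ['CACHED', 'CACHED', 'BUILT']
```

This is also why copying requirements.txt before the application code matters: if the app code were copied first, every code change would bust the cache before the pip install step.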

COPY ./app /code/app

  • At the end of the Dockerfile, copy the FastAPI application code
  • Since this is what changes most often, it goes last: once a step's cache is invalidated, no step after it can use the cache either

Build Docker Image

Open a terminal in the directory that contains the Dockerfile and run:

docker build -t myimage .

View the Image

docker images

Start the Docker container

docker run -d --name mycontainer -p 80:80 myimage

View Container

docker ps

Visit http://127.0.0.1/ to see the JSON response

Visit http://127.0.0.1/docs to see the automatic interactive API documentation

Official Docker image with Gunicorn - Uvicorn

  • This image includes an automatic adjustment mechanism to set the number of worker processes based on the available CPU cores
  • It has sensible defaults, but all configuration can still be updated using environment variables or configuration files
  • The number of processes is computed automatically from the available CPU cores, so the image tries to squeeze as much performance as possible out of the CPU
  • But this also means that memory consumption depends on how many CPU cores the host has, since each worker process uses its own memory
  • Therefore, if your application consumes a lot of memory (for example using a machine learning model), and your server has many CPU cores but little memory, the container may end up using more memory than is available, which can significantly degrade performance (or even crash).
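If memory is the constraint, the auto-tuned worker count can be capped through environment variables. A sketch, assuming the `MAX_WORKERS` and `WORKERS_PER_CORE` variables documented for the tiangolo/uvicorn-gunicorn base images:

```dockerfile
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.9

# Cap the auto-tuned Gunicorn worker count so a many-core host
# cannot spawn more memory-hungry processes than the container can afford.
ENV MAX_WORKERS=2
```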

Official Example

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.9
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
COPY ./app /app

Application Scenarios

  1. If you are using Kubernetes and have already set up replication at the cluster level, do not use this image; it is better to build an image from scratch as shown above.
  2. Use this image if your application is simple enough that deriving the number of processes from the CPU count works well, you don't want to bother configuring replication at the cluster level, and you won't run more than one container for your application.
  3. It is also a good fit when deploying with Docker Compose, running on a single server, and so on.
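For the single-server Docker Compose case, a minimal compose file might look like this (the file name `docker-compose.yml` and the service name `web` are illustrative):

```yaml
# docker-compose.yml — builds the Dockerfile in the current directory
# and publishes the app on port 80, like the `docker run` command above.
services:
  web:
    build: .
    ports:
      - "80:80"
```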

Docker image for a project managed with Poetry

# Stage 1: used only to install Poetry and generate requirements.txt
# from the project dependencies in Poetry's pyproject.toml file
FROM tiangolo/uvicorn-gunicorn:python3.9 as requirements-stage

# Set /tmp as the current working directory; this is where requirements.txt will be generated
WORKDIR /tmp

# Install poetry
RUN pip install poetry

# Copy pyproject.toml and poetry.lock (if it exists)
COPY ./pyproject.toml ./poetry.lock* /tmp/

# Generate requirements.txt
RUN poetry export -f requirements.txt --output requirements.txt --without-hashes

# This is the final stage; only what comes after this line remains in the final image
FROM python:3.9

# Set the current working directory to /code
WORKDIR /code

# Copy requirements.txt; this file only exists in the previous stage,
# which is why --from=requirements-stage is used to copy it
COPY --from=requirements-stage /tmp/requirements.txt /code/requirements.txt

# Install the dependencies
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

# Copy the application code
COPY ./app /code/app

# Run the service
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]

  • The first stage of the Dockerfile is a temporary container image, used only to generate files needed by the later stage
  • When using Poetry, a Docker multi-stage build like this makes sense
  • You don't actually need Poetry and its dependencies in the final container image; you only need the generated requirements.txt file to install the project dependencies
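For reference, a minimal pyproject.toml that the `poetry export` step in the first stage could consume might look like this (the project name and version pins are illustrative):

```toml
[tool.poetry]
name = "app"
version = "0.1.0"
description = ""
authors = ["you <you@example.com>"]

[tool.poetry.dependencies]
python = "^3.9"
fastapi = "^0.68.0"
uvicorn = {extras = ["standard"], version = "^0.15.0"}

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```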

poetry detailed tutorial

https://www.jb51.net/article/195070.htm
