React+Koa example of implementing file upload

Background

While working on my graduation project recently, I implemented several file upload features, including normal file upload, large file (chunked) upload, and breakpoint resume.

Server Dependencies

  • koa(node.js framework)
  • koa-router (Koa routing)
  • koa-body (Koa body parsing middleware, which can be used to parse post request content)
  • koa-static-cache (Koa static resource middleware, used to process static resource requests)
  • koa-bodyparser (parse the content of request.body)
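
Taken together, a minimal server bootstrap might look like the sketch below. The port and entry file layout are assumptions; the individual middleware pieces (CORS, static cache, body parsing) are covered in the following sections:

const Koa = require('koa');
const Router = require('koa-router');
const KoaStaticCache = require('koa-static-cache');
const bodyParser = require('koa-bodyparser');

const app = new Koa();
const router = new Router();

// CORS, static-cache, and body-parsing middleware are registered here (see below)
app.use(bodyParser());
app.use(router.routes()).use(router.allowedMethods());

// Assumed port; adjust to your environment
app.listen(4000, () => console.log('upload server listening on 4000'));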

Backend: configuring CORS (cross-domain requests)

app.use(async (ctx, next) => {
 ctx.set('Access-Control-Allow-Origin', '*');
 ctx.set(
  'Access-Control-Allow-Headers',
  'Content-Type, Content-Length, Authorization, Accept, X-Requested-With, yourHeaderField',
 );
 ctx.set('Access-Control-Allow-Methods', 'PUT, POST, GET, DELETE, OPTIONS');
 if (ctx.method === 'OPTIONS') {
  // Answer preflight requests directly
  ctx.status = 200;
 } else {
  await next();
 }
});

Backend: serving static resources with koa-static-cache

// Static resource handling
app.use(
 KoaStaticCache('./public', {
  prefix: '/public',
  dynamic: true,
  gzip: true,
 }),
);

Backend: parsing the request body with koa-bodyparser

const bodyParser = require('koa-bodyparser');
app.use(bodyParser());

Front-end dependencies

  • React
  • Antd
  • axios

Normal file upload

Backend

The backend only needs to configure koa-body with options and pass it as middleware to router.post('url', middleware, callback).

Backend code

// Upload configuration
const uploadOptions = {
 // Support multipart form data (file uploads)
 multipart: true,
 formidable: {
  // Upload directly into the public folder; remember the trailing / for easy access
  uploadDir: path.join(__dirname, '../../public/'),
  // Keep file extensions
  keepExtensions: true,
 },
};
router.post('/upload', new KoaBody(uploadOptions), (ctx, next) => {
 // Get the uploaded file
 const file = ctx.request.files.file;
 const fileName = file.path.split('/')[file.path.split('/').length - 1];
 ctx.body = {
  code: 0,
  data: {
   url: `public/${fileName}`,
  },
  message: 'success',
 };
});

Front end

Here I use FormData for transmission. The front end opens the file picker through a hidden <input type='file'/>, obtains the selected file from the onChange event via e.target.files[0], then creates a FormData object and appends the file with formData.append('file', targetFile).

Front-end code

  const Upload = () => {
    const [url, setUrl] = useState<string>('')
    const handleClickUpload = () => {
      const fileLoader = document.querySelector('#btnFile') as HTMLInputElement;
      if (isNil(fileLoader)) {
        return;
      }
      fileLoader.click();
    }
    const handleUpload = async (e: any) => {
      // Get the uploaded file
      const file = e.target.files[0];
      const formData = new FormData()
      formData.append('file', file);
      // Upload the file
      const { data } = await uploadSmallFile(formData);
      console.log(data.url);
      setUrl(`${baseURL}${data.url}`);
    }
    return (
      <div>
        <input type="file" id="btnFile" onChange={handleUpload} style={{ display: 'none' }} />
        <Button onClick={handleClickUpload}>Upload small file</Button>
        <img src={url} />
      </div>
    )
  }
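
The uploadSmallFile helper and baseURL are not shown in the original; a minimal axios sketch of what they might look like (the server address is an assumption):

import axios from 'axios';

// Assumed backend address; adjust to your server
export const baseURL = 'http://localhost:4000';

// Minimal sketch: POST the FormData to the /upload route defined above
// and resolve with the response body ({ code, data, message })
export const uploadSmallFile = (formData: FormData) =>
  axios
    .post(`${baseURL}/upload`, formData, {
      headers: { 'Content-Type': 'multipart/form-data' },
    })
    .then((res) => res.data);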

Other options

  • input + form: set the form's action to the backend endpoint, with enctype="multipart/form-data" and method="post" (see the sketch after this list)
  • Use FileReader to read the file data and upload it; browser compatibility is not great
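
A minimal sketch of the form-based option, assuming the same /upload endpoint and baseURL as above:

// Plain form upload: the browser builds and submits the multipart body itself,
// no JavaScript required (the page navigates to the response)
const FormUpload = () => (
  <form action={`${baseURL}/upload`} method="post" encType="multipart/form-data">
    <input type="file" name="file" />
    <button type="submit">Upload</button>
  </form>
);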

Large file upload

When a file is too large, the upload request may time out. In that case we can upload in chunks: split the file into small pieces and send them to the server, each piece identifying which file it belongs to and its position within that file. Once all the pieces have arrived, the backend merges them, in order, into the complete file, finishing the transfer.

Front end

  • Getting the file is the same as before, so I won't go into details.
  • Set a default chunk size and slice the file; each chunk is named filename.index.ext; request recursively until the whole file is sent, then request the merge
  const handleUploadLarge = async (e: any) => {
     // Get the uploaded file
     const file = e.target.files[0];
     // Upload the file in chunks
     await uploadEveryChunk(file, 0);
   }
   const uploadEveryChunk = (
     file: File,
     index: number,
   ) => {
     console.log(index);
     const chunkSize = 512; // Chunk size in bytes (kept small here for demonstration)
     // [file name, file suffix]
     const [fname, fext] = file.name.split('.');
     // Get the starting byte of the current chunk
     const start = index * chunkSize;
     if (start >= file.size) {
       // Once past the end of the file, stop recursing and request the merge
       return mergeLargeFile(file.name);
     }
     const blob = file.slice(start, start + chunkSize);
     // Name each chunk
     const blobName = `${fname}.${index}.${fext}`;
     const blobFile = new File([blob], blobName);
     const formData = new FormData();
     formData.append('file', blobFile);
     uploadLargeFile(formData).then((res) => {
       // Upload recursively
       uploadEveryChunk(file, ++index);
     });
   };
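
The uploadLargeFile and mergeLargeFile helpers are not shown in the original; a minimal axios sketch, assuming the /upload_chunk and /merge_chunk routes defined below and the same server address as before:

import axios from 'axios';

// Same assumed backend address as in the earlier sketch
const baseURL = 'http://localhost:4000';

// Send one chunk as multipart form data
const uploadLargeFile = (formData: FormData) =>
  axios.post(`${baseURL}/upload_chunk`, formData, {
    headers: { 'Content-Type': 'multipart/form-data' },
  });

// Ask the backend to merge all uploaded chunks of fileName
const mergeLargeFile = (fileName: string) =>
  axios
    .post(`${baseURL}/merge_chunk`, { fileName })
    .then((res) => res.data.data); // { url }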

Backend

The backend needs to provide two interfaces

Upload

Store each uploaded chunk in a folder with the corresponding name for easy merging later

const uploadStencilPreviewOptions = {
 multipart: true,
 formidable: {
  // Chunk storage location
  uploadDir: path.resolve(__dirname, '../../temp/'),
  keepExtensions: true,
  maxFieldsSize: 2 * 1024 * 1024,
 },
};

router.post('/upload_chunk', new KoaBody(uploadStencilPreviewOptions), async (ctx) => {
 try {
  const file = ctx.request.files.file;
  // [ name, index, ext ] - split the file name
  const fileNameArr = file.name.split('.');

  const UPLOAD_DIR = path.resolve(__dirname, '../../temp');
  // Directory for storing the chunks of this file
  const chunkDir = `${UPLOAD_DIR}/${fileNameArr[0]}`;
  if (!fse.existsSync(chunkDir)) {
   // Create a temporary directory for this large file if it does not exist yet
   await fse.mkdirs(chunkDir);
  }
  // Original file name.index - the exact path and name of each chunk
  const dPath = path.join(chunkDir, fileNameArr[1]);

  // Move the chunk from temp into the temporary directory of this upload
  await fse.move(file.path, dPath, { overwrite: true });
  ctx.body = {
   code: 0,
   message: 'File uploaded successfully',
  };
 } catch (e) {
  ctx.body = {
   code: -1,
   message: `File upload failed: ${e.toString()}`,
  };
 }
});

Merge

When the front end requests the merge, the file name it carries is used to locate that file's folder in the temporary chunk cache. The chunks are read in index order and appended one by one with fse.appendFileSync(path, data) (appending in order is the merge), and the temporary folder is then deleted to free disk space.

router.post('/merge_chunk', async (ctx) => {
 try {
  const { fileName } = ctx.request.body;
  const fname = fileName.split('.')[0];
  const TEMP_DIR = path.resolve(__dirname, '../../temp');
  const static_preview_url = '/public/previews';
  const STORAGE_DIR = path.resolve(__dirname, `../..${static_preview_url}`);
  const chunkDir = path.join(TEMP_DIR, fname);
  const chunks = await fse.readdir(chunkDir);
  chunks
   // Chunk file names are their numeric indexes, so sort them numerically
   .sort((a, b) => a - b)
   .map((chunkPath) => {
    // Merge the files by appending each chunk in order
    fse.appendFileSync(
     path.join(STORAGE_DIR, fileName),
     fse.readFileSync(`${chunkDir}/${chunkPath}`),
    );
   });
  // Delete the temporary folder
  fse.removeSync(chunkDir);
  // The URL for accessing the file
  const url = `http://${ctx.request.header.host}${static_preview_url}/${fileName}`;
  ctx.body = {
   code: 0,
   data: { url },
   message: 'success',
  };
 } catch (e) {
  ctx.body = { code: -1, message: `Merge failed: ${e.toString()}` };
 }
});

Breakpoint resume

While a large file is being transferred, a page refresh or a transient failure means the whole file has to be sent again from the beginning, which is a very poor user experience. We therefore need to record how far the transfer got and continue from that point next time. Here I read and write that record in localStorage.

  const handleUploadLarge = async (e: any) => {
    // Get the uploaded file
    const file = e.target.files[0];
    const record = JSON.parse(localStorage.getItem('uploadRecord') as any);
    if (!isNil(record)) {
      // For convenience, collisions are not considered here. To decide whether two
      // files are the same you can hash the file; for large files you can hash
      // (a slice of the file + the file size) instead
      if (record.name === file.name) {
        return await uploadEveryChunk(file, record.index);
      }
    }
    // Upload the file in chunks
    await uploadEveryChunk(file, 0);
  }
  const uploadEveryChunk = (
    file: File,
    index: number,
  ) => {
    const chunkSize = 512; // Chunk size in bytes
    // [file name, file suffix]
    const [fname, fext] = file.name.split('.');
    // Get the starting byte of the current chunk
    const start = index * chunkSize;
    if (start >= file.size) {
      // Once past the end of the file, stop recursing and request the merge
      return mergeLargeFile(file.name).then(() => {
        // Delete the record after the merge succeeds
        localStorage.removeItem('uploadRecord')
      });
    }
    const blob = file.slice(start, start + chunkSize);
    // Name each chunk
    const blobName = `${fname}.${index}.${fext}`;
    const blobFile = new File([blob], blobName);
    const formData = new FormData();
    formData.append('file', blobFile);
    uploadLargeFile(formData).then((res) => {
      // After each chunk is transferred successfully, record the position
      localStorage.setItem('uploadRecord', JSON.stringify({
        name: file.name,
        index: index + 1
      }))
      // Upload recursively
      uploadEveryChunk(file, ++index);
    });
  };

File Identification

You can identify a file by computing its MD5 or another hash. When the file is very large, hashing the whole thing can take a long time, so you can instead hash a slice of the file together with the file size as a sampled, approximate comparison. Below is code that computes the MD5 with the crypto-js library, reading the file with FileReader.

   // Compute the MD5 to check whether the file already exists
   const sign = tempFile.slice(0, 512);
   const signFile = new File(
    [sign, (tempFile.size as unknown) as BlobPart],
    '',
   );
   const reader = new FileReader();
   reader.onload = function (event) {
    const binary = event?.target?.result;
    const md5 = binary && CryptoJs.MD5(binary as string).toString();
    const record = localStorage.getItem('upLoadMD5');
    if (isNil(md5)) {
     const file = blobToFile(blob, `${getRandomFileName()}.png`);
     return uploadPreview(file, 0, md5);
    }
    const file = blobToFile(blob, `${md5}.png`);
    if (isNil(record)) {
     // Upload directly and record this MD5
     return uploadPreview(file, 0, md5);
    }
    const recordObj = JSON.parse(record);
    if (recordObj.md5 === md5) {
     // Resume from the recorded position (breakpoint resume)
     return uploadPreview(file, recordObj.index, md5);
    }
    return uploadPreview(file, 0, md5);
   };
   // Note: readAsBinaryString is legacy; readAsArrayBuffer is the modern alternative
   reader.readAsBinaryString(signFile);
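
blobToFile, getRandomFileName, tempFile, blob, and uploadPreview come from the surrounding project and are not shown in the original; a hedged sketch of the two small helpers might be:

// Hypothetical helpers assumed by the snippet above
const blobToFile = (blob: Blob, fileName: string): File =>
  new File([blob], fileName, { type: blob.type });

const getRandomFileName = (): string =>
  // Timestamp plus a random suffix; enough to avoid accidental collisions here
  `${Date.now()}-${Math.random().toString(36).slice(2, 10)}`;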

Summary

I did not know much about file uploading before. Through this part of my graduation project I have gained a preliminary understanding of the front-end and back-end code for uploading files. These approaches are only some of the options, not all of them, and I hope to keep improving in future studies.
This is my first blog post on Nuggets. Since starting my internship I have found my knowledge lacking, so I hope to organize my knowledge system and record my learning process by keeping up blogging. I also welcome corrections from more experienced developers when they spot problems. Thanks!
