JavaScript implements large file upload processing

When handling file uploads, such as video files that range from tens of MB to over 1 GB, sending the data in a single ordinary HTTP request often runs into the following problems:

1. The file is too large and exceeds the server's request size limit;
2. The request takes too long and times out;
3. The transfer is interrupted and must be restarted from scratch, losing all progress.

These problems seriously hurt the user experience, so this article introduces a chunked-upload solution based on native JavaScript. The implementation goes as follows:

1. Get the file object from the DOM and compute its MD5 hash (file content + file name) using SparkMD5;
2. Split the file into chunks. File is built on Blob and inherits its behavior, so it can be treated as a subclass of Blob; this makes Blob's slice method available for cutting the file into chunks, which are then uploaded in sequence;
3. After all chunk files are uploaded, request the backend's merge interface to assemble them into the complete file.
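The chunk boundaries from step 2 can be computed up front. A minimal sketch in plain JavaScript (the helper name `getChunkRanges` is illustrative, not part of the code below):

```javascript
// Compute [start, end) byte ranges for slicing a file of a given size.
// Blob.slice(start, end) uses the same half-open convention.
function getChunkRanges(fileSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push({ start, end: Math.min(fileSize, start + chunkSize) });
  }
  return ranges;
}

// A 2.5 MB file with 1 MB chunks yields three slices: two full ones
// and a final partial one.
const ranges = getChunkRanges(2.5 * 1024 * 1024, 1024 * 1024);
console.log(ranges.length); // 3
```

The last range is shorter than `chunkSize`, which is why the code below clamps `end` with `Math.min(file.size, start + chunkSize)`.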

1. Upload file page

<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>File Upload</title>
  <script src="https://cdn.bootcss.com/axios/0.18.0/axios.min.js"></script>
  <script src="https://code.jquery.com/jquery-3.4.1.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/spark-md5/3.0.0/spark-md5.js"></script>
  <style>
    /* Custom progress bar style */
    .precent input[type=range] {
      -webkit-appearance: none;
      /*Clear system default style*/
      width: 7.8rem;
      /* background: -webkit-linear-gradient(#ddd, #ddd) no-repeat, #ddd; */
      /*Set the left color to #61bd12 and the right color to #ddd*/
      background-size: 75% 100%;
      /*Set the left and right width ratio*/
      height: 0.6rem;
      /*Height of the bar*/
      border-radius: 0.4rem;
      border: 1px solid #ddd;
      box-shadow: 0 0 10px rgba(0,0,0,.125) inset ;
    }

    /*Drag block style*/
    .precent input[type=range]::-webkit-slider-thumb {
      -webkit-appearance: none;
      /*Clear system default style*/
      height: .9rem;
      /*Drag block height*/
      width: .9rem;
      /*Drag block width*/
      background: #fff;
      /*Drag block background*/
      border-radius: 50%;
      /*Set the appearance to round*/
      border: solid 1px #ddd;
      /*Set border*/
    }

  </style>
</head>

<body>
  <h1>Large file multi-part upload test</h1>
  <div>
    <input id="file" type="file" name="avatar" />
    <div style="padding: 10px 0;">
      <input id="submitBtn" type="button" value="Submit" />
      <input id="pauseBtn" type="button" value="Pause" />
    </div>
    <div class="precent">
      <input type="range" value="0" /><span id="precentVal">0%</span>
    </div>
  </div>
  <script type="text/javascript" src="./js/index.js"></script>
</body>

</html>

2. Upload the large file in chunks

$(document).ready(() => {
  const submitBtn = $('#submitBtn'); // Submit button
  const precentDom = $(".precent input")[0]; // Progress bar
  const precentVal = $("#precentVal"); // DOM node showing the progress value
  const pauseBtn = $('#pauseBtn'); // Pause button
  // The size of each chunk is set to 1 MB
  const chunkSize = 1 * 1024 * 1024;
  // Get the slice method and make it cross-browser compatible
  const blobSlice = File.prototype.slice || File.prototype.mozSlice || File.prototype.webkitSlice;
  // Compute the MD5 hash of the file (file content + file name)
  const hashFile = (file) => {
    return new Promise((resolve, reject) => {
      const chunks = Math.ceil(file.size / chunkSize);
      let currentChunk = 0;
      const spark = new SparkMD5.ArrayBuffer();
      const fileReader = new FileReader();
      function loadNext() {
        const start = currentChunk * chunkSize;
        const end = start + chunkSize >= file.size ? file.size : start + chunkSize;
        fileReader.readAsArrayBuffer(blobSlice.call(file, start, end));
      }
      fileReader.onload = e => {
        spark.append(e.target.result); // Append array buffer
        currentChunk += 1;
        if (currentChunk < chunks) {
          loadNext();
        } else {
          console.log('finished loading');
          const result = spark.end();
          // Hash by content plus file name
          const sparkMd5 = new SparkMD5();
          sparkMd5.append(result);
          sparkMd5.append(file.name);
          const hexHash = sparkMd5.end();
          resolve(hexHash);
        }
      };
      fileReader.onerror = () => {
        console.warn('File reading failed!');
      };
      loadNext();
    }).catch(err => {
      console.log(err);
    });
  }

  // Submit
  submitBtn.on('click', async () => {
    let pauseStatus = false;
    let nowUploadNums = 0;
    // 1. Read the file
    const fileDom = $('#file')[0];
    const files = fileDom.files;
    const file = files[0];
    if (!file) {
      alert('No file obtained');
      return;
    }
    // 2. Set the chunking parameters and get the file's MD5 hash
    const hash = await hashFile(file); // File hash
    const blockCount = Math.ceil(file.size / chunkSize); // Total number of chunks
    const axiosPromiseArray = []; // Array of axios promises
    // 3. Upload the file chunk by chunk
    const uploadFile = () => {
      const start = nowUploadNums * chunkSize;
      const end = Math.min(file.size, start + chunkSize);
      // Build the form
      const form = new FormData();
      // blobSlice.call(file, start, end) slices out the current chunk
      form.append('file', blobSlice.call(file, start, end));
      form.append('index', nowUploadNums);
      form.append('hash', hash);
      // Ajax submits fragments, and the content-type is multipart/form-data
      const axiosOptions = {
        onUploadProgress: e => {
          // onUploadProgress fires many times per request;
          // only advance once this chunk has finished uploading
          if (e.loaded < e.total) return;
          nowUploadNums++;
          // Check whether all chunks have been uploaded
          if (nowUploadNums < blockCount) {
            setPrecent(nowUploadNums, blockCount);
            uploadFile(nowUploadNums);
          } else {
            // 4. After all chunks are uploaded, request the merge interface
            axios.all(axiosPromiseArray).then(() => {
              setPrecent(blockCount, blockCount); // All uploads completed
              axios.post('/file/merge_chunks', {
                name: file.name,
                total: blockCount,
                hash
              }).then(res => {
                console.log(res.data, file);
                pauseStatus = false;
                alert('Upload successful');
              }).catch(err => {
                console.log(err);
              });
            });
          }
        },
      };
      // Add to the promise array unless paused
      if (!pauseStatus) {
        axiosPromiseArray.push(axios.post('/file/upload', form, axiosOptions));
      }

    }
    // Set the progress bar
    function setPrecent(now, total) {
      var prencentValue = ((now / total) * 100).toFixed(2)
      precentDom.value = prencentValue
      precentVal.text(prencentValue + '%')
      precentDom.style.cssText = `background:-webkit-linear-gradient(top, #059CFA, #059CFA) 0% 0% / ${prencentValue}% 100% no-repeat`
    }
    // Pause / resume
    pauseBtn.on('click', (e) => {
      pauseStatus = !pauseStatus;
      e.currentTarget.value = pauseStatus ? 'Start' : 'Pause'
      if (!pauseStatus) {
        uploadFile(nowUploadNums)
      }
    })
    uploadFile();
  });
})

3. File upload and chunk merge interfaces (Node)

const Router = require('koa-router');
const multer = require('koa-multer');
const fs = require('fs-extra');
const path = require('path');
const router = new Router();

const { mkdirsSync } = require('../utils/dir');
const uploadPath = path.join(__dirname, 'upload');
const chunkUploadPath = path.join(uploadPath, 'temp');
const upload = multer({ dest: chunkUploadPath });

// File upload interface
router.post('/file/upload', upload.single('file'), async (ctx, next) => {
  const { index, hash } = ctx.req.body;
  const chunksPath = path.join(chunkUploadPath, hash, '/');
  if(!fs.existsSync(chunksPath)) mkdirsSync(chunksPath);
  fs.renameSync(ctx.req.file.path, chunksPath + hash + '-' + index);
  ctx.status = 200;
  ctx.res.end('Success');
}) 
// Merge chunk files interface
router.post('/file/merge_chunks', async (ctx, next) => {
  const { name, total, hash } = ctx.request.body;
  const chunksPath = path.join(chunkUploadPath, hash, '/');
  const filePath = path.join(uploadPath, name);
  // Read all chunks
  const chunks = fs.readdirSync(chunksPath);
  if (chunks.length !== total || chunks.length === 0) {
    ctx.status = 200;
    ctx.res.end('The number of chunk files does not match');
    return;
  }
  // Create the destination file only once the chunk count checks out
  fs.writeFileSync(filePath, '');
  for (let i = 0; i < total; i++) {
    // Append the chunk to the destination file
    fs.appendFileSync(filePath, fs.readFileSync(chunksPath + hash + '-' + i));
    // Delete the chunk that was just used
    fs.unlinkSync(chunksPath + hash + '-' + i);
  }
  fs.rmdirSync(chunksPath);
  // The files are merged successfully and the file information can be stored in the database.
  ctx.status = 200;
  ctx.res.end('Success');
})

The above is the basic flow of chunked file upload; an upload progress bar plus pause and resume controls were added along the way.

The above is the full content of this article. I hope it will be helpful for everyone’s study. I also hope that everyone will support 123WORDPRESS.COM.
