## Background

While writing my final-year project recently, I worked on several file-upload features: ordinary file upload, large-file (chunked) upload, resumable upload with breakpoint resume, and so on.

## Server dependencies
### Cross-origin (CORS) middleware

```js
app.use(async (ctx, next) => {
  ctx.set('Access-Control-Allow-Origin', '*');
  ctx.set(
    'Access-Control-Allow-Headers',
    'Content-Type, Content-Length, Authorization, Accept, X-Requested-With, yourHeaderField',
  );
  ctx.set('Access-Control-Allow-Methods', 'PUT, POST, GET, DELETE, OPTIONS');
  if (ctx.method === 'OPTIONS') {
    ctx.body = 200;
  } else {
    await next();
  }
});
```

### Static resource access with koa-static-cache

```js
// Serve static resources
app.use(
  KoaStaticCache('./public', {
    prefix: '/public',
    dynamic: true,
    gzip: true,
  }),
);
```

### Request body parsing with koa-bodyparser

```js
const bodyParser = require('koa-bodyparser');
app.use(bodyParser());
```

## Front-end dependencies
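The front-end snippets below call `uploadSmallFile`, `uploadLargeFile`, and `mergeLargeFile`, which the article never defines. Here is a minimal sketch of what they might look like, assuming an axios-like POST transport and the endpoint paths shown later; the transport is injected so the helpers stay testable, and all names here are illustrative rather than taken from the original project:

```typescript
// Hypothetical request helpers assumed by the components below. The real
// project presumably wraps axios, but any function that POSTs a body and
// resolves with `{ data }` will do.
type Transport = (url: string, body: FormData | object) => Promise<{ data: any }>;

export const makeUploadApi = (
  post: Transport,
  baseURL = 'http://localhost:3000', // assumed dev server address
) => ({
  // Ordinary (small) file upload
  uploadSmallFile: (formData: FormData) => post(`${baseURL}/upload`, formData),
  // Upload one chunk of a large file
  uploadLargeFile: (formData: FormData) => post(`${baseURL}/upload_chunk`, formData),
  // Ask the server to merge all uploaded chunks of the named file
  mergeLargeFile: (fileName: string) => post(`${baseURL}/merge_chunk`, { fileName }),
});
```

Injecting the transport also makes it trivial to swap axios for `fetch` or a mock in tests.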
## Ordinary file upload

### Back end

The backend only needs koa-body: configure its options and pass it as middleware to `router.post('url', middleware, callback)`.

Backend code:

```js
// Upload configuration
const uploadOptions = {
  // Accept multipart form data (file uploads)
  multipart: true,
  formidable: {
    // Upload directly into the public folder; remember the trailing / for easy access
    uploadDir: path.join(__dirname, '../../public/'),
    // Keep file extensions
    keepExtensions: true,
  },
};
router.post('/upload', new KoaBody(uploadOptions), (ctx, next) => {
  // Get the uploaded file
  const file = ctx.request.files.file;
  const fileName = file.path.split('/')[file.path.split('/').length - 1];
  ctx.body = {
    code: 0,
    data: {
      url: `public/${fileName}`,
    },
    message: 'success',
  };
});
```

### Front end

Here I use the FormData transmission method. The front end opens the file picker through `<input type='file'/>`, gets the selected file from the onChange event via `e.target.files[0]`, then creates a FormData object and appends the file with `formData.append('file', targetFile)`.

Front-end code:

```tsx
const Upload = () => {
  const [url, setUrl] = useState<string>('');
  const handleClickUpload = () => {
    const fileLoader = document.querySelector('#btnFile') as HTMLInputElement;
    if (isNil(fileLoader)) {
      return;
    }
    fileLoader.click();
  };
  const handleUpload = async (e: any) => {
    // Get the uploaded file
    const file = e.target.files[0];
    const formData = new FormData();
    formData.append('file', file);
    // Upload the file
    const { data } = await uploadSmallFile(formData);
    console.log(data.url);
    setUrl(`${baseURL}${data.url}`);
  };
  return (
    <div>
      <input type="file" id="btnFile" onChange={handleUpload} style={{ display: 'none' }} />
      <Button onClick={handleClickUpload}>Upload small file</Button>
      <img src={url} />
    </div>
  );
};
```

### Other options
## Large file upload

When uploading a very large file, the request may time out. In that case you can upload in chunks: split the file into small pieces and send them to the server, with each piece carrying which file it belongs to and its position within that file. After all pieces have been transmitted, the backend merges them into the complete file, finishing the transfer.

### Front end
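Before the component code, the chunk arithmetic the upload loop relies on can be sketched as a small pure helper (illustrative, not from the original project): each chunk starts at `index * chunkSize` and the last chunk is capped at the file size.

```typescript
// Compute the [start, end) byte ranges produced by slicing a file of
// `size` bytes into `chunkSize`-byte chunks, mirroring the recursive
// upload loop below. The helper name is hypothetical.
export function chunkRanges(
  size: number,
  chunkSize: number,
): Array<{ start: number; end: number }> {
  const ranges: Array<{ start: number; end: number }> = [];
  for (let index = 0; index * chunkSize < size; index++) {
    const start = index * chunkSize;
    // The final chunk may be shorter than chunkSize
    ranges.push({ start, end: Math.min(start + chunkSize, size) });
  }
  return ranges;
}
```

Note that when the size is an exact multiple of the chunk width, no empty trailing chunk is produced, which is why the termination check in the loop matters.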
```tsx
const handleUploadLarge = async (e: any) => {
  // Get the uploaded file
  const file = e.target.files[0];
  // Upload the file in chunks
  await uploadEveryChunk(file, 0);
};

const uploadEveryChunk = (file: File, index: number) => {
  console.log(index);
  const chunkSize = 512; // chunk size in bytes
  // [file name, file extension]
  const [fname, fext] = file.name.split('.');
  // Starting byte of the current chunk
  const start = index * chunkSize;
  if (start >= file.size) {
    // Past the end of the file: stop the recursive upload and merge
    return mergeLargeFile(file.name);
  }
  const blob = file.slice(start, start + chunkSize);
  // Give each chunk a name
  const blobName = `${fname}.${index}.${fext}`;
  const blobFile = new File([blob], blobName);
  const formData = new FormData();
  formData.append('file', blobFile);
  uploadLargeFile(formData).then((res) => {
    // Upload the next chunk recursively
    uploadEveryChunk(file, ++index);
  });
};
```

### Back end

The backend needs to provide two interfaces.

#### Upload

Store each uploaded chunk in a folder named after the file, so the chunks are easy to merge later.

```js
const uploadStencilPreviewOptions = {
  multipart: true,
  formidable: {
    uploadDir: path.resolve(__dirname, '../../temp/'), // chunk storage directory
    keepExtensions: true,
    maxFieldsSize: 2 * 1024 * 1024,
  },
};

router.post('/upload_chunk', new KoaBody(uploadStencilPreviewOptions), async (ctx) => {
  try {
    const file = ctx.request.files.file;
    // [ name, index, ext ] - split the chunk file name
    const fileNameArr = file.name.split('.');
    // Directory for storing this file's chunks
    const UPLOAD_DIR = path.resolve(__dirname, '../../temp');
    const chunkDir = `${UPLOAD_DIR}/${fileNameArr[0]}`;
    if (!fse.existsSync(chunkDir)) {
      // Create a temporary directory for this large file if it does not exist
      await fse.mkdirs(chunkDir);
    }
    // "original file name.index" - the exact path and name of each chunk
    const dPath = path.join(chunkDir, fileNameArr[1]);
    // Move the chunk from temp into this upload's temporary directory
    await fse.move(file.path, dPath, { overwrite: true });
    ctx.body = {
      code: 0,
      message: 'File uploaded successfully',
    };
  } catch (e) {
    ctx.body = {
      code: -1,
      message: `File upload failed: ${e.toString()}`,
    };
  }
});
```

#### Merge

When the front end requests a merge, the file name it sends is used to find that file's folder in the temporary chunk cache. The chunks are read in index order and appended with `fse.appendFileSync(path, data)` (appending in order is the merge), and the temporary folder is then deleted to free space.

```js
router.post('/merge_chunk', async (ctx) => {
  try {
    const { fileName } = ctx.request.body;
    const fname = fileName.split('.')[0];
    const TEMP_DIR = path.resolve(__dirname, '../../temp');
    const static_preview_url = '/public/previews';
    const STORAGE_DIR = path.resolve(__dirname, `../..${static_preview_url}`);
    const chunkDir = path.join(TEMP_DIR, fname);
    const chunks = await fse.readdir(chunkDir);
    chunks
      // Chunk files are named by their numeric index, so subtraction sorts them numerically
      .sort((a, b) => a - b)
      .map((chunkPath) => {
        // Append each chunk to the target file in order
        fse.appendFileSync(
          path.join(STORAGE_DIR, fileName),
          fse.readFileSync(`${chunkDir}/${chunkPath}`),
        );
      });
    // Delete the temporary folder
    fse.removeSync(chunkDir);
    // The URL for accessing the uploaded image
    const url = `http://${ctx.request.header.host}${static_preview_url}/${fileName}`;
    ctx.body = {
      code: 0,
      data: { url },
      message: 'success',
    };
  } catch (e) {
    ctx.body = { code: -1, message: `Merge failed: ${e.toString()}` };
  }
});
```

## Resumable upload (breakpoint resume)

If a large-file transfer fails partway through, for example because the page was refreshed or a request briefly failed, the whole file has to be sent again from the beginning, which is a very poor user experience. So we need to record where the transfer stopped and continue from that position next time. I do this by reading and writing localStorage.
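The resume decision described above reduces to a small pure function: given the record kept in localStorage, decide which chunk index to start from. A sketch, with illustrative names (the real code inlines this logic in the upload handler):

```typescript
// Shape of the record the front end keeps in localStorage after each
// successfully uploaded chunk.
interface UploadRecord {
  name: string;  // file name the record belongs to
  index: number; // next chunk index to upload
}

// Returns the chunk index to resume from. A missing record, or a record
// for a different file, means we start over at chunk 0. (As the article
// notes, matching by name alone ignores collisions; a content hash is
// more robust.)
export function resumeIndex(record: UploadRecord | null, fileName: string): number {
  if (record === null || record.name !== fileName) {
    return 0;
  }
  return record.index;
}
```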
```tsx
const handleUploadLarge = async (e: any) => {
  // Get the uploaded file
  const file = e.target.files[0];
  const record = JSON.parse(localStorage.getItem('uploadRecord') as any);
  if (!isNil(record)) {
    // For simplicity we ignore name collisions here. To decide whether two
    // files are really the same, hash the file; for large files, hashing a
    // sample chunk together with the file size is usually enough.
    if (record.name === file.name) {
      return await uploadEveryChunk(file, record.index);
    }
  }
  // Upload the file in chunks from the beginning
  await uploadEveryChunk(file, 0);
};

const uploadEveryChunk = (file: File, index: number) => {
  const chunkSize = 512; // chunk size in bytes
  // [file name, file extension]
  const [fname, fext] = file.name.split('.');
  // Starting byte of the current chunk
  const start = index * chunkSize;
  if (start >= file.size) {
    // Past the end of the file: stop the recursive upload and merge
    return mergeLargeFile(file.name).then(() => {
      // Remove the record once the merge succeeds
      localStorage.removeItem('uploadRecord');
    });
  }
  const blob = file.slice(start, start + chunkSize);
  // Give each chunk a name
  const blobName = `${fname}.${index}.${fext}`;
  const blobFile = new File([blob], blobName);
  const formData = new FormData();
  formData.append('file', blobFile);
  uploadLargeFile(formData).then((res) => {
    // After each chunk uploads successfully, record the position reached
    localStorage.setItem(
      'uploadRecord',
      JSON.stringify({ name: file.name, index: index + 1 }),
    );
    // Upload the next chunk recursively
    uploadEveryChunk(file, ++index);
  });
};
```

## File identification

You can identify a file by computing its MD5 or another hash. When the file is very large, hashing the whole thing may take a long time, so you can instead take a sample chunk of the file and hash it together with the file size for a cheap local comparison.
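As a sketch of that sampling idea, here is a small pure helper, using Node's built-in `crypto` module so it runs outside the browser (the article's own code uses crypto-js and FileReader instead); the function name and parameters are illustrative:

```typescript
import { createHash } from 'crypto';

// Hash the first `sampleSize` bytes of the data together with its total
// length. Two files then get the same signature only if both the leading
// sample and the size match - a cheap local heuristic, not a guarantee.
export function sampleSignature(data: Uint8Array, sampleSize = 512): string {
  const sample = data.slice(0, sampleSize);
  const hash = createHash('md5');
  hash.update(sample);
  hash.update(String(data.length)); // mix in the file size
  return hash.digest('hex');
}
```

The trade-off is speed versus certainty: a full-file hash is definitive, while the sampled signature only needs to read a few hundred bytes regardless of file size.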
Here is code that computes the MD5 with the crypto-js library, reading the file with FileReader:

```tsx
// Compute the MD5 to see whether the file already exists
// Sample: the first 512 bytes plus the file size
const sign = tempFile.slice(0, 512);
const signFile = new File(
  [sign, (tempFile.size as unknown) as BlobPart],
  '',
);
const reader = new FileReader();
reader.onload = function (event) {
  const binary = event?.target?.result;
  const md5 = binary && CryptoJs.MD5(binary as string).toString();
  const record = localStorage.getItem('upLoadMD5');
  if (isNil(md5)) {
    const file = blobToFile(blob, `${getRandomFileName()}.png`);
    return uploadPreview(file, 0, md5);
  }
  const file = blobToFile(blob, `${md5}.png`);
  if (isNil(record)) {
    // No record yet: upload directly and record this MD5
    return uploadPreview(file, 0, md5);
  }
  const recordObj = JSON.parse(record);
  if (recordObj.md5 == md5) {
    // Same file as the record: resume uploading from the recorded position
    return uploadPreview(file, recordObj.index, md5);
  }
  return uploadPreview(file, 0, md5);
};
reader.readAsBinaryString(signFile);
```

## Summary

I knew little about file uploads before. Through this part of my graduation project I have gained a basic understanding of the front-end and back-end code involved in uploading files. These approaches are only some of the available options, and I hope to keep improving on them in future work.