Preface

File uploading is a problem front-end developers run into all the time. You can probably implement the basic feature, but once it works, does the code ever feel a bit "inadequate"? Do you really understand file uploads? How do you upload a large file, and resume the upload after an interruption? What formats are commonly used for front-end/back-end communication? How is upload progress tracked, and how does the server handle it? Let's work through it hands-on, step by step; if anything is lacking, feedback is welcome. All set? Let's get started!

Project setup

(Figures: the front-end page and project dependencies; the back-end Node + Express directory structure.)

All requests go through a thin wrapper around Axios:

```js
let instance = axios.create();
instance.defaults.baseURL = 'http://127.0.0.1:8888';
instance.defaults.headers['Content-Type'] = 'multipart/form-data';
instance.defaults.transformRequest = (data, headers) => {
    const contentType = headers['Content-Type'];
    if (contentType === 'application/x-www-form-urlencoded') return Qs.stringify(data);
    return data;
};
instance.interceptors.response.use(response => {
    return response.data;
});
```

File uploads are generally based on one of two formats: FormData or Base64.

File upload based on FormData

The core front-end code for the FormData-based upload:

```js
upload_button_upload.addEventListener('click', function () {
    if (upload_button_upload.classList.contains('disable') || upload_button_upload.classList.contains('loading')) return;
    if (!_file) {
        alert('Please select the file to upload first~~');
        return;
    }
    changeDisable(true);
    // Pass the file to the server as FormData,
    // appending fields according to what the back end expects
    let formData = new FormData();
    formData.append('file', _file);
    formData.append('filename', _file.name);
    instance.post('/upload_single', formData).then(data => {
        if (+data.code === 0) {
            alert(`The file has been uploaded successfully~~, you can access this resource at ${data.servicePath}~~`);
            return;
        }
        return Promise.reject(data.codeText);
    }).catch(reason => {
        alert('File upload failed, please try again later~~');
    }).finally(() => {
        clearHandle();
        changeDisable(false);
    });
});
```

File upload based on Base64

First, a helper that reads a File as a Base64 data URL:

```js
export const changeBASE64 = file => {
    return new Promise(resolve => {
        let fileReader = new FileReader();
        fileReader.readAsDataURL(file);
        fileReader.onload = ev => {
            resolve(ev.target.result);
        };
    });
};
```

The upload itself:

```js
upload_inp.addEventListener('change', async function () {
    let file = upload_inp.files[0],
        BASE64,
        data;
    if (!file) return;
    if (file.size > 2 * 1024 * 1024) {
        alert('The uploaded file cannot exceed 2MB~~');
        return;
    }
    upload_button_select.classList.add('loading');
    // Read the file as Base64
    BASE64 = await changeBASE64(file);
    try {
        data = await instance.post('/upload_single_base64', {
            // encodeURIComponent(BASE64) prevents special characters from being
            // mangled in transit; the back end must decodeURIComponent accordingly
            file: encodeURIComponent(BASE64),
            filename: file.name
        }, {
            headers: {
                'Content-Type': 'application/x-www-form-urlencoded'
            }
        });
        if (+data.code === 0) {
            alert(`Congratulations, the file has been uploaded successfully. You can access it at ${data.servicePath}~~`);
            return;
        }
        throw data.codeText;
    } catch (err) {
        alert('Unfortunately, the file upload failed. Please try again later~~');
    } finally {
        upload_button_select.classList.remove('loading');
    }
});
```

In the examples above, the back end generates a random name for the received file and saves it under that name. Some teams do this step on the front end instead and send the generated name to the back end. Let's implement that next.

Generating the file name on the front end

This relies on the SparkMD5 plug-in.
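One caveat worth isolating before wiring things up: the extension-extracting regex used in the next snippet (`/\.([a-zA-Z0-9]+)$/.exec(file.name)[1]`) throws if a file name has no extension. A small sketch of a safer variant; the `'bin'` fallback is my own assumption, not part of the original code:

```javascript
// Build "<hash>.<suffix>" from an MD5 digest and the original file
// name. Falls back to a default suffix (assumption: 'bin') instead
// of crashing on extension-less names like "README".
function hashName(HASH, originalName) {
  const match = /\.([a-zA-Z0-9]+)$/.exec(originalName);
  const suffix = match ? match[1] : 'bin';
  return `${HASH}.${suffix}`;
}
```

For a multi-part extension such as `archive.tar.gz`, only the last segment (`gz`) is kept, matching the original regex's behavior.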
I won't go into detail on how to use SparkMD5 here; please refer to its documentation.

A helper that reads the file as an ArrayBuffer and derives an MD5-based name:

```js
const changeBuffer = file => {
    return new Promise(resolve => {
        let fileReader = new FileReader();
        fileReader.readAsArrayBuffer(file);
        fileReader.onload = ev => {
            let buffer = ev.target.result,
                spark = new SparkMD5.ArrayBuffer(),
                HASH,
                suffix;
            spark.append(buffer);
            // The MD5 digest becomes the file name
            HASH = spark.end();
            // Extract the extension
            suffix = /\.([a-zA-Z0-9]+)$/.exec(file.name)[1];
            resolve({
                buffer,
                HASH,
                suffix,
                filename: `${HASH}.${suffix}`
            });
        };
    });
};
```

The upload code:

```js
upload_button_upload.addEventListener('click', async function () {
    if (checkIsDisable(this)) return;
    if (!_file) {
        alert('Please select the file to upload first~~');
        return;
    }
    changeDisable(true);
    // Generate the HASH-based name of the file
    let { filename } = await changeBuffer(_file);
    let formData = new FormData();
    formData.append('file', _file);
    formData.append('filename', filename);
    instance.post('/upload_single_name', formData).then(data => {
        if (+data.code === 0) {
            alert(`The file has been uploaded successfully~~, you can access this resource at ${data.servicePath}~~`);
            return;
        }
        return Promise.reject(data.codeText);
    }).catch(reason => {
        alert('File upload failed, please try again later~~');
    }).finally(() => {
        changeDisable(false);
        upload_abbre.style.display = 'none';
        upload_abbre_img.src = '';
        _file = null;
    });
});
```

Upload progress control

This feature is fairly simple. The request library used in this article is axios, and progress tracking is built on the onUploadProgress option it provides. Let's look at how it works under the hood.
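Whatever the transport, the progress callback ultimately receives an object carrying the bytes `loaded` so far and the `total` to send, and the width calculation can be isolated in a tiny pure function. A sketch; the guard for `total === 0` (a not-yet-computable length) is my own defensive addition, not in the original code:

```javascript
// Convert a ProgressEvent-like { loaded, total } pair into a
// clamped integer percentage suitable for a progress-bar width.
function progressPercent({ loaded, total }) {
  if (!total) return 0; // length not computable yet: avoid NaN
  return Math.min(100, Math.round((loaded / total) * 100));
}
```

Inside `onUploadProgress`, this becomes `upload_progress_value.style.width = progressPercent(ev) + '%'`.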
axios simply listens for the native xhr.upload.onprogress event and passes the resulting progress event (with its loaded and total byte counts) to your onUploadProgress callback.

The concrete implementation:

```js
(function () {
    let upload = document.querySelector('#upload4'),
        upload_inp = upload.querySelector('.upload_inp'),
        upload_button_select = upload.querySelector('.upload_button.select'),
        upload_progress = upload.querySelector('.upload_progress'),
        upload_progress_value = upload_progress.querySelector('.value');

    // Verify whether the control is in an operable state
    const checkIsDisable = element => {
        let classList = element.classList;
        return classList.contains('disable') || classList.contains('loading');
    };

    upload_inp.addEventListener('change', async function () {
        let file = upload_inp.files[0],
            data;
        if (!file) return;
        upload_button_select.classList.add('loading');
        try {
            let formData = new FormData();
            formData.append('file', file);
            formData.append('filename', file.name);
            data = await instance.post('/upload_single', formData, {
                // Forwarded from xhr.upload.onprogress
                onUploadProgress(ev) {
                    let { loaded, total } = ev;
                    upload_progress.style.display = 'block';
                    upload_progress_value.style.width = `${loaded / total * 100}%`;
                }
            });
            if (+data.code === 0) {
                upload_progress_value.style.width = '100%';
                alert(`Congratulations, the file has been uploaded successfully. You can access it at ${data.servicePath}~~`);
                return;
            }
            throw data.codeText;
        } catch (err) {
            alert('Unfortunately, the file upload failed. Please try again later~~');
        } finally {
            upload_button_select.classList.remove('loading');
            upload_progress.style.display = 'none';
            upload_progress_value.style.width = '0%';
        }
    });

    upload_button_select.addEventListener('click', function () {
        if (checkIsDisable(this)) return;
        upload_inp.click();
    });
})();
```

Large file upload

Large files are usually uploaded in slices (chunks), which speeds the upload up: the front end slices the file stream and transfers the pieces to the back end, usually combined with resumable (breakpoint) upload.
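The slicing strategy used in the next section ("fixed size, capped at a fixed count") is easiest to reason about as a pure function. A sketch under the same constants as the article (100 KB slices, at most 100 of them); extracting it into a function is my own refactor for testability, not part of the original code:

```javascript
// Plan the slicing: target 100 KB per slice, but never more than
// 100 slices; for big files the slice size grows instead.
function chunkPlan(fileSize, maxSize = 1024 * 100, maxCount = 100) {
  let max = maxSize;
  let count = Math.ceil(fileSize / max);
  if (count > maxCount) {
    max = fileSize / maxCount;
    count = maxCount;
  }
  return { max, count };
}
```

With `{ max, count } = chunkPlan(file.size)`, slice `i` covers the byte range `[i * max, (i + 1) * max)` via `file.slice(...)`.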
At this point the back end generally provides three interfaces: the first returns which slices have already been uploaded, the second receives each slice from the front end, and the third tells the back end to merge the slices once they have all arrived.

Slicing can be done by fixed count or by fixed size; here we combine the two:

```js
// Slice the file: "fixed size, capped at a fixed count"
let max = 1024 * 100,
    count = Math.ceil(file.size / max),
    index = 0,
    chunks = [];
if (count > 100) {
    max = file.size / 100;
    count = 100;
}
while (index < count) {
    chunks.push({
        // File inherits Blob.prototype.slice, which returns a byte range
        file: file.slice(index * max, (index + 1) * max),
        filename: `${HASH}_${index + 1}.${suffix}`
    });
    index++;
}
```

Sending the slices to the server (`complete` and `clear` are defined in the full implementation below):

```js
chunks.forEach(chunk => {
    let fm = new FormData();
    fm.append('file', chunk.file);
    fm.append('filename', chunk.filename);
    instance.post('/upload_chunk', fm).then(data => {
        if (+data.code === 0) {
            complete();
            return;
        }
        return Promise.reject(data.codeText);
    }).catch(() => {
        alert('The current slice upload failed, please try again later~~');
        clear();
    });
});
```

File upload + resume from breakpoint + progress control:

```js
upload_inp.addEventListener('change', async function () {
    let file = upload_inp.files[0];
    if (!file) return;
    upload_button_select.classList.add('loading');
    upload_progress.style.display = 'block';

    // Compute the HASH of the file
    let already = [],
        data = null,
        { HASH, suffix } = await changeBuffer(file);

    // Ask the server which slices were already uploaded
    try {
        data = await instance.get('/upload_already', {
            params: { HASH }
        });
        if (+data.code === 0) {
            already = data.fileList;
        }
    } catch (err) {}

    // Slice the file: "fixed size, capped at a fixed count"
    let max = 1024 * 100,
        count = Math.ceil(file.size / max),
        index = 0,
        chunks = [];
    if (count > 100) {
        max = file.size / 100;
        count = 100;
    }
    while (index < count) {
        chunks.push({
            file: file.slice(index * max, (index + 1) * max),
            filename: `${HASH}_${index + 1}.${suffix}`
        });
        index++;
    }

    index = 0;
    const clear = () => {
        upload_button_select.classList.remove('loading');
        upload_progress.style.display = 'none';
        upload_progress_value.style.width = '0%';
    };
    // Called once per successfully uploaded (or skipped) slice
    const complete = async () => {
        // Update the progress bar
        index++;
        upload_progress_value.style.width = `${index / count * 100}%`;
        // Once every slice has been handled, ask the server to merge them
        if (index < count) return;
        upload_progress_value.style.width = '100%';
        try {
            data = await instance.post('/upload_merge', {
                HASH,
                count
            }, {
                headers: {
                    'Content-Type': 'application/x-www-form-urlencoded'
                }
            });
            if (+data.code === 0) {
                alert(`Congratulations, the file has been uploaded successfully. You can access it at ${data.servicePath}~~`);
                clear();
                return;
            }
            throw data.codeText;
        } catch (err) {
            alert('Slice merging failed, please try again later~~');
            clear();
        }
    };

    // Upload each slice to the server
    chunks.forEach(chunk => {
        // Skip slices that were already uploaded
        if (already.length > 0 && already.includes(chunk.filename)) {
            complete();
            return;
        }
        let fm = new FormData();
        fm.append('file', chunk.file);
        fm.append('filename', chunk.filename);
        instance.post('/upload_chunk', fm).then(data => {
            if (+data.code === 0) {
                complete();
                return;
            }
            return Promise.reject(data.codeText);
        }).catch(() => {
            alert('The current slice upload failed, please try again later~~');
            clear();
        });
    });
});
```

Server code (large file upload + resume from breakpoint):

```js
// Merge the uploaded slices into the final file
const merge = function merge(HASH, count) {
    return new Promise(async (resolve, reject) => {
        let path = `${uploadDir}/${HASH}`,
            fileList = [],
            suffix,
            isExists;
        isExists = await exists(path);
        if (!isExists) {
            reject('HASH path is not found!');
            return;
        }
        fileList = fs.readdirSync(path);
        if (fileList.length < count) {
            reject('the slices have not all been uploaded!');
            return;
        }
        // Sort the slices by their numeric index before concatenating
        fileList.sort((a, b) => {
            let reg = /_(\d+)/;
            return reg.exec(a)[1] - reg.exec(b)[1];
        }).forEach(item => {
            !suffix ? suffix = /\.([0-9a-zA-Z]+)$/.exec(item)[1] : null;
            fs.appendFileSync(`${uploadDir}/${HASH}.${suffix}`, fs.readFileSync(`${path}/${item}`));
            fs.unlinkSync(`${path}/${item}`);
        });
        fs.rmdirSync(path);
        resolve({
            path: `${uploadDir}/${HASH}.${suffix}`,
            filename: `${HASH}.${suffix}`
        });
    });
};

app.post('/upload_chunk', async (req, res) => {
    try {
        let { fields, files } = await multiparty_upload(req);
        let file = (files.file && files.file[0]) || {},
            filename = (fields.filename && fields.filename[0]) || "",
            path = '',
            isExists = false;
        // Create a temporary directory (named after the HASH) for the slices
        let [, HASH] = /^([^_]+)_(\d+)/.exec(filename);
        path = `${uploadDir}/${HASH}`;
        !fs.existsSync(path) ? fs.mkdirSync(path) : null;
        // Store the slice in the temporary directory
        path = `${uploadDir}/${HASH}/${filename}`;
        isExists = await exists(path);
        if (isExists) {
            res.send({
                code: 0,
                codeText: 'file is exists',
                originalFilename: filename,
                servicePath: path.replace(__dirname, HOSTNAME)
            });
            return;
        }
        writeFile(res, path, file, filename, true);
    } catch (err) {
        res.send({
            code: 1,
            codeText: err
        });
    }
});

app.post('/upload_merge', async (req, res) => {
    let { HASH, count } = req.body;
    try {
        let { filename, path } = await merge(HASH, count);
        res.send({
            code: 0,
            codeText: 'merge success',
            originalFilename: filename,
            servicePath: path.replace(__dirname, HOSTNAME)
        });
    } catch (err) {
        res.send({
            code: 1,
            codeText: err
        });
    }
});

app.get('/upload_already', async (req, res) => {
    let { HASH } = req.query;
    let path = `${uploadDir}/${HASH}`,
        fileList = [];
    try {
        fileList = fs.readdirSync(path);
        fileList = fileList.sort((a, b) => {
            let reg = /_(\d+)/;
            return reg.exec(a)[1] - reg.exec(b)[1];
        });
        res.send({
            code: 0,
            codeText: '',
            fileList: fileList
        });
    } catch (err) {
        // No slices uploaded yet: return an empty list rather than an error
        res.send({
            code: 0,
            codeText: '',
            fileList: fileList
        });
    }
});
```

Summary

That wraps up this article on managing large file uploads and resumable (breakpoint) uploads with JavaScript.
For more on JavaScript large file uploads and resumable uploads, please search the previous articles on 123WORDPRESS.COM. I hope everyone will continue to support 123WORDPRESS.COM!