How to write asynchronous tasks in modern JavaScript

Preface

In this article, we'll explore the evolution of asynchronous JavaScript and how it changes the way we write code. We'll start from the earliest days of web development and work our way up to modern asynchronous patterns.

As a programming language, JavaScript has two main characteristics that are important for understanding how our code works. The first is its synchronous nature: code runs line by line. The second is that it is single-threaded: only one command executes at any given time.

As the language evolved, new artifacts enabling asynchronous execution appeared on the scene. Developers tried different approaches when solving more complex algorithms and data flows, which led to new interfaces and patterns.

Synchronous Execution and the Observer Pattern

As mentioned in the introduction, JavaScript usually runs the code you write line by line. Even in its first few years, the language had exceptions to this rule, although few, and you probably already know them: HTTP requests, DOM events, and time intervals.

If we respond to a user click on an element by adding an event listener, whatever the language interpreter was running will stop and then run the code written in the listener callback before returning to normal flow.

addEventListener, setTimeout, and XMLHttpRequest were the first artifacts through which web developers had access to asynchronous execution.

While these are exceptions to synchronous execution in JavaScript, it's important to understand that the language is still single-threaded. We can break this synchronization, but the interpreter will still run the code one line at a time.
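A minimal sketch of this behavior: even with a zero-millisecond delay, a setTimeout callback runs only after all of the currently executing synchronous code has finished.

```javascript
const order = [];

order.push('sync 1');

// even with a 0 ms delay, the callback is queued and only runs
// once the currently executing synchronous code has finished
setTimeout(() => order.push('timeout callback'), 0);

order.push('sync 2');

console.log(order); // at this point: ['sync 1', 'sync 2']
```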

For example, inspecting a network request.

var request = new XMLHttpRequest();
request.open('GET', '//some.api.at/server', true);

// observe for server response
request.onreadystatechange = function() {
  if (request.readyState === 4 && request.status === 200) {
    console.log(request.responseText);
  }
}

request.send();

Regardless of what else is happening, when the server comes back with a response, the method assigned to onreadystatechange is called before the program returns to its normal code sequence.

A similar situation occurs when reacting to user interactions.

const button = document.querySelector('button');

// observe for user interaction
button.addEventListener('click', function(e) {
 console.log('user click just happened!');
})

You may notice that we are observing an external event and passing a callback telling our code what to do when the event occurs. More than a decade ago, "What is a callback?" was a very common interview question because this pattern was everywhere in most codebases.

In each of these cases we are responding to an external event: a time interval elapsing, a user action, or a server response. We weren't able to create an asynchronous task per se; we always observed events happening outside of our scope.

That's why this style of coding is called the observer pattern, which in this case is best represented by the addEventListener interface. Soon, event emitter libraries and frameworks exposing this pattern began to flourish.
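The pattern itself is easy to sketch by hand. Here is a hypothetical minimal subject that keeps a list of callbacks and invokes them all when notified (the createSubject helper is illustrative, not a standard API):

```javascript
// a hypothetical minimal observer: subscribers register callbacks,
// and all of them run when the subject notifies
function createSubject() {
  const listeners = [];
  return {
    subscribe(callback) { listeners.push(callback); },
    notify(value) { listeners.forEach(cb => cb(value)); }
  };
}

const clicks = createSubject();
clicks.subscribe(value => console.log('observed:', value));
clicks.notify('first event'); // observed: first event
```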

Node.js and event emitters

Node.js is a good example. Its official website describes it as "an asynchronous event-driven JavaScript runtime", so event emitters and callbacks are first-class citizens. It even has an EventEmitter constructor already implemented.

const EventEmitter = require('events');
const emitter = new EventEmitter();

// respond to events
emitter.on('greeting', (message) => console.log(message));

// send events
emitter.emit('greeting', 'Hi there!');

This is not only a common approach to asynchronous execution but a core pattern and convention of its ecosystem. Node.js opened a new era of writing JavaScript in different environments, even outside the web. Asynchronous situations are naturally common there too, such as creating a new directory and writing a file.

const { mkdir, writeFile } = require('fs');

const styles = 'body { background: #ffdead; }';

mkdir('./assets/', (error) => {
  if (!error) {
    writeFile('assets/main.css', styles, 'utf-8', (error) => {
      if (!error) console.log('stylesheet created')
    })
  }
})

You may notice that the callback receives a possible error as its first argument; if response data is expected, it comes as the second argument. This is called the error-first callback pattern, and it became a convention that authors and contributors adopted for their packages and libraries.
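The convention is easy to follow in our own code. As a sketch, a hypothetical divide function that reports failure through the first callback argument:

```javascript
// a hypothetical error-first function of our own:
// the callback always receives (error, result)
function divide(a, b, callback) {
  if (b === 0) {
    // on failure, pass an Error as the first argument
    callback(new Error('division by zero'));
  } else {
    // on success, the first argument is null
    callback(null, a / b);
  }
}

divide(10, 2, (error, result) => {
  if (error) console.error(error.message);
  else console.log('result:', result); // result: 5
});
```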

Promises and endless callback chains

As Web development faces more complex problems, the need for better asynchronous artifacts emerges. If we look at the last code snippet, we can see a repeated callback chain that does not scale well as the number of tasks increases.

For example, suppose we add just two steps: file reading and style preprocessing.

const { mkdir, writeFile, readFile } = require('fs');
const less = require('less')

readFile('./main.less', 'utf-8', (error, data) => {
  if (error) throw error
  less.render(data, (lessError, output) => {
    if (lessError) throw lessError
    mkdir('./assets/', (dirError) => {
      if (dirError) throw dirError
      writeFile('assets/main.css', output.css, 'utf-8', (writeError) => {
        if (writeError) throw writeError
        console.log('stylesheet created')
      })
    })
  })
})

We can see that writing programs becomes increasingly complex and the code becomes more difficult to understand due to multiple callback chains and repeated error handling.

Promises, Wrappers, and Chain Patterns

When Promises were first announced as a new addition to the JavaScript language, they didn't attract much attention. They are not a new concept; other languages have had similar implementations for decades. In fact, they have changed the semantics and structure of most of the projects I've worked on since they appeared.

Not only did Promises introduce a built-in solution for developers to write asynchronous code, but they also ushered in a new phase of web development, becoming the foundation upon which later new features of the web specification, such as fetch, were built.

Migrating from the callback approach to a Promise-based one became more and more common in projects such as libraries and browsers; even Node.js slowly began migrating toward it.

For example, wrapping Node's readFile method:

const { readFile } = require('fs');

const asyncReadFile = (path, options) => {
 return new Promise((resolve, reject) => {
  readFile(path, options, (error, data) => {
   if (error) reject(error);
   else resolve(data);
  })
 });
}

Here, we hide the callback by executing it inside a Promise constructor, calling resolve when a result is available and reject when an error occurs.

When a method returns a Promise object, we can follow its successful resolution by passing a function to then whose argument is the value of the Promise being resolved, in this case data.

If an error is thrown during the method, the catch function (if it exists) will be called.

Note: If you need a deeper understanding of how Promises work, I recommend reading Jake Archibald's article "JavaScript Promises: An Introduction" on Google's web development blog.

Now we can use these new methods and avoid callback chains.

asyncReadFile('./main.less', 'utf-8')
  .then(data => console.log('file content', data))
  .catch(error => console.error('something went wrong', error))

With native methods to create asynchronous tasks and a clear interface to follow their possible results, the language allowed us to move away from the observer pattern. Promise-based code seemed to solve the unreadable and error-prone code of callback chains.

With better syntax highlighting and clearer error messages aiding the coding process, code became more predictable and easier to reason about, which made it simpler to spot possible pitfalls.

The adoption of Promises was so widespread in the community that Node.js quickly released built-in versions of its I/O methods to return Promise objects, such as importing file operations from fs.promises.

It even provides a promisify tool to wrap functions that follow the error-first callback pattern and convert them to Promise-based functions.

But do Promises help in all cases?

Let's reevaluate our style preprocessing task written using Promises.

const { mkdir, writeFile, readFile } = require('fs').promises;
const less = require('less')

readFile('./main.less', 'utf-8')
  .then(less.render)
  .then(result =>
    mkdir('./assets')
      .then(() => writeFile('assets/main.css', result.css, 'utf-8'))
  )
  .catch(error => console.error(error))

There is clearly less verbosity, especially around error handling since we now rely on catch, but Promises somewhat fail to provide code indentation that directly relates to the concatenation of actions.

This is best seen in the first then statement after the call to readFile. After these lines, a new scope has to be created in which we first make the directory and then write the result to a file. This causes a break in the indentation rhythm, making it harder to determine the sequence of instructions at first glance.

NOTE: This is a sample program in which we control all the methods and they follow industry practice, but that's not always the case. More complex concatenations or the introduction of differently designed libraries can easily break our coding style.

Happily, the JavaScript community once again learned from the syntax of other languages and added a notation that helps chain asynchronous tasks together in a way that reads, in most cases, like synchronous code.

Async and Await

A Promise represents a value that is unresolved at execution time, and creating a Promise instance is an explicit call to this artifact. An async function, on the other hand, wraps its result in a Promise implicitly.
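Declaring a function as async implicitly wraps whatever it returns in a Promise; a minimal sketch:

```javascript
// an async function always returns a Promise, even for plain values
async function greet() {
  return 'Hi there!';
}

const result = greet();
console.log(result instanceof Promise); // true
result.then(message => console.log(message)); // Hi there!
```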


Inside an asynchronous method, we can use the await reserved word to determine the resolution of the Promise before continuing execution.

Let's rewrite the code snippet using this syntax.

const { mkdir, writeFile, readFile } = require('fs').promises;
const less = require('less')

async function processLess() {
  const content = await readFile('./main.less', 'utf-8')
  const result = await less.render(content)
  await mkdir('./assets')
  await writeFile('assets/main.css', result.css, 'utf-8')
}

processLess()

Note: Notice that we need to move all of our code into a method because we can’t use await outside of the scope of an async function.

Whenever an async function reaches an await statement, it pauses its execution until the awaited Promise settles.

Despite being executed asynchronously, expressing it with async/await makes the code look as if it is synchronous, which is something that is easy for developers to read and understand.
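This pausing is worth keeping in mind: awaiting tasks one by one serializes them, while independent tasks can be started together and awaited as a group with Promise.all. A sketch using a hypothetical delay helper:

```javascript
// hypothetical helper: resolve with a value after ms milliseconds
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

async function sequential() {
  // each await pauses until the previous promise settles: ~200 ms total
  const a = await delay(100, 'a');
  const b = await delay(100, 'b');
  return [a, b];
}

async function concurrent() {
  // both timers start immediately, so this takes ~100 ms total
  const [a, b] = await Promise.all([delay(100, 'a'), delay(100, 'b')]);
  return [a, b];
}

concurrent().then(values => console.log(values)); // ['a', 'b']
```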

What about error handling? We can use try and catch, which have been in the language for a long time. One caveat: since an async function returns a Promise, a rejected call won't be caught by a synchronous try block wrapped around it, so the try/catch belongs inside the async function (or the call itself must be awaited).

const { mkdir, writeFile, readFile } = require('fs').promises;
const less = require('less')

async function processLess() {
  try {
    const content = await readFile('./main.less', 'utf-8')
    const result = await less.render(content)
    await mkdir('./assets')
    await writeFile('assets/main.css', result.css, 'utf-8')
  } catch (e) {
    console.error(e)
  }
}

processLess()

We can rest assured that any error thrown during the procedure will be handled by the code in the catch statement, and we now have readable and predictable code.

Return values that later steps don't need, such as the one from mkdir, no longer disrupt the rhythm of the code, and there is no need to create a new scope to access the value of result in a later step.

It is safe to say that Promises are a fundamental artifact introduced in the language and are necessary to enable the async/await notation in JavaScript, which you can use in modern browsers and the latest versions of Node.js.

NOTE: At a recent JSConf, Ryan Dahl, creator and first contributor of Node, expressed regret for not sticking with Promises during Node's early development, mostly because Node's goal was to create event-driven servers and file managers, for which the observer pattern was better suited.

Conclusion

The introduction of Promises into web development changed the way we sequence operations in our code, altering both how we reason about code and how we write libraries and packages.

But moving away from callback chains is hard to solve, and I think having to pass methods to then didn't help us break away, after years of being accustomed to the observer pattern and the approaches adopted by ecosystems like Node.js.

As Nolan Lawson explains in his excellent article "We Have a Problem with Promises", old callback habits die hard! In it he explains how to avoid many of these pitfalls.

I think Promises were an intermediate step: they allowed a natural way to generate asynchronous tasks, but they didn't help us move further toward better code patterns; at some point you need an improved language syntax.

As we try to solve more complex problems with JavaScript, we see the need for a more mature language, and we experiment with architectures and patterns we hadn't seen on the web before.

We still don't know what the ECMAScript specification will look like in a few years, as we keep expanding JavaScript governance beyond the web and trying to solve more complex puzzles.

It's hard to say right now what we need from the language to truly transform these difficult problems into simpler programs, but I'm happy with how the Web and JavaScript itself are pushing the technology forward, trying to adapt to challenges and new environments. I feel like JavaScript is much more “async-friendly” now than it was a decade ago when I first started writing code in the browser.

Original article: https://www.smashingmagazine.com/2019/10/asynchronous-tasks-modern-javascript/
