Nine Vue.js performance optimization tips (worth collecting)

This article is mainly based on a talk given by Vue.js core team member Guillaume Chau at VueConf US 2019, 9 Performance Secrets Revealed, which covered nine Vue.js performance optimization techniques.

After watching his slides, I also read the related project's source code. Once I understood the optimization principles in depth, I applied some of the techniques to my daily work and got quite good results.

The talk is very practical, but not many people seem to know about or pay attention to it; so far the project has only a few hundred stars. Although two years have passed since the talk, the optimization techniques are not outdated. To help more people learn these practical skills, I decided to rework his material, elaborate on the optimization principles, and expand on it in places.

This article is mainly aimed at Vue.js 2.x, which will remain the mainstream version in most of our work for some time.

While studying this article, I suggest pulling the project's source code and running it locally to see the difference before and after each optimization.

Functional components

The first technique is functional components. You can check out this live example.

The component code before optimization is as follows:

<template>
  <div class="cell">
    <div v-if="value" class="on"></div>
    <section v-else class="off"></section>
  </div>
</template>

<script>
export default {
  props: ['value'],
}
</script>

The optimized component code is as follows:

<template functional>
  <div class="cell">
    <div v-if="props.value" class="on"></div>
    <section v-else class="off"></section>
  </div>
</template>

Then the parent component renders 800 of these components before and after optimization, updating them every frame by modifying the data. We open the Chrome Performance panel to record their performance and obtain the following results.

Before optimization:

After optimization:

Comparing these two recordings, we can see that the script execution time before optimization is longer than after. As we know, the JS engine is single-threaded and the JS thread blocks the UI thread, so when script execution takes too long it blocks rendering and the page stutters. The optimized version's script runs for less time, so its performance is better.

So why does the JS execution time become shorter with functional components? It comes down to how they are implemented: you can think of a functional component as a function that renders a piece of DOM based on the context data you pass in.

Functional components differ from ordinary object-type components in that they are not treated as real components. During the patch process, if a node is a component vnode, the initialization of the child component is executed recursively; the render of a functional component, by contrast, produces ordinary vnodes, with no recursive child-component process, so the rendering cost is much lower.

Therefore, functional components have no state, no reactive data, and no lifecycle hooks. You can think of them as stripping part of the DOM out of an ordinary component's template and rendering it through a function; they are a kind of reuse at the DOM level.
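To make the idea concrete, here is a sketch of the same cell as an object-style functional component. The `h` helper below is a hypothetical stand-in for Vue's createElement that just builds plain vnode-like objects, so we can call the render function directly and see that no component instance is ever created:

```javascript
// Hypothetical minimal stand-in for Vue's createElement: it only builds
// a plain vnode-like object so the render function can be called directly.
function h(tag, data, children) {
  return { tag, data, children }
}

// Object-style equivalent of the <template functional> above:
// no state, no lifecycle hooks, just a render function over a context.
const Cell = {
  functional: true,
  props: ['value'],
  render(h, context) {
    const child = context.props.value
      ? h('div', { class: 'on' })
      : h('section', { class: 'off' })
    return h('div', { class: 'cell' }, [child])
  }
}

// Calling render directly produces ordinary vnodes; there is no
// recursive child-component initialization.
const vnode = Cell.render(h, { props: { value: true } })
```

This is only a model of the call shape, not Vue's real internals, but it shows why the cost is low: rendering is a single function call per component.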

Child component splitting

The second technique is child component splitting. You can check out this online example.

The component code before optimization is as follows:

<template>
  <div :style="{ opacity: number / 300 }">
    <div>{{ heavy() }}</div>
  </div>
</template>

<script>
export default {
  props: ['number'],
  methods: {
    heavy () {
      const n = 100000
      let result = 0
      for (let i = 0; i < n; i++) {
        result += Math.sqrt(Math.cos(Math.sin(42)))
      }
      return result
    }
  }
}
</script>

The optimized component code is as follows:

<template>
  <div :style="{ opacity: number / 300 }">
    <ChildComp/>
  </div>
</template>

<script>
export default {
  components: {
    ChildComp: {
      methods: {
        heavy () {
          const n = 100000
          let result = 0
          for (let i = 0; i < n; i++) {
            result += Math.sqrt(Math.cos(Math.sin(42)))
          }
          return result
        },
      },
      render (h) {
        return h('div', this.heavy())
      }
    }
  },
  props: ['number']
}
</script>

Then the parent component renders 300 of these components before and after optimization, updating them every frame by modifying the data. We open the Chrome Performance panel to record their performance and get the following results.

Before optimization:

After optimization:

Comparing these two pictures, we can see that the time to execute script after optimization is significantly less than before optimization, so the performance experience is better.

So why is there a difference? Look at the component before optimization: it simulates a time-consuming task with the heavy function, which executes on every render, so each render of the component spends a long time executing JavaScript.

The optimized approach encapsulates the time-consuming heavy function inside the child component ChildComp. Since Vue updates at component granularity, even though modifying data every frame re-renders the parent component, ChildComp does not re-render because no reactive data inside it has changed. So the optimized component no longer runs the time-consuming task on every render, and the JavaScript execution time naturally drops.

However, I have a different opinion on this optimization; for details, see this issue. I think using a computed property is better than splitting out a child component in this scenario: thanks to computed-property caching, the time-consuming logic executes only on the first render, and there is no extra overhead of rendering a child component.

In actual work, there are many scenarios where computed properties are used to optimize performance; after all, they also embody the optimization idea of trading space for time.
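The caching behavior that makes a computed property attractive here can be modeled in a few lines of plain JS. This is a deliberately simplified sketch, not Vue's actual reactivity implementation: a lazy getter with a dirty flag, where invalidation stands in for a reactive dependency changing:

```javascript
// Model of a computed property's cache: evaluate lazily, remember the
// result, and only re-evaluate after an explicit invalidation (which Vue
// would trigger when a reactive dependency changes).
function createComputed(getter) {
  let dirty = true
  let value
  return {
    get() {
      if (dirty) {      // the expensive getter only runs when dirty
        value = getter()
        dirty = false
      }
      return value
    },
    invalidate() { dirty = true }
  }
}

let runs = 0
const heavy = createComputed(() => {
  runs++
  let result = 0
  for (let i = 0; i < 100000; i++) {
    result += Math.sqrt(Math.cos(Math.sin(42)))
  }
  return result
})

heavy.get()
heavy.get() // cached: the expensive loop does not run again
```

Since nothing the heavy computation depends on ever changes in the example, the loop runs exactly once, however many times the template reads the value.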

Local variables

For the third trick, local variables, you can check out this online example.

The component code before optimization is as follows:

<template>
  <div :style="{ opacity: start / 300 }">{{ result }}</div>
</template>

<script>
export default {
  props: ['start'],
  computed: {
    base () {
      return 42
    },
    result () {
      let result = this.start
      for (let i = 0; i < 1000; i++) {
        result += Math.sqrt(Math.cos(Math.sin(this.base))) + this.base * this.base + this.base + this.base * 2 + this.base * 3
      }
      return result
    },
  },
}
</script>

The optimized component code is as follows:

<template>
  <div :style="{ opacity: start / 300 }">{{ result }}</div>
</template>

<script>
export default {
  props: ['start'],
  computed: {
    base () {
      return 42
    },
    result ({ base, start }) {
      let result = start
      for (let i = 0; i < 1000; i++) {
        result += Math.sqrt(Math.cos(Math.sin(base))) + base * base + base + base * 2 + base * 3
      }
      return result
    },
  },
}
</script>

Then the parent component renders 300 of these components before and after optimization, updating them every frame by modifying the data. We open the Chrome Performance panel to record their performance and get the following results.

Before optimization:

After optimization:

Comparing these two pictures, we can see that the time to execute script after optimization is significantly less than before optimization, so the performance experience is better.

The main difference here lies in how the computed property result is implemented before and after optimization. The component before optimization accesses this.base many times during the computation, while the optimized component caches this.base in the local variable base before computing and then accesses base directly.

So why does this cause a performance difference? Because this.base is reactive, every access triggers its getter, which then runs the dependency-collection logic. If that logic runs too often — as in the example, where hundreds of components update over hundreds of iterations and each recomputation of the computed property re-runs dependency collection many times — performance naturally drops.

From a requirements standpoint, it is enough for this.base to be dependency-collected once, so we simply store the result of its getter in the local variable base. Later accesses to base do not trigger the getter or the dependency-collection logic, so performance naturally improves.

This is a very practical optimization technique, because many Vue.js developers habitually write this.xxx to read variables without paying attention to what happens behind the scenes on each access. When accesses are few, the cost is negligible; but once they multiply — for example many accesses inside a large loop, as in this example — performance problems appear.
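A toy model makes the cost visible. The defineReactive below is a deliberately simplified stand-in for Vue 2's real one — it only counts getter hits instead of running actual dependency collection — but it shows how many times the getter fires with and without a local variable:

```javascript
// Simplified stand-in for Vue 2's defineReactive: every property read
// goes through a getter that would normally run dependency collection.
let getterHits = 0
function defineReactive(obj, key, val) {
  Object.defineProperty(obj, key, {
    get() {
      getterHits++      // dependency collection would happen here
      return val
    },
    set(newVal) { val = newVal }
  })
}

const vm = {}
defineReactive(vm, 'base', 42)

// Without a local variable: the getter fires on every loop iteration.
let r1 = 0
for (let i = 0; i < 1000; i++) r1 += vm.base
const hitsWithout = getterHits

// With a local variable: the getter fires exactly once.
getterHits = 0
const base = vm.base
let r2 = 0
for (let i = 0; i < 1000; i++) r2 += base
const hitsWith = getterHits
```

Both loops compute the same result, but the second pays the getter cost once instead of a thousand times.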

When I optimized the performance of ZoomUI's Table component, I used this local-variable technique when rendering the table body, and wrote a benchmark for comparison: when re-rendering a 1000 × 10 table after a data update, ZoomUI's Table was nearly twice as fast as ElementUI's.

Reuse DOM with v-show

The fourth tip is to reuse DOM with v-show. You can check out this online example.

The component code before optimization is as follows:

<template functional>
  <div class="cell">
    <div v-if="props.value" class="on">
      <Heavy :n="10000"/>
    </div>
    <section v-else class="off">
      <Heavy :n="10000"/>
    </section>
  </div>
</template>

The optimized component code is as follows:

<template functional>
  <div class="cell">
    <div v-show="props.value" class="on">
      <Heavy :n="10000"/>
    </div>
    <section v-show="!props.value" class="off">
      <Heavy :n="10000"/>
    </section>
  </div>
</template>

Then the parent component renders 200 of these components before and after optimization, updating them every frame by modifying the data. We open the Chrome Performance panel to record their performance and get the following results.

Before optimization:

After optimization:

Comparing these two pictures, we can see that the time to execute script after optimization is significantly less than before optimization, so the performance experience is better.

The main difference before and after optimization is that the v-show directive replaces v-if for toggling component visibility. Although v-show and v-if both control visibility and perform similarly in many cases, their internal implementations differ greatly.

The v-if directive is compiled into a ternary expression during the compile phase for conditional rendering. For example, the pre-optimization component template compiles to the following render function:

function render() {
  with(this) {
    return _c('div', {
      staticClass: "cell"
    }, [(props.value) ? _c('div', {
      staticClass: "on"
    }, [_c('Heavy', {
      attrs: {
        "n": 10000
      }
    })], 1) : _c('section', {
      staticClass: "off"
    }, [_c('Heavy', {
      attrs: {
        "n": 10000
      }
    })], 1)])
  }
}

When the condition props.value changes, the corresponding component update is triggered. For nodes rendered with v-if, the old and new vnodes are different, so during the core diff the old vnode is removed and a new one is created. A new Heavy component is then created and goes through its own initialization, render, patch, and so on.

Therefore, using v-if will create a new Heavy subcomponent every time a component is updated. When more components are updated, it will naturally cause performance pressure.

When we use the v-show directive, the optimized component template is compiled to generate the following rendering function:

function render() {
  with(this) {
    return _c('div', {
      staticClass: "cell"
    }, [_c('div', {
      directives: [{
        name: "show",
        rawName: "v-show",
        value: (props.value),
        expression: "props.value"
      }],
      staticClass: "on"
    }, [_c('Heavy', {
      attrs: {
        "n": 10000
      }
    })], 1), _c('section', {
      directives: [{
        name: "show",
        rawName: "v-show",
        value: (!props.value),
        expression: "!props.value"
      }],
      staticClass: "off"
    }, [_c('Heavy', {
      attrs: {
        "n": 10000
      }
    })], 1)])
  }
}

When the condition props.value changes, the corresponding component update is triggered. For nodes rendered with v-show, the old and new vnodes are the same, so the diff just runs patchVnode on them. How, then, does it show and hide the DOM node?

It turns out that during the patchVnode process, the update hook of the v-show directive runs, and it sets the style.display of the element it is bound to according to the directive's value, controlling visibility.

Therefore, compared to v-if which constantly deletes and creates new DOM, v-show only updates the visibility of the existing DOM. Therefore, the overhead of v-show is much smaller than that of v-if . The more complex the internal DOM structure, the greater the performance difference.
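The essence of that update hook can be sketched in a few lines. This is a simplified model using a plain object in place of a real DOM element; Vue's actual directive also remembers and restores the element's original display value:

```javascript
// Simplified model of v-show's behavior: toggle style.display instead of
// destroying and re-creating the element (which is what v-if does).
function vShowUpdate(el, value) {
  // Vue's real directive restores the element's original display value;
  // here we simply toggle between '' and 'none' for illustration.
  el.style.display = value ? '' : 'none'
}

// Plain-object stand-in for a DOM element.
const el = { style: { display: '' } }

vShowUpdate(el, false) // hide: the element (and Heavy inside) stays alive
vShowUpdate(el, true)  // show again: no re-creation, no re-initialization
```

The Heavy subtree behind the hidden branch is never torn down, which is exactly where the update-phase savings come from.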

However, v-show's performance advantage over v-if applies to the update phase. In the initialization phase, v-if performs better, because it renders only one branch, while v-show renders both branches and controls the corresponding DOM's visibility through style.display.

When using v-show, all components inside both branches are rendered and their lifecycle hooks execute. With v-if, components inside the branch that is not hit are not rendered and their lifecycle hooks do not execute.

Therefore, you need to understand their principles and differences so that you can use the appropriate directive in each scenario.

KeepAlive

The fifth tip is to use the KeepAlive component to cache the DOM. You can check out this online example.

The component code before optimization is as follows:

<template>
  <div id="app">
    <router-view/>
  </div>
</template>

The optimized component code is as follows:

<template>
  <div id="app">
    <keep-alive>
      <router-view/>
    </keep-alive>
  </div>
</template>

When we click the button to switch between the Simple page and the Heavy page, different views are rendered, and rendering the Heavy page is very time-consuming. We open Chrome's Performance panel, record while performing the switch before and after optimization, and get the following results.

Before optimization:

After optimization:

Comparing these two pictures, we can see that the time to execute script after optimization is significantly less than before optimization, so the performance experience is better.

In the unoptimized scenario, every time we click the button to switch the route view, the component is re-rendered, going through component initialization, render, patch, and so on. If the component is complex or deeply nested, the whole render takes a long time.

After using KeepAlive , vnode and DOM of the component wrapped by KeepAlive will be cached after the first rendering. Then, when the component is rendered again next time, the corresponding vnode and DOM will be directly obtained from the cache and then rendered. There is no need to go through a series of processes such as component initialization, render and patch again, which reduces the execution time of script and improves the performance.

However, using the KeepAlive component is not without cost, because it will take up more memory for caching, which is a typical application of the space-for-time optimization idea.
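That space-for-time trade-off can be modeled as a small keyed cache. This is an illustrative model only, not Vue's implementation; the max-entries pruning mirrors keep-alive's max prop, which bounds memory by evicting the least recently used entry:

```javascript
// Toy model of keep-alive's cache: vnodes are stored by key and reused,
// trading memory for render time. The `max` limit mirrors keep-alive's
// `max` prop, pruning the least recently used entry to bound memory.
class KeepAliveCache {
  constructor(max = Infinity) {
    this.max = max
    this.cache = new Map()
  }
  get(key, create) {
    if (this.cache.has(key)) {
      const vnode = this.cache.get(key)
      this.cache.delete(key)    // delete + re-set refreshes recency,
      this.cache.set(key, vnode) // moving the key to the Map's end
      return vnode
    }
    const vnode = create() // expensive: init + render + patch
    this.cache.set(key, vnode)
    if (this.cache.size > this.max) {
      // evict the least recently used entry (the Map's first key)
      this.cache.delete(this.cache.keys().next().value)
    }
    return vnode
  }
}

let renders = 0
const cache = new KeepAliveCache(2)
const make = name => () => { renders++; return { name } }

cache.get('HeavyPage', make('HeavyPage'))
cache.get('SimplePage', make('SimplePage'))
cache.get('HeavyPage', make('HeavyPage')) // cache hit: no re-render
```

With a max in place, memory stays bounded at the cost of occasionally re-rendering an evicted view, which is the same knob keep-alive exposes.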

Deferred features

The sixth tip is to use a Deferred component to render components progressively in batches. You can check out this online example.

The component code before optimization is as follows:

<template>
  <div class="deferred-off">
    <VueIcon icon="fitness_center" class="gigantic"/>

    <h2>I'm an heavy page</h2>

    <Heavy v-for="n in 8" :key="n"/>

    <Heavy class="super-heavy" :n="9999999"/>
  </div>
</template>

The optimized component code is as follows:

<template>
  <div class="deferred-on">
    <VueIcon icon="fitness_center" class="gigantic"/>

    <h2>I'm an heavy page</h2>

    <template v-if="defer(2)">
      <Heavy v-for="n in 8" :key="n"/>
    </template>

    <Heavy v-if="defer(3)" class="super-heavy" :n="9999999"/>
  </div>
</template>

<script>
import Defer from '@/mixins/Defer'

export default {
  mixins: [
    Defer(),
  ],
}
</script>

When we click the button to switch between the Simple page and the Heavy page, different views are rendered, and rendering the Heavy page is very time-consuming. We open Chrome's Performance panel, record while performing the switch before and after optimization, and get the following results.

Before optimization:

After optimization:

Comparing these two recordings, we find that before optimization, when switching from the Simple page to the Heavy page, the page still shows the Simple page near the end of one long render, which feels laggy. After optimization, the Heavy page begins to appear early in the first render and is then rendered progressively.

The difference between before and after optimization is mainly that the latter uses the Defer mixin . Let’s take a look at how it works:

export default function (count = 10) {
  return {
    data () {
      return {
        displayPriority: 0
      }
    },

    mounted () {
      this.runDisplayPriority()
    },

    methods: {
      runDisplayPriority() {
        const step = () => {
          requestAnimationFrame(() => {
            this.displayPriority++
            if (this.displayPriority < count) {
              step()
            }
          })
        }
        step()
      },

      defer (priority) {
        return this.displayPriority >= priority
      }
    }
  }
}

The main idea of Defer is to split the rendering of a component into multiple times. It maintains the displayPriority variable internally, and then increments it at each frame rendering through requestAnimationFrame , up to count . Then, inside the component using Defer mixin , you can use v-if="defer(xxx)" to control the rendering of certain blocks when displayPriority increases to xxx .

When you have components that are expensive to render, it is a good idea to use Deferred for progressive rendering; it avoids the render freeze caused by long-running JS.
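The gating logic of the mixin can be exercised outside the browser by replacing the requestAnimationFrame-driven increment with a plain loop. This is a simulation only — in the real mixin each increment happens on a browser frame:

```javascript
// Same priority-gating logic as the Defer mixin, with the
// requestAnimationFrame-driven increment replaced by a synchronous
// step() so the behavior can be traced frame by frame.
function createDefer(count = 10) {
  const state = { displayPriority: 0 }
  return {
    state,
    step() { // one call per simulated frame
      if (state.displayPriority < count) state.displayPriority++
    },
    defer(priority) {
      return state.displayPriority >= priority
    }
  }
}

const d = createDefer(10)
const renderedAtStep = []
for (let frame = 0; frame < 4; frame++) {
  // v-if="defer(2)" keeps the block unrendered until priority reaches 2
  renderedAtStep.push(d.defer(2))
  d.step()
}
// the defer(2) block only starts rendering from the third frame on
```

Blocks guarded by higher priorities come in on later frames, which is what spreads the rendering cost across multiple frames instead of one.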

Time slicing

The seventh tip is to use time slicing. You can check out this online example.

The code before optimization is as follows:

fetchItems ({ commit }, { items }) {
  commit('clearItems')
  commit('addItems', items)
}

The optimized code is as follows:

async fetchItems ({ commit }, { items, splitCount }) {
  commit('clearItems')
  const queue = new JobQueue()
  splitArray(items, splitCount).forEach(
    chunk => queue.addJob(done => {
      // Submit data in time slices
      requestAnimationFrame(() => {
        commit('addItems', chunk)
        done()
      })
    })
  )
  await queue.start()
}

We first create 10,000 fake data items by clicking the Generate items button, then click the Commit items button to submit the data with Time-slicing turned on and off respectively. We open the Chrome Performance panel to record their performance and get the following results.

Before optimization:

After optimization:

Comparing these two recordings, we find that the total script execution time before optimization is actually less than after. However, in actual visual experience, clicking the commit button before optimization freezes the page for about 1.2 seconds; after optimization the page no longer freezes completely, though some rendering jank remains.

So why does the page freeze before optimization? Because too much data was submitted at one time, the internal JS execution time was too long, blocking the UI thread and causing the page to freeze.

After optimization, the page still stutters a bit because we split the data at a granularity of 1,000 items, which still puts re-rendering pressure on the components; we observed an fps only in the teens, causing some jank. Usually a page feels smooth once fps reaches 60. If we instead split the data into chunks of 100 items, the fps can basically stay above 50 and rendering becomes smoother, but the total time to commit all 10,000 items grows.

Time slicing technology can be used to avoid page freezes. Usually, we will add a loading effect when processing such time-consuming tasks. In this example, we can turn on loading animation and then submit the data. By comparison, we found that before optimization, due to too much data being submitted at one time, JS had been running for a long time, blocking the UI thread, and the loading animation would not be displayed. After optimization, because we split the data into multiple time slices, the single JS running time became shorter, so loading animation had a chance to be displayed.

One thing to note: although we use the requestAnimationFrame API to split time slices, requestAnimationFrame itself cannot guarantee a full frame rate. It only guarantees that the callback runs before the browser's next repaint; to hold full frames, the only way is to keep the JS in each tick under about 17 ms.

Non-reactive data

The eighth tip is to use Non-reactive data . You can check out this online example.

The code before optimization is as follows:

const data = items.map(
  item => ({
    id: uid++,
    data: item,
    vote: 0
  })
)

The optimized code is as follows:

const data = items.map(
  item => optimizeItem(item)
)

function optimizeItem (item) {
  const itemData = {
    id: uid++,
    vote: 0
  }
  Object.defineProperty(itemData, 'data', {
    // Mark as non-reactive
    configurable: false,
    value: item
  })
  return itemData
}

Still using the previous example, we first create 10,000 fake data items by clicking the Generate items button, then click the Commit items button to submit the data with Partial reactivity turned on and off respectively. We open Chrome's Performance panel to record their performance and get the following results.

Before optimization:

After optimization:

Comparing these two pictures, we can see that the time to execute script after optimization is significantly less than before optimization, so the performance experience is better.

The reason for the difference is that newly committed data is defined as reactive by default; if an item's sub-properties are objects, they are recursively made reactive as well. So when a lot of data is committed, this process becomes time-consuming.

After the optimization, we define the data property of each newly committed item via Object.defineProperty with configurable: false (and, since enumerable is not specified, it defaults to false). As a result, Object.keys(obj) during Vue's walk does not include data — and even if it did, defineReactive skips properties whose descriptor has configurable: false — so no reactive getter/setter is defined for data. Since data points to an object, this also avoids the recursive reactivity pass over it, cutting that part of the cost. The larger the data set, the more obvious the effect of this optimization.
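A simplified model of that guard (mirroring the checks around Vue 2's walk/defineReactive, not the full observer) shows both mechanisms at work:

```javascript
// Simplified model of Vue 2's walk/defineReactive: non-enumerable
// properties never appear in Object.keys, and properties with
// configurable: false are skipped, so no reactive getter/setter (and no
// recursive observation of their value) is created for them.
function walk(obj, reactiveKeys) {
  for (const key of Object.keys(obj)) {
    const property = Object.getOwnPropertyDescriptor(obj, key)
    if (property && property.configurable === false) continue // skip
    reactiveKeys.push(key) // defineReactive(obj, key) would run here
  }
}

let uid = 0
function optimizeItem(item) {
  const itemData = { id: uid++, vote: 0 }
  Object.defineProperty(itemData, 'data', {
    // Mark as non-reactive; enumerable also defaults to false here.
    configurable: false,
    value: item
  })
  return itemData
}

const reactiveKeys = []
walk(optimizeItem({ big: 'payload' }), reactiveKeys)
// only id and vote would be made reactive; data is skipped entirely
```

The payload under data stays a plain object, so the recursive observation cost for it disappears.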

In fact, there are many similar optimizations. For example, not all data defined in a component has to live in data: some data is never used in the template and we don't need to observe its changes; we just want to share it within the component's context. In that case, we can simply attach it to the component instance this, for example:

export default {
  created() {
    this.scroll = null
  },
  mounted() {
    this.scroll = new BScroll(this.$el)
  }
}

This way we can share the scroll object in the component context even though it is not a reactive object.

Virtual scrolling

The ninth tip is to use Virtual scrolling . You can check out this online example.

The code of the component before optimization is as follows:

<div class="items no-v">
  <FetchItemViewFunctional
    v-for="item of items"
    :key="item.id"
    :item="item"
    @vote="voteItem(item)"
  />
</div>

The optimized code is as follows:

<recycle-scroller
  class="items"
  :items="items"
  :item-size="24"
>
  <template v-slot="{ item }">
    <FetchItemView
      :item="item"
      @vote="voteItem(item)"
    />
  </template>
</recycle-scroller>

Still using the previous example, we open the View list, click the Generate items button to create 10,000 fake data items (note that the online example caps creation at 1,000 items, which does not show the optimization well, so I removed the limit in the source code, ran it locally, and created 10,000 items), then click the Commit items button in the Unoptimized and RecycleScroller cases, scroll the page, and record their performance in Chrome's Performance panel. You will get the following results.

Before optimization:

After optimization:

Comparing these two recordings, we find that without optimization, the fps with 10,000 items is in the single digits while scrolling and only in the teens when idle, because too many DOM nodes are rendered and rendering itself is under great pressure. After optimization, even with 10,000 items, fps reaches over 30 while scrolling and a full 60 frames when not scrolling.

The reason for this difference is the way virtual scrolling is implemented: it only renders the DOM within the viewport. In this way, the total number of DOMs rendered will be very small, and the performance will naturally be much better.

The virtual scrolling component was also written by Guillaume Chau; those interested can study its source code. Its basic principle is to listen to scroll events, dynamically calculate which items should be displayed, and update the DOM and its offset within the view accordingly.
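For fixed-height items, the core calculation can be sketched like this. It is a simplified model for equal item-size lists; vue-virtual-scroller also handles variable sizes, buffering, and DOM recycling:

```javascript
// For a fixed item height, compute which slice of the list is visible
// and how far to offset it, so only those items need real DOM nodes.
function visibleRange(scrollTop, viewportHeight, itemSize, total) {
  const start = Math.max(0, Math.floor(scrollTop / itemSize))
  const end = Math.min(total, Math.ceil((scrollTop + viewportHeight) / itemSize))
  return {
    start,
    end,                          // render items[start..end)
    offsetY: start * itemSize,    // translateY applied to the slice
    totalHeight: total * itemSize // keeps the scrollbar proportional
  }
}

// 10,000 items of 24px in a 600px viewport, scrolled to 2400px:
const range = visibleRange(2400, 600, 24, 10000)
// only items 100..125 get DOM nodes instead of all 10,000
```

This is also where the scroll-time script cost mentioned below comes from: this calculation (plus patching the slice) runs on every scroll event.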

The virtual scrolling component is not without cost, because it needs to be calculated in real time during the scrolling process, so there will be a certain script execution cost. Therefore, if the amount of data in the list is not very large, it is sufficient to use normal scrolling.

Summarize

Through this article, I hope you have learned nine performance optimization techniques for Vue.js and can apply them to real projects. Besides the above, there are other common optimizations such as lazy-loading images, lazy-loading components, and async components.

Before optimizing performance, we need to analyze where the performance bottleneck is so that we can take appropriate measures. In addition, performance optimization requires data support. Before you do any performance optimization, you need to collect data before optimization so that you can see the optimization effect through data comparison after optimization.

I hope that in future development, you will no longer be satisfied with just meeting requirements, but will think about the possible performance impact of each line of code when writing it.

References

[1] vue-9-perf-secrets slide: https://slides.com/akryum/vueconfus-2019

[2] vue-9-perf-secrets shared speech video: https://www.vuemastery.com/conference/vueconf-us-2019/9-performance-secrets-revealed/

[3] vue-9-perf-secrets project source code: https://github.com/Akryum/vue-9-perf-secrets

[4] vue-9-perf-secrets online demo address: https://vue-9-perf-secrets.netlify.app/

[5] vue-9-perf-secrets discussion issue: https://github.com/Akryum/vue-9-perf-secrets/issues/1

[6] vue-virtual-scroller project source code: https://github.com/Akryum/vue-virtual-scroller
