The Joy of Improving Performance

Some time ago I was part of a project whose goal was to improve the performance of an app running on a very slow device. The app ran smoothly on a standard laptop, where it had been developed. It was not a complex app; it had been written over the previous months, and all the requirements were finally being completed.

However, the app was never meant for the power of a laptop; that was only the development platform. As often happens when developing for mobile, once we tested on the target device, some of the fancy animations and effects suddenly became clunky. This led to a poor user experience and a big drop in overall satisfaction.

The application had not been released yet, and because of the performance issues it was at high risk of being discarded. Rewriting it from scratch was even discussed. On one model of the target device in particular, performance was so poor that during some transitions the application would simply give up and crash outright, which was not acceptable.

The application was mostly JavaScript, built on a popular frontend framework and some of the popular libraries of the time, with heavy use of internal code as well. A decision had to be made: either put the effort into optimizing the existing code to a usable state, or study alternatives for a rewrite.

A rewrite would cost months, probably fewer than the original development, which had included API discovery and consolidation of the app flows. But that was only a rough estimate. It is tempting to underestimate a rewrite and imagine the ideal scenario, and the performance issues would not have been trivial to solve even with new code.

The other option, optimizing the existing code, was also risky. Performance on the worst device model was so bad that it was quite discouraging; it certainly did not look like a simple task. On top of that, some requirements were still being developed.

One of the biggest concerns was allocating limited resources effectively. Finishing the existing app could be a waste of time if it ended up being completely rewritten, apart from the lessons learned.

After some consideration, and an exploration phase over the existing code, we decided to try the optimization route. A complete rewrite was simply too risky, with no stronger guarantee of better performance than sticking with the existing libraries.

It was around that time that I learned about Svelte, which looked like a great prospect as a substitute: it promised much better performance by doing away with the virtual DOM. But since the rewrite had been ruled out, I just kept it in the toolbox for the future.

The next step was to study the device (all its models), its firmware, and the app code, and try to squeeze out performance wherever we could.

The device was closed source, and we could not get much more information. The client knew some of the internals but would not share more than necessary. At least we knew the web engine, a popular one, and its version. That was known beforehand, and it told us which JS and CSS features would be supported.

With that settled, we started to analyze the app code. It had no obvious flaws; it followed the common best practices and patterns described by the library authors.

But this device imposed such hard constraints that the problem space changed completely. We had to question many of the best practices, some of which exist for the sake of readability, in favor of approaches that consumed less memory and CPU or put less load on the rendering engine.
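As a hypothetical illustration of that kind of trade-off (not one of the project's actual cases), a readable chained pipeline allocates intermediate arrays that a single, plainer loop avoids:

```js
const items = [
  { visible: true, label: 'home' },
  { visible: false, label: 'admin' },
  { visible: true, label: 'settings' },
];

// Readable: each step is clear, but .filter() allocates an intermediate
// array and .map() traverses it again to build the final one.
const visibleLabels = items.filter((item) => item.visible).map((item) => item.label);

// Less elegant, but a single pass and a single allocation, which can matter
// on a memory- and CPU-constrained device.
const labels = [];
for (let i = 0; i < items.length; i++) {
  if (items[i].visible) labels.push(items[i].label);
}
```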

We started by optimizing the build process. The project used the webpack module bundler. There were known tricks for optimizing the built artifacts of some of the libraries in use, but the big step was that it gave a clear view of which dependencies the app was pulling in. That did not tell us directly which parts ran slower, but we saw that a lot of the included code was simply never used.
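The post does not name the exact tool, but webpack-bundle-analyzer is a common way to get this kind of dependency view; a minimal sketch of wiring it into an existing webpack config:

```js
// webpack.config.js (excerpt) — a minimal sketch; assumes the plugin is
// installed with: npm install --save-dev webpack-bundle-analyzer
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...the rest of the existing configuration...
  plugins: [
    // Generates an interactive treemap of every module in the bundle,
    // which makes unused or oversized dependencies easy to spot.
    new BundleAnalyzerPlugin({ analyzerMode: 'static', openAnalyzer: false }),
  ],
};
```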

This visibility was a tremendous help in removing superfluous code from the production artifacts. By this alone, we saw huge reductions in the app's initial loading times, which was one of the main issues on the better models. As a general lesson, looking for available low-effort, high-value tools was a good first step toward improving performance.

Apart from this, we made a few tweaks to improve minification. We also spent some time automating the process of producing debuggable builds and deploying them to the device. This helped a lot: we could focus on solving the performance problems instead of getting frustrated by manual steps every time the app was built. Removing distractions early on, even if not much time was saved, was a really good decision in hindsight.
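The post does not describe the exact pipeline, but even a small Node script chaining the debug build and a deploy step removes the manual friction; a sketch, where deploy-to-device.sh is a hypothetical placeholder for whatever copies the artifacts to the hardware:

```js
// scripts/deploy-debug.js — a sketch of one-command build-and-deploy.
const { execSync } = require('child_process');

// Produce a debuggable build with full source maps.
execSync('npx webpack --mode development --devtool source-map', { stdio: 'inherit' });

// Push the artifacts to the target device; the script name is hypothetical.
execSync('./scripts/deploy-to-device.sh dist/', { stdio: 'inherit' });
```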

After this step, the problem became trickier. There were no more obvious optimizations, so we had to start debugging the app at runtime and identifying which parts were slower.
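A first, low-tech pass for this kind of hunt (not necessarily what we used at every step) is simply to bracket suspect code paths with console timers and read the output in the remote debugging console:

```js
// The heavy loop stands in for real app work being investigated.
function expensiveRender() {
  let total = 0;
  for (let i = 0; i < 1e7; i++) total += Math.sqrt(i);
  return total;
}

console.time('expensive-render');
expensiveRender();
console.timeEnd('expensive-render'); // e.g. "expensive-render: 120 ms"
```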

At that point, every remaining library was actually used and depended upon. I remember reading that optimization ultimately boils down to doing less unnecessary work somewhere.

So once we identified the worst-performing parts, we decided to replace them with custom-made code for those cases. Many of the UI components and utilities the app relied on were built on stacks of abstractions, as most software is. Very often, building something generic enough means paying an abstraction cost.
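As a hypothetical illustration of that cost (not one of the project's actual replacements), compare a generic deep-equality helper with a comparison specialized to one known shape:

```js
// Generic: handles any shape, recurses, allocates key lists on every call —
// flexible, but it pays for that flexibility each time it runs.
function deepEqual(a, b) {
  if (a === b) return true;
  if (typeof a !== 'object' || typeof b !== 'object' || a === null || b === null) {
    return false;
  }
  const keys = Object.keys(a);
  if (keys.length !== Object.keys(b).length) return false;
  return keys.every((key) => deepEqual(a[key], b[key]));
}

// Specialized: the exact shape of a list item is known, so three strict
// comparisons do the same job with no recursion and no allocation.
function sameListItem(a, b) {
  return a.id === b.id && a.label === b.label && a.selected === b.selected;
}
```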

Replacing libraries this way turned out to be a fantastic approach for trimming down the build even further. Many small parts were trivial to rewrite, which resulted in a much simpler base and lower memory consumption. It became a systematic, iterative process: we measured several flows, analyzed flame charts or memory profiles, and detected which parts were dragging the rest down.
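For measuring whole flows, the User Timing API is one common approach (a sketch, not necessarily our exact setup); marks and measures recorded this way also show up in the Timings track of a Chrome DevTools performance recording, right next to the flame chart:

```js
// Bracket a user flow with marks, then turn them into a named measure.
// "checkout" is a hypothetical flow name used for illustration.
performance.mark('checkout-start');
// ...run the flow being profiled (navigation, rendering, etc.)...
performance.mark('checkout-end');
performance.measure('checkout-flow', 'checkout-start', 'checkout-end');

// The result can also be read back programmatically, e.g. to log it.
const [measure] = performance.getEntriesByName('checkout-flow');
console.log(`checkout-flow took ${measure.duration.toFixed(1)} ms`);
```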

After a few cases we gained some intuition for the next ones, but it was important to keep measuring in order to avoid wasting time on useless optimizations. The process was slow, but it gave the best final results.

After several rounds, we were surprised to realize we had removed many of what we had thought were core dependencies and replaced them with code that did exactly what was required.

In many cases it was an exercise in gaining simplicity at the cost of the future flexibility some libraries provide, which was not a problem here. We also ran into a couple of funny, counter-intuitive cases related to CSS animations.
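The post leaves those cases undescribed, but a classic counter-intuitive one is that animating a layout property such as left forces layout work on every frame, while a visually identical transform animation can run on the compositor; a sketch using the Web Animations API:

```js
// Assumes a positioned element with class "box" exists in the page.
const box = document.querySelector('.box');

// Option A: animating "left" triggers layout (and usually paint) on every
// frame — often janky on a weak device.
box.animate([{ left: '0px' }, { left: '200px' }], { duration: 300, fill: 'forwards' });

// Option B: animating "transform" looks the same but can be handled by the
// compositor, skipping layout entirely.
box.animate(
  [{ transform: 'translateX(0)' }, { transform: 'translateX(200px)' }],
  { duration: 300, fill: 'forwards' }
);
```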

All in all, this was an excellent experience that improved the performance of the app enormously, to the point where it was finally usable on all the device models. It was a great professional satisfaction to improve an app purely on the performance front, without adding a single feature.

In software engineering, there can be more joy in refactoring, improving performance, or fixing bugs than in actually adding new features.

The project sparked a lot of interest in performance in general for me, for example in using better algorithms for certain scenarios. It also gave me more motivation to understand the entire stack. Some of the best tools for investigating JavaScript performance are Chrome DevTools and Lighthouse, though different tools exist depending on the libraries in use.
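For instance, Lighthouse can be scripted from Node to track the performance score over time; a sketch, assuming recent (ESM-only) versions of both packages are installed with npm install lighthouse chrome-launcher:

```js
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Launch a headless Chrome for Lighthouse to drive.
const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com', {
  port: chrome.port,               // connect to the launched instance
  onlyCategories: ['performance'], // skip the non-performance audits
});
console.log('Performance score:', result.lhr.categories.performance.score);
await chrome.kill();
```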

As an annex, I would recommend watching this video, which also sparked my interest in reviewing the performance of my development environment. The talk focuses on the Linux operating system.
