
Benchmark: Template Rendering Engines


Table columns: Library, Memory, Create, Replace, Update, Order, Repaint, Append, Remove, Toggle, Clear, Score, Index

Values are operations per second; higher is better, except for the memory test.
Hover over the table header or the form controls to get more information.
Memory can only be measured when running in a Chrome browser.
The test "innerHTML" just uses the browser's native feature element.innerHTML = "..." with no library involved.
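The innerHTML baseline could look roughly like the following sketch. The item shape and function name are assumptions for illustration, not the benchmark's actual code; a plain object stands in for the DOM root so the sketch also runs outside a browser.

```javascript
// Hypothetical sketch of the "innerHTML" baseline: build one HTML string
// for all items and assign it with a single native call, no library involved.
// The item shape { title, content } is an assumption for illustration.
function renderInnerHTML(root, items) {
  let html = "";
  for (const item of items) {
    html += "<section><h2>" + item.title + "</h2><p>" + item.content + "</p></section>";
  }
  root.innerHTML = html; // one assignment replaces all children at once
}

// Minimal stand-in for a DOM element, so the sketch runs anywhere:
const root = { innerHTML: "" };
renderInnerHTML(root, [{ title: "Hello", content: "World" }]);
```

In a real page, `root` would be an actual element (e.g. from `document.getElementById`), and the browser parses the string into DOM nodes on assignment.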

Single Tests:

Test Modes: tests marked with * apply de-referencing to mimic real incoming data from a server.
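The de-referencing mentioned above can be sketched as a deep copy of the test data before each cycle, so the library always receives fresh object references, as if the payload had just arrived from a server. A JSON round-trip is one common way to do this; the data shape below is an assumption for illustration.

```javascript
// Hedged sketch of "de-referencing": the copied items are equal by value
// but are entirely new objects, so a library cannot take shortcuts by
// recognizing references it has already rendered.
const template = [{ id: 1, title: "Item 1" }, { id: 2, title: "Item 2" }];

function dereference(data) {
  return JSON.parse(JSON.stringify(data)); // new objects, same values
}

const copy = dereference(template);
// copy[0] !== template[0], but copy[0].title === "Item 1"
```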

Test Environment:

Every test runs in its own isolated/dedicated browser instance (iframe) and posts its results back through a message channel. The benchmark uses fixed randomness (srand) for every test, so every library has to solve exactly the same task. This is a real-world benchmark running in your browser, not a synthetic one. It applies de-referencing in the "recycle" and "keyed" tests to mimic realistic data coming from an external authority like a server. The data-driven test explicitly covers the internal data processing feature. You will find a lot of synthetic benchmarks out there which don't cover this specific real-world behavior. Keep this in mind.
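Fixed randomness means every run starts from the same seed and therefore produces the same sequence of values. The benchmark's actual srand implementation may differ; the sketch below uses a simple 32-bit linear congruential generator purely to illustrate the idea.

```javascript
// Hypothetical fixed-seed PRNG: two generators created with the same seed
// yield identical sequences, so every library solves an identical task.
function srand(seed) {
  let state = seed >>> 0;
  return function rand() {
    // 32-bit LCG step (Numerical Recipes constants)
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 4294967296; // map to [0, 1)
  };
}

const randA = srand(123);
const randB = srand(123);
// randA() and randB() now produce the same sequence of values
```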

By default the test measures only the time it takes to construct and transfer the whole render task to the browser. This is the specific part a library is able to optimize. When the option "Force reflow" is enabled, the benchmark also measures the time taken by the style and layout recalculation (by forcing a reflow at the end of each test cycle). Since this part cannot be optimized by a library, it adds a large amount of unrelated workload to the test and drastically reduces the potential of getting meaningful results.
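Forcing a reflow is typically done by reading a layout property such as `offsetHeight`, which makes the browser flush pending style and layout work synchronously. The helper below is a sketch under that assumption, not the benchmark's actual measurement code.

```javascript
// Sketch of timing one test cycle, optionally forcing a reflow at the end.
// Reading a layout property (offsetHeight) flushes pending layout work,
// so the measured time then includes the recalculation.
function measure(renderFn, root, forceReflow) {
  const start = performance.now();
  renderFn();
  if (forceReflow && root && root.offsetHeight !== undefined) {
    void root.offsetHeight; // layout read triggers a synchronous reflow
  }
  return performance.now() - start;
}
```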


Results:

The result values are based on full timings; no median was applied here. Using the median is widespread, but the truth is that median time-series results effectively cut out the garbage collector and lead to misleading results. The index is a stable statistical rank with a maximum possible value of 100, which requires a library to be the best in every test category (regardless of how much better it is). The score value is based on median factorization (applying the median to averaged results is fine); here a score of 100 represents the statistical midfield.
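The difference between the two aggregation strategies can be illustrated as follows. The exact formulas used by the benchmark are not shown here; this sketch only demonstrates why a median hides garbage-collection pauses that full timings keep visible.

```javascript
// Full timings keep every cycle, so a GC pause raises the average;
// the median discards outliers, so the same pause vanishes.
function average(times) {
  return times.reduce((sum, t) => sum + t, 0) / times.length;
}

function median(times) {
  const sorted = [...times].sort((a, b) => a - b);
  const mid = sorted.length >> 1;
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Hypothetical cycle times in ms; 90 ≈ a cycle hit by garbage collection:
const cycles = [10, 11, 10, 90, 10];
// average(cycles) → 26.2 (GC pause included)
// median(cycles)  → 10   (GC pause invisible)
```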