008 Computing Jargon of the Week

Denormalize

Denormalization is a strategy used on a previously-normalized database to increase performance. In computing, denormalization is the process of trying to improve the read performance of a database, at the expense of losing some write performance, by adding redundant copies of data or by grouping data.

https://en.wikipedia.org/wiki/Denormalization

What does this mean?

In contrast to normalization, denormalizing a database means duplicating key columns across multiple tables to avoid costly joins.

A query executes much more quickly when all the data it needs lives in one table. A denormalized database therefore avoids joining multiple tables together to gather the required information, which can be computationally expensive.
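As a minimal sketch of that trade-off, here is a hypothetical customers-and-orders schema (the table and column names are assumptions for illustration). The normalized read needs a join; the denormalized table copies the customer name onto each order row so a single-table lookup returns the same information:

```python
import sqlite3

# Hypothetical normalized schema: customers and orders in separate tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     customer_id INTEGER REFERENCES customers(id),
                     total REAL);
INSERT INTO customers VALUES (1, 'Ada');
INSERT INTO orders VALUES (100, 1, 25.0);

-- Denormalized variant: the customer name is duplicated into each
-- order row, so reads no longer need a join.
CREATE TABLE orders_denorm (id INTEGER PRIMARY KEY,
                            customer_name TEXT,
                            total REAL);
INSERT INTO orders_denorm VALUES (100, 'Ada', 25.0);
""")

# Normalized read: requires a join across two tables.
joined = conn.execute("""
    SELECT o.id, c.name, o.total
    FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchone()

# Denormalized read: a single-table lookup, no join needed.
flat = conn.execute(
    "SELECT id, customer_name, total FROM orders_denorm"
).fetchone()

print(joined)  # (100, 'Ada', 25.0)
print(flat)    # (100, 'Ada', 25.0)
```

Both queries return the same row; the denormalized one simply pays for that convenience by storing the name redundantly.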

While denormalization introduces some data redundancy, it can make SQL queries much quicker. Faster queries mean faster feedback and a better user experience, keeping the user more engaged with the application.

While this improves read performance, it comes at the expense of write performance. Key data points must be replicated across numerous tables to achieve optimal read efficiency, which introduces a risk to data integrity. A developer therefore needs to ensure that every duplicated copy is kept consistent across all tables.
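To illustrate the write-side cost, here is a small sketch using the same hypothetical denormalized orders table (names and columns are assumptions, not a prescribed design). A rename that would touch one row in a normalized schema must now update every duplicated copy, ideally inside a single transaction so readers never observe a half-applied change:

```python
import sqlite3

# Hypothetical denormalized table where the customer name is repeated
# on every order row.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE orders_denorm
                (id INTEGER PRIMARY KEY, customer_name TEXT, total REAL)""")
conn.executemany("INSERT INTO orders_denorm VALUES (?, ?, ?)",
                 [(1, 'Ada', 10.0), (2, 'Ada', 20.0), (3, 'Grace', 5.0)])

# One logical change now means updating many physical rows; the
# transaction keeps all copies consistent or rolls them all back.
with conn:
    conn.execute(
        "UPDATE orders_denorm SET customer_name = ? WHERE customer_name = ?",
        ('Ada Lovelace', 'Ada'))

rows = conn.execute(
    "SELECT customer_name FROM orders_denorm ORDER BY id").fetchall()
print(rows)  # [('Ada Lovelace',), ('Ada Lovelace',), ('Grace',)]
```

The more widely a value is duplicated, the more rows each write has to touch, which is exactly the write-performance and consistency cost described above.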

Further reading

https://rubygarage.org/blog/database-denormalization-with-examples

https://www.techopedia.com/definition/29168/denormalization

If you enjoyed reading this post, consider reading some of my other definition posts:

Normalize

Propagate

Tautology
