Cache

(Updated: 2023-02-18)

Concept

Temporary storage that holds frequently used data or computed values in advance.

It is used to reduce the load on the server or database and to improve performance.

Applying a cache to data that is frequently changed or deleted can instead degrade performance.

Basic Behavior

  1. When the client requests data, the server first checks the cache for it.
  2. If the data is in the cache (Cache Hit), the cached data is returned.
  3. If the data is not in the cache (Cache Miss), the actual data is fetched, returned, and stored in the cache.
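The basic behavior above can be sketched in a few lines; the dict-backed `database` is only a stand-in for a real data store:

```python
# Minimal sketch of the basic cache flow: check the cache first,
# fall back to the "database" on a miss, then fill the cache.
database = {"user:1": "Alice", "user:2": "Bob"}
cache = {}

def get(key):
    if key in cache:           # Cache Hit: serve from the cache
        return cache[key]
    value = database[key]      # Cache Miss: read the actual data
    cache[key] = value         # store it for the next request
    return value

print(get("user:1"))  # miss: reads the database, fills the cache
print(get("user:1"))  # hit: served from the cache
```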

Applicable to

The same output (result) is guaranteed for a given input (argument), regardless of the number of calls.

The data is read repeatedly and changes infrequently (e.g., thumbnails).

Database lookups take a long time.

The computation is complex and takes a long time to process.

The cache hit rate is high.

  • Cache Hit Ratio = cache hits / (cache hits + cache misses) × 100
    = the number of cache hits per 100 requests
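A quick check of the formula, with illustrative numbers:

```python
# Hit-ratio arithmetic: 80 hits and 20 misses out of 100 requests
# give an 80% hit rate.
def cache_hit_ratio(hits, misses):
    return hits / (hits + misses) * 100

print(cache_hit_ratio(80, 20))  # -> 80.0
```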

Kinds

Local Cache

Stored inside the server.

  • It is fast because it operates inside the server.
  • It uses the server's own resources (memory, disk).

Data cannot be shared between servers.

  • With distributed servers, data integrity between servers can break.

Examples

  • Ehcache, Caffeine

Global Cache

Uses a separate cache server.

Data can be shared between servers.

  • Because every access goes over the network, it is slower than a Local Cache.

Examples

  • Redis, Memcached

CDN (Content Delivery Network), Web Caching

A network of physically distributed proxy servers that serves web content cached on the proxy server nearest to the user's location, speeding up web page loads.

Traffic is also distributed across the servers.

Considerations

In general, since the cache is stored in memory (RAM), storing data indiscriminately can exhaust capacity and bring the system down.

If the cache server fails, traffic floods to the database, which can go down under the overload. Be prepared for the database to hold out while the cache server recovers from the failure. (Cache servers are also deployed hierarchically for this reason.)

You should decide on an Expire Time / Time-To-Live (TTL) policy and an eviction algorithm that determine how long cached entries are kept.

  • If the expiration period is too short, a Cache Stampede can occur in high-traffic environments.
  • Cache Stampede: when an entry expires under heavy load, many requests momentarily perform duplicate database reads and cache writes for the same key.
  • If the expiration period is too long, memory can run out or data integrity can break.
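One common mitigation for a Cache Stampede, sketched here, is adding random jitter to each entry's TTL so entries do not all expire at the same instant; the specific numbers are illustrative assumptions:

```python
import random
import time

TTL_SECONDS = 300  # illustrative base TTL

def expiry_with_jitter(base_ttl=TTL_SECONDS, jitter=30):
    # Spreading expirations over a random window keeps many entries
    # from expiring at the same moment under heavy load.
    return time.time() + base_ttl + random.uniform(0, jitter)
```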

Do not store important or sensitive information in the cache.

Local Cache vs. Global Cache

  • For parts of the business that are unaffected when data consistency breaks, you can choose a Local Cache; for parts where data consistency matters, a Global Cache.
  • A cache that uses JVM memory may not be suitable for cloud environments (Docker, AWS EC2, etc.).
  • You can use a Local Cache as the first-level cache and a Global Cache as the second-level cache.
  • Since a Local Cache consumes memory and a Global Cache generates a lot of network traffic, combining the two can increase efficiency.
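The first-level/second-level combination above can be sketched as follows; the plain dict standing in for the shared Global Cache (e.g. Redis) is an illustrative assumption:

```python
# Two-level cache sketch: a local in-process dict as L1 and a shared
# store (here just another dict) as L2.
class TwoLevelCache:
    def __init__(self, shared):
        self.local = {}       # L1: fast, per-server
        self.shared = shared  # L2: slower, but shared between servers

    def get(self, key, load):
        if key in self.local:             # L1 hit
            return self.local[key]
        if key in self.shared:            # L2 hit
            value = self.shared[key]
        else:
            value = load(key)             # fall through to the data source
            self.shared[key] = value
        self.local[key] = value           # promote into L1 for next time
        return value
```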

Read Cache Strategy

It is better to perform Cache Warming in advance.

  • Artificially pre-populating the cache before the first visitors arrive.
  • It ensures that requests result in Cache Hits.
  • Otherwise, a surge of first-time visitors (traffic) can produce a flood of Cache Misses and put a heavy load on the database (Thundering Herd).
  • Prevent a Thundering Herd by setting TTLs appropriately.
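The warming step can be sketched as a simple preload loop; the key names and loader below are illustrative:

```python
# Cache Warming sketch: preload the hottest keys before traffic
# arrives so the first visitors get Cache Hits.
def warm_cache(cache, hot_keys, load):
    for key in hot_keys:
        cache[key] = load(key)

db = {"home": "<html>...</html>", "top10": [1, 2, 3]}
cache = {}
warm_cache(cache, ["home", "top10"], load=lambda k: db[k])
# After warming, requests for "home" and "top10" are Cache Hits.
```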

It is suitable for workloads with many repeated read operations.

  • Services that run the same query repeatedly

Cache Aside (Look Aside) Pattern

The most common caching strategy.

Processing order

  1. When a request is received, the server checks the cache first.
  2. If the data is not in the cache, it looks it up in the database.
  3. The data retrieved from the database is stored in the cache and returned.

The cache and the database are used independently.

  • Only data that actually needs caching can be stored.
  • The service can be configured to survive a cache failure.
  • Keeping the cache and the database consistent can be a problem.
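A minimal sketch of the Cache Aside flow above; invalidating on write (rather than updating the cache) is one common way to narrow the consistency window, not something prescribed by the text:

```python
# Cache Aside sketch: the application talks to the cache and the
# database separately. The dicts stand in for real stores.
class CacheAside:
    def __init__(self):
        self.db = {}      # stands in for the real database
        self.cache = {}   # stands in for Redis, Caffeine, etc.

    def read(self, key):
        if key in self.cache:          # 1. check the cache first
            return self.cache[key]
        value = self.db.get(key)       # 2. on a miss, query the database
        if value is not None:
            self.cache[key] = value    # 3. cache the result, then return it
        return value

    def write(self, key, value):
        self.db[key] = value
        # Invalidate rather than update, shrinking the window in which
        # the cache and database disagree.
        self.cache.pop(key, None)
```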

Read Through Pattern

Delegates data synchronization to a library or cache provider.

Data is always read through the cache.

  • Minimizes direct database access.
  • If the cache has a problem, it can escalate into a full service outage.
  • The availability of the cache service should be increased using replication, clustering, etc.

Processing order

  1. When a request is received, the server asks the cache for the data.
  2. If the data is not there, the cache itself queries the database, stores the result, and returns it.
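The difference from Cache Aside is who does the loading: here the cache component itself queries the data source. A minimal sketch, with the loader function standing in for a real database query:

```python
# Read Through sketch: the application only talks to the cache; the
# cache is what queries the database on a miss.
class ReadThroughCache:
    def __init__(self, loader):
        self.store = {}
        self.loader = loader        # e.g. a database query function

    def get(self, key):
        if key not in self.store:
            # The cache itself loads and stores the value (step 2 above).
            self.store[key] = self.loader(key)
        return self.store[key]

db = {"price:book": 12}
cache = ReadThroughCache(loader=lambda k: db[k])
print(cache.get("price:book"))  # first call loads from db; later calls hit the cache
```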

Write Cache Strategy

Unnecessary or rarely used data can waste resources, so it must be cleaned up using TTLs.

When combined with the Read Through pattern on the read side, the latest data is always available in the cache.

Write Back (Write Behind) Pattern

Rather than being written immediately, data accumulates in the cache store and is persisted to the database by a batch scheduler.

It is suitable for write-heavy, read-light workloads.

Processing order

  1. All data is written to the cache.
  2. Data in the cache is persisted to the database at regular intervals.

The cache also acts as a queue, reducing the write load on the database.

If the cache fails, data can be lost (the risk of data loss is high).

  • Replication and clustering should be used to increase the availability of the cache service.
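The batch-flush behavior can be sketched as follows; `flush()` stands in for the periodic scheduler job, and any writes still queued when the cache fails are the data that would be lost:

```python
# Write Back sketch: writes land in the cache first and are flushed
# to the database in batches.
class WriteBackCache:
    def __init__(self):
        self.cache = {}
        self.dirty = set()   # keys written since the last flush
        self.db = {}

    def write(self, key, value):
        self.cache[key] = value     # 1. all writes go to the cache
        self.dirty.add(key)         # queued for the next batch

    def flush(self):
        # 2. periodically persist the queued writes in one batch
        for key in self.dirty:
            self.db[key] = self.cache[key]
        self.dirty.clear()
```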

Write Through Pattern

Delegates data synchronization to a library or cache provider.

It is suitable for workloads with many write and read operations.

Processing order

  1. All data is written to the cache.
  2. The cache immediately writes the data through to the database.

Every write operation happens twice.

  • Writing is slower than with Write Back.
  • It is better suited to services with relatively few write operations.

Data consistency is reliably maintained.
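A minimal sketch of the Write Through flow, with dicts standing in for the real cache and database:

```python
# Write Through sketch: every write goes to the cache and is
# immediately persisted to the database, so reads always see
# consistent data at the cost of two writes per operation.
class WriteThroughCache:
    def __init__(self):
        self.cache = {}
        self.db = {}

    def write(self, key, value):
        self.cache[key] = value   # write 1: the cache
        self.db[key] = value      # write 2: the database, synchronously

    def read(self, key):
        return self.cache[key]    # the cache always has the latest value
```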

Comparison

Cache Comparison
Ehcache

  • Type: Local
  • Base: Off-heap memory
  • Thread: not thread-safe
  • Shareable between servers: O
  • JSR-107 support: O
  • Eviction algorithms: LRU (Least Recently Used), LFU (Least Frequently Used), FIFO (First In, First Out)
  • Advantages: rich feature set
  • Weaknesses: performance is lower than Caffeine's; setup is cumbersome

Caffeine

  • Type: Local
  • Base: In-memory
  • Thread: thread-safe
  • Shareable between servers: X
  • JSR-107 support: O
  • Eviction algorithm: Window TinyLFU eviction policy
  • Advantages: best storage and eviction performance among Local Caches
  • Weaknesses: provides only simple cache functionality

Redis

  • Type: Global
  • Base: In-memory
  • Thread: single-threaded
  • Shareable between servers: O
  • JSR-107 support: via the Redisson Redis client
  • Eviction algorithms: TTL, LRU, LFU, RANDOM
  • Advantages: data persistence; well suited to chat, real-time streaming, SNS feeds, and server-to-server communication
  • Weaknesses: memory fragmentation; single-threaded; many basic features need reinforcement

Memcached

  • Type: Global
  • Base: In-memory
  • Thread: multi-threaded
  • Shareable between servers: O
  • JSR-107 support: ?
  • Eviction algorithms: TTL, LRU
  • Advantages: simple and fast
  • Weaknesses: data cannot be stored permanently; only String values can be stored
  • Local Caches (Ehcache, Caffeine) live in JVM memory, so in a cloud environment the cache is wiped on every deployment.
  • Memcached is multi-threaded and fast. Only String values up to 1 MB can be stored, and the expiration time is limited to a maximum of 30 days.
  • Redis supports more features than Memcached and, being popular, has abundant reference material. Various data structures and eviction algorithms are available. There is little difference in read/write speed, but lookups are faster on Redis.


민갤

Back-End Developer
