Cache Invalidation and Eviction Policies

Introduction

Cache invalidation and eviction policies are critical in system design, especially for applications requiring high performance and scalability. They ensure that the cache holds the most relevant data and that stale data is removed in a timely manner.

Cache Invalidation

Cache invalidation is the process of removing or updating outdated data in the cache to ensure data consistency between the cache and the main data store.
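As a minimal sketch of this idea (the class and key names here are illustrative, not from any particular library), a common pattern is to invalidate the cached entry whenever the underlying record is written, so the next read repopulates it from the source of truth:

```java
import java.util.HashMap;
import java.util.Map;

public class InvalidatingStore {
    private final Map<String, String> database = new HashMap<>(); // stand-in for the main data store
    private final Map<String, String> cache = new HashMap<>();

    public String read(String key) {
        // Serve from the cache, falling back to the database on a miss.
        return cache.computeIfAbsent(key, database::get);
    }

    public void write(String key, String value) {
        database.put(key, value);
        cache.remove(key); // invalidate: the cached copy is now stale
    }

    public static void main(String[] args) {
        InvalidatingStore store = new InvalidatingStore();
        store.write("user:1", "Alice");
        System.out.println(store.read("user:1")); // Alice (miss, loaded from database)
        store.write("user:1", "Bob");             // the write invalidates the cached entry
        System.out.println(store.read("user:1")); // Bob, not the stale Alice
    }
}
```

Without the `cache.remove(key)` in `write`, the second read would still return the stale "Alice".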

Eviction Policies

Eviction policies determine which data should be removed from the cache when it reaches its storage limit. Common policies include LRU (Least Recently Used), FIFO (First In First Out), and LFU (Least Frequently Used).

Example 1: Least Recently Used (LRU)

Concept

LRU evicts the least recently accessed items first. It is based on the principle that data not accessed recently is less likely to be accessed soon.


import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCache(int capacity) {
        // The third argument (accessOrder = true) makes the map order its
        // entries by most recent access rather than by insertion.
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after every put; returning true evicts the eldest
        // (least recently used) entry once capacity is exceeded.
        return size() > capacity;
    }
}

Explanation

This example builds an LRU cache on Java's LinkedHashMap in access-order mode. Overriding removeEldestEntry evicts the least recently used entry whenever the capacity is exceeded.

Console Output:

Output depends on cache operations
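The class above has no driver, so here is a hypothetical one (the keys and values are arbitrary) that repeats the LRUCache class and shows the eviction order:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCacheDemo {
    // Same LRU cache as above: a LinkedHashMap kept in access order.
    static class LRUCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        LRUCache(int capacity) {
            super(capacity, 0.75f, true); // true = access order
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > capacity;
        }
    }

    public static void main(String[] args) {
        LRUCache<String, Integer> cache = new LRUCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // "a" becomes the most recently used entry
        cache.put("c", 3); // capacity exceeded: "b", the least recently used, is evicted
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Note that reading "a" saved it from eviction; under FIFO (next example) the read would make no difference.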

Example 2: First In First Out (FIFO)

Concept

FIFO evicts the oldest entry first, similar to a queue. It assumes the oldest data is the least useful over time.


import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

public class FIFOCache<K, V> {
    private final int capacity;
    private final Queue<K> queue;
    private final Map<K, V> values;

    public FIFOCache(int capacity) {
        this.capacity = capacity;
        this.queue = new LinkedList<>();
        this.values = new HashMap<>();
    }

    public void put(K key, V value) {
        if (!values.containsKey(key)) {
            if (values.size() == capacity) {
                // Evict the oldest insertion, regardless of how
                // recently or how often it was read.
                values.remove(queue.poll());
            }
            queue.offer(key);
        }
        values.put(key, value);
    }

    public V get(K key) {
        return values.get(key);
    }
}

Explanation

This FIFO cache example uses a queue to manage entries. The oldest entry is removed when the cache size limit is reached.

Console Output:

Output depends on cache operations
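Again the example has no driver; this hypothetical one (arbitrary keys and values, FIFO cache restated with a backing map so it can hold values) shows that, unlike LRU, a recent read does not protect an entry:

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

public class FIFOCacheDemo {
    // A FIFO cache: a queue tracks insertion order, a map holds the values.
    static class FIFOCache<K, V> {
        private final int capacity;
        private final Queue<K> queue = new LinkedList<>();
        private final Map<K, V> values = new HashMap<>();

        FIFOCache(int capacity) { this.capacity = capacity; }

        void put(K key, V value) {
            if (!values.containsKey(key)) {
                if (values.size() == capacity) {
                    values.remove(queue.poll()); // evict the oldest insertion
                }
                queue.offer(key);
            }
            values.put(key, value);
        }

        V get(K key) { return values.get(key); }
    }

    public static void main(String[] args) {
        FIFOCache<String, Integer> cache = new FIFOCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // reads do not affect FIFO eviction order
        cache.put("c", 3); // "a" was inserted first, so "a" is evicted
        System.out.println("a -> " + cache.get("a")); // a -> null
        System.out.println("c -> " + cache.get("c")); // c -> 3
    }
}
```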

Example 3: Least Frequently Used (LFU)

Concept

LFU evicts the least frequently accessed data. It is based on the assumption that frequently accessed data will continue to be accessed often.


import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class LFUCache<K, V> {
    private final int capacity;
    private final Map<K, V> values;
    private final Map<K, Integer> counts;
    private final PriorityQueue<K> minHeap;

    public LFUCache(int capacity) {
        this.capacity = capacity;
        this.values = new HashMap<>();
        this.counts = new HashMap<>();
        this.minHeap = new PriorityQueue<>((a, b) -> counts.get(a) - counts.get(b));
    }

    public void put(K key, V value) {
        if (!values.containsKey(key) && values.size() >= capacity) {
            // Evict the key with the smallest access count.
            K leastUsed = minHeap.poll();
            values.remove(leastUsed);
            counts.remove(leastUsed);
        }
        values.put(key, value);
        touch(key);
    }

    public V get(K key) {
        if (!values.containsKey(key)) {
            return null;
        }
        touch(key);
        return values.get(key);
    }

    // A key's priority changes whenever its count does, so it must be removed
    // from the heap and re-inserted; otherwise the heap order goes stale.
    private void touch(K key) {
        minHeap.remove(key);
        counts.merge(key, 1, Integer::sum);
        minHeap.offer(key);
    }
}

Explanation

This LFU cache example uses a priority queue to track access frequency. The least frequently accessed entry is removed when the cache is full.

Console Output:

Output depends on cache operations
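To make the frequency-based eviction concrete, here is a hypothetical driver (arbitrary keys and values) that restates the LFU cache and shows the least-read key losing its slot:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class LFUCacheDemo {
    // An LFU cache: a min-heap ordered by access counts, with keys
    // re-inserted into the heap whenever their count changes.
    static class LFUCache<K, V> {
        private final int capacity;
        private final Map<K, V> values = new HashMap<>();
        private final Map<K, Integer> counts = new HashMap<>();
        private final PriorityQueue<K> minHeap =
                new PriorityQueue<>((a, b) -> counts.get(a) - counts.get(b));

        LFUCache(int capacity) { this.capacity = capacity; }

        void put(K key, V value) {
            if (!values.containsKey(key) && values.size() >= capacity) {
                K leastUsed = minHeap.poll(); // smallest access count
                values.remove(leastUsed);
                counts.remove(leastUsed);
            }
            values.put(key, value);
            touch(key);
        }

        V get(K key) {
            if (!values.containsKey(key)) return null;
            touch(key);
            return values.get(key);
        }

        private void touch(K key) {
            minHeap.remove(key); // re-position under the new count
            counts.merge(key, 1, Integer::sum);
            minHeap.offer(key);
        }
    }

    public static void main(String[] args) {
        LFUCache<String, Integer> cache = new LFUCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // counts: a = 2, b = 1
        cache.put("c", 3); // "b" has the lowest count, so it is evicted
        System.out.println("b -> " + cache.get("b")); // b -> null
        System.out.println("a -> " + cache.get("a")); // a -> 1
    }
}
```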

Example 4: Random Replacement

Concept

Random replacement evicts a randomly chosen entry when the cache is full. Eviction is O(1) and needs no bookkeeping, but because the policy ignores access patterns, its hit rate is generally worse than LRU or LFU on skewed workloads.


import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class RandomCache<K> {
    private final int capacity;
    private final List<K> cache;
    private final Random random;

    public RandomCache(int capacity) {
        this.capacity = capacity;
        this.cache = new ArrayList<>(capacity);
        this.random = new Random();
    }

    public void add(K key) {
        if (cache.size() == capacity) {
            // Evict an entry chosen uniformly at random.
            cache.remove(random.nextInt(capacity));
        }
        cache.add(key);
    }
}

Explanation

This example demonstrates a random replacement policy where a random entry is evicted when the cache is full.

Console Output:

Output depends on cache operations
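Because the evicted entry is random, a demo can only assert on the cache size, not on which key survives. This hypothetical driver (the `size()` accessor is added here for the demo; it is not part of the example above) restates the class and shows the size staying bounded:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class RandomCacheDemo {
    // Same random-replacement cache as above, plus a size() accessor for the demo.
    static class RandomCache<K> {
        private final int capacity;
        private final List<K> cache;
        private final Random random;

        RandomCache(int capacity) {
            this.capacity = capacity;
            this.cache = new ArrayList<>(capacity);
            this.random = new Random();
        }

        void add(K key) {
            if (cache.size() == capacity) {
                cache.remove(random.nextInt(capacity)); // evict a uniformly random entry
            }
            cache.add(key);
        }

        int size() { return cache.size(); }
    }

    public static void main(String[] args) {
        RandomCache<String> cache = new RandomCache<>(3);
        cache.add("a");
        cache.add("b");
        cache.add("c");
        cache.add("d"); // one of a/b/c is evicted at random to make room
        System.out.println("size = " + cache.size()); // size = 3
    }
}
```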

Example 5: Time-to-Live (TTL)

Concept

The TTL (time-to-live) eviction policy removes entries once a fixed time span has elapsed since they were written, ensuring that only fresh data is served from the cache.


import java.util.HashMap;
import java.util.Map;

public class TTLCache<K, V> {
    private final long ttl;
    private final Map<K, CacheEntry<V>> cache;

    public TTLCache(long ttl) {
        this.ttl = ttl; // time-to-live in milliseconds
        this.cache = new HashMap<>();
    }

    public void put(K key, V value) {
        // Stamp each entry with its absolute expiry time.
        long expiryTime = System.currentTimeMillis() + ttl;
        cache.put(key, new CacheEntry<>(value, expiryTime));
    }

    public V get(K key) {
        CacheEntry<V> entry = cache.get(key);
        if (entry != null && System.currentTimeMillis() < entry.expiryTime) {
            return entry.value;
        }
        // Expiration is lazy: a stale entry is dropped by the read that finds it.
        cache.remove(key);
        return null;
    }

    private static class CacheEntry<V> {
        final V value;
        final long expiryTime;

        CacheEntry(V value, long expiryTime) {
            this.value = value;
            this.expiryTime = expiryTime;
        }
    }
}

Explanation

This TTL cache example removes entries after a predefined time interval, ensuring that only fresh data is available in the cache.

Console Output:

Output depends on cache operations
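A short hypothetical driver (the 50 ms TTL and the key are arbitrary choices for the demo) restates the TTL cache and shows an entry expiring:

```java
import java.util.HashMap;
import java.util.Map;

public class TTLCacheDemo {
    // Same TTL cache as above: entries carry an absolute expiry timestamp
    // and are dropped lazily by the read that finds them expired.
    static class TTLCache<K, V> {
        private final long ttl;
        private final Map<K, CacheEntry<V>> cache = new HashMap<>();

        TTLCache(long ttl) { this.ttl = ttl; }

        void put(K key, V value) {
            cache.put(key, new CacheEntry<>(value, System.currentTimeMillis() + ttl));
        }

        V get(K key) {
            CacheEntry<V> entry = cache.get(key);
            if (entry != null && System.currentTimeMillis() < entry.expiryTime) {
                return entry.value;
            }
            cache.remove(key);
            return null;
        }

        private static class CacheEntry<V> {
            final V value;
            final long expiryTime;

            CacheEntry(V value, long expiryTime) {
                this.value = value;
                this.expiryTime = expiryTime;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TTLCache<String, Integer> cache = new TTLCache<>(50); // entries live for 50 ms
        cache.put("a", 1);
        System.out.println("fresh:   " + cache.get("a")); // fresh:   1
        Thread.sleep(100);                                // wait past the TTL
        System.out.println("expired: " + cache.get("a")); // expired: null
    }
}
```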

Copyright © WikiGalaxy 2025