Designing an O(1) LRU Cache with Doubly Linked List | Daily Problem Log | How I Study AI

Designing an O(1) LRU Cache with Doubly Linked List

Solved on 2026-03-08 · LeetCode · Medium
LRU Cache ↗
data-structure · doubly-linked-list · hash-map · hash-table · linked-list · design · lru-cache · c++ · data-structures · algorithm-design
Time: O(1)
Space: O(n)

Write-up

Problem Summary

We need to design an LRU (Least Recently Used) cache that supports two operations in O(1) average time. The cache has a fixed capacity and must evict the least recently used item when it becomes full. The operations are get(key) which returns the value for a key or -1 if it doesn’t exist, and put(key, value) which inserts or updates a key. Constraints are 1 <= capacity <= 3000, keys in [0, 10⁴], values in [0, 10⁵], and up to 2*10⁵ total operations.

My Approach

The best-known pattern for an LRU cache couples two structures that each excel at a separate part of the job.

  • A hash map for O(1) lookup by key, mapping key to a node in the list.
  • A doubly linked list to maintain recency order. The most recently used item lives near one end, and the least recently used near the other.

Why combine both? The map answers “do we have this key?” in constant time. The list answers “what’s the oldest key?” and can move nodes to the most-recently-used position in constant time. That movement is the soul of LRU: every get and put that touches a key must mark it as most recently used.

There are two clean ways to maintain order: push the most recent items to the head, or push them to the tail. I prefer a tail-as-MRU layout because eviction becomes “detach head->next,” which is easy to see. Two sentinel nodes (a dummy head and a dummy tail) eliminate edge cases like an empty list or single-element removals.

On get(key)

  • If the key is not present, return -1.
  • If present, move its node to the MRU position near the tail, then return the value.

On put(key, value)

  • If the key already exists, update the value and move it to the MRU position.
  • If it doesn’t exist and the cache is full, evict the LRU node at the head side and erase its key from the map. Then insert the new node at the tail side and record it in the map.

Each step is O(1) because the map lookup/erase/insert is O(1) on average, and the list operations are pointer manipulations with no traversal.

Why This Works

An LRU cache must always identify the least recently used key at eviction time and must update the recency of any accessed key. The doubly linked list maintains a strict total order from least recent to most recent, and the sentinel nodes give constant-time removal and insertion without special handling. The hash map guarantees we can locate a node by key in constant time and apply the list operations directly to it.

Correctness comes from maintaining these invariants throughout all operations.

  • Every key present in the cache appears exactly once in the list and exactly once in the map.
  • The list order reflects access recency, with the head side as least recent and tail side as most recent.
  • On get and on updating put, we remove the node from its current position and reinsert it at the MRU position.
  • On insertion when full, we evict the node at the LRU position and remove it from the map, keeping capacity exact.

With these invariants, the LRU property holds at all times, and return values are correct since the map directly references the stored node values.

Code Walkthrough

Below is a clean C++ implementation that follows the approach with an unordered_map and a custom doubly linked list with sentinel nodes. The operations are all O(1) on average.

C++
#include <unordered_map>
using namespace std;

class LRUCache {
    struct Node {
        int key;
        int value;
        Node* prev;
        Node* next;
        Node(int k, int v): key(k), value(v), prev(nullptr), next(nullptr) {}
    };

    int capacity_;
    unordered_map<int, Node*> index_; // key -> node
    Node* head_; // dummy head (LRU side)
    Node* tail_; // dummy tail (MRU side)

    // Remove a node from the list in O(1)
    void detach(Node* node) {
        Node* p = node->prev;
        Node* n = node->next;
        p->next = n;
        n->prev = p;
    }

    // Insert a node right before tail_ (MRU position) in O(1)
    void attachToMRU(Node* node) {
        node->prev = tail_->prev;
        node->next = tail_;
        tail_->prev->next = node;
        tail_->prev = node;
    }

    // Move an existing node to the MRU position in O(1)
    void moveToMRU(Node* node) {
        detach(node);
        attachToMRU(node);
    }

    // Evict the least recently used node from the head_ side in O(1)
    void evictLRU() {
        Node* lru = head_->next;
        detach(lru);
        index_.erase(lru->key);
        delete lru;
    }

public:
    LRUCache(int capacity): capacity_(capacity) {
        head_ = new Node(-1, -1);
        tail_ = new Node(-1, -1);
        head_->next = tail_;
        tail_->prev = head_;
    }

    int get(int key) {
        auto it = index_.find(key);
        if (it == index_.end()) return -1;
        Node* node = it->second;
        moveToMRU(node);
        return node->value;
    }

    void put(int key, int value) {
        auto it = index_.find(key);
        if (it != index_.end()) {
            Node* node = it->second;
            node->value = value;
            moveToMRU(node);
            return;
        }

        if ((int)index_.size() == capacity_) {
            evictLRU();
        }

        Node* node = new Node(key, value);
        attachToMRU(node);
        index_[key] = node;
    }

    ~LRUCache() {
        // Clean up all allocated nodes (including sentinels) to prevent leaks
        Node* cur = head_;
        while (cur) {
            Node* nxt = cur->next;
            delete cur;
            cur = nxt;
        }
    }
};

Key sections explained.

  • The Node struct stores key and value, and maintains prev and next pointers. Keeping the key on the node is useful for eviction when we need to erase the map entry.
  • The list uses two sentinel nodes, head_ and tail_. The least recently used element is head_->next. The most recently used is tail_->prev. This avoids conditional checks for null when inserting or removing at ends.
  • detach removes a node from the list in O(1) by rewiring neighbors. No traversal occurs.
  • attachToMRU inserts a node just before tail_. That spot is reserved for the most recent element.
  • moveToMRU composes those operations to reposition a node after access.
  • evictLRU removes the LRU node at head_->next, erases its key from the map, and frees memory.
  • get and put both call moveToMRU on successful access or update, which maintains recency.

About time and space.

  • Average O(1) time per get and put. The constant-time operations are hash lookups and pointer rewiring. The unordered_map guarantees average constant time.
  • Space is O(capacity), dominated by the list nodes and the hash map.

A note on the provided vector-index implementation. You might see alternate code that represents a doubly linked list using a vector of Node* and stores integer indices l and r for neighbors. While the idea is similar, using magic values like 1e9 as sentinels and mixing indices with pointers introduces fragility and can lead to undefined behavior. The pointer-based sentinel approach here is simpler to reason about and less error-prone in C++.

Alternative Approaches

All options trade implementation complexity for performance guarantees. When strict O(1) is required, the hash map plus doubly linked list wins.

  • std::list plus unordered_map

    • Keep a std::list<pair<int,int>> for order and values, and an unordered_map<int, list<pair<int,int>>::iterator> for fast lookup.
    • On access, splice the node to the end. On insert, push_back and record the iterator. On full, pop_front and erase. It’s a standard and concise approach in C++.
    • Pick this when you want standard-library containers to manage pointers safely.
  • Ordered dictionary in high-level languages

    • Python’s OrderedDict or Java’s LinkedHashMap maintain insertion order and allow moving keys to the end on access. Implementing LRU becomes very short.
    • Great for readability in those ecosystems. Time complexity remains O(1) average.
  • Single-linked list plus extra map to previous nodes

    • Maintain a map from key to the previous node to allow O(1) removals from the middle. It’s workable but fussier than a doubly linked list.
    • Consider it only if memory is extremely tight and you want to save one pointer per node, though the additional map entry offsets that gain.
  • Balanced BST or heap by timestamp

    • Assign a timestamp to each access and keep a min-heap or tree keyed by timestamp for eviction. This leads to O(log n) operations and complicates updates when timestamps change.
    • Only reasonable if O(log n) is acceptable or you must support extra queries like “k least recently used” efficiently.
  • Array or deque with linear search

    • Store pairs in an array and move items on access. Finding items requires O(n) time or a secondary map, and deletions shift elements.
    • Not suitable for the required performance guarantees here.

Pitfalls and Edge Cases

  • Forgetting to update recency on get

    • Access must move the node to MRU. Missing this step breaks eviction order.
  • Not storing key on the node

    • During eviction, you need the key to erase the map entry. If the node doesn’t carry its key, you’d need a separate reverse lookup from node to key.
  • Handling capacity 1

    • Test sequences like put(1,1), put(2,2), get(1), put(2,3), get(2) to ensure repeated evictions behave as expected.
  • Mismanaging pointers at the ends

    • Without sentinels, edge cases multiply. Sentinel head/tail simplify detach and insert. If you must go without them, double-check null handling.
  • Re-inserting existing keys as new nodes

    • On put for an existing key, just update and move. Creating a duplicate node for the same key leaves stale entries in the map and violates invariants.
  • Erasing the wrong key at eviction

    • Always erase using the key from the node you evict, not a guessed key or external variable.
  • Over-reliance on magic numbers or index sentinels

    • Using integers like 1e9 as a sentinel can lead to out-of-bounds access if not handled carefully. Pointer-based sentinels are safer in C++.
  • Assuming worst-case O(1) for unordered_map

    • The guarantee is average O(1). Adversarial inputs can degrade performance, but for this problem’s constraints it’s the right choice.

Further Thinking

  • Design a Most Frequently Used (MFU) cache

    • Replace recency with frequency. Consider a structure similar to LFU.
  • Implement an LFU (Least Frequently Used) cache with O(1) operations

    • Combine a key->node map with frequency buckets. Think of a doubly linked list of lists.
  • Cache with TTL expiration

    • Each key has a time-to-live. Maintain a min-heap by expiry for O(log n) expirations plus an LRU list for recency.
  • LRU with variable-sized items

    • Capacity is measured in bytes, not count. Evict repeatedly until total size fits. Track size per node.
  • Thread-safe LRU cache

    • Introduce locks or lock striping. Keep map and list operations atomic relative to each other.
  • Disk-backed LRU

    • On eviction, spill to disk. On miss, fetch from disk into memory. Think of OS page cache behaviors.
  • Multi-queue caching

    • Maintain multiple recency queues to improve scan resistance. Items graduate between queues.
  • LRU with admission control

    • Not every miss should be admitted. Use TinyLFU-like filters to decide whether to insert.

Conditions

  • 1 <= capacity <= 3000
  • 0 <= key <= 10⁴
  • 0 <= value <= 10⁵
  • At most 2 * 10⁵ calls will be made to get and put.

My Notes

I should take care of updating the tail when the size == 1

Solution

C++
#include <vector>
#include <unordered_map>
using namespace std;

class LRUCache { // Class definition for LRU Cache

public:
    struct Node { // Node to maintain the doubly linked list
        int l, r, key, value; // left/right indices, key, value
        Node(int l, int r, int key, int value): l(l), r(r), key(key), value(value) {}
    };

    vector<Node*> nodes; // Vector to store node pointers
    unordered_map<int,int> mp; // Map from key to node index

    int tail, head, cacheSize, currentSize; // Bookkeeping for the cache

    LRUCache(int capacity) {
        cacheSize = capacity; // Set the cache capacity
        currentSize = 0; // Initialize current size to 0
        Node* node = new Node(-1, (int)1e9, -1, -1); // Dummy node; 1e9 marks "no neighbor"
        tail = head = 0; // Initialize head and tail indices
        nodes.push_back(node); // Add the dummy node to the vector
    }

    void moveToLast(int id) { // Move the accessed node to the end (tail)
        if (id == tail) return; // Already at the tail, nothing to do
        Node* node = nodes[id]; // Get the node to be moved
        if (!(node->l == -1 && node->r == (int)1e9)) { // Node is already linked: detach first
            Node* right = nodes[node->r]; // Right neighbor of the node
            right->l = node->l; // Update the left pointer of the right neighbor
            if (node->l != -1) {
                // Left neighbor exists
                Node* left = nodes[node->l]; // Get left neighbor
                left->r = node->r; // Update the right pointer of the left neighbor
            }
        }
        node->l = tail; // Link this node after the current tail
        node->r = nodes[tail]->r; // Inherit the tail's right sentinel value
        nodes[tail]->r = id; // Current tail now points to this node
        tail = id; // Update tail to this node
    }

    void createNode(int key, int value) { // Create a new node and insert it at the tail
        Node* node = new Node(-1, (int)1e9, key, value); // Create new node
        int id = nodes.size(); // Index of the new node
        nodes.push_back(node); // Add the new node to the vector
        mp[key] = id; // Store key and its index in the map
        moveToLast(id); // The new node becomes the MRU
        currentSize++; // Increment current size
    }

    int get(int key) { // Retrieve the value for a key
        if (mp.count(key) == 0) return -1; // Key not found, return -1
        int ret = nodes[mp[key]]->value; // Get value associated with the key
        moveToLast(mp[key]); // Recently accessed: move this node to the tail

        return ret; // Return the value
    }

    void evict() { // Remove the least recently used node
        int target = nodes[head]->r; // Get the least recently used node index
        if (target == (int)1e9) return; // No node exists to evict

        // Adjust the links for eviction
        nodes[head]->r = nodes[target]->r; // Bypass the node being evicted
        if (nodes[target]->r < (int)nodes.size()) {
            nodes[nodes[target]->r]->l = head; // Update left pointer of the right neighbor
        }
        if (tail == target) tail = nodes[target]->l; // Update tail if we evicted it
        mp.erase(nodes[target]->key); // Remove from map (the Node itself is never freed)
        currentSize--; // Decrease the size
    }

    void put(int key, int value) { // Insert or update a key-value pair
        if (mp.count(key) == 0) { // Key does not exist
            // New key needs adding
            if (currentSize == cacheSize) { // Check if cache is full
                evict(); // Evict the least recently used node
            }
            createNode(key, value); // Create and insert the new node
        } else {
            // Key exists, update its value
            nodes[mp[key]]->value = value; // Update value
            moveToLast(mp[key]); // Refresh recency: move this node to the tail
        }
    }
};

/**
 * Your LRUCache object will be instantiated and called as such:
 * LRUCache* obj = new LRUCache(capacity);
 * int param_1 = obj->get(key);
 * obj->put(key,value);
 */

Key Points

  • ✓The solution implements a Least Recently Used (LRU) Cache using a combination of a hashmap and a doubly linked list.
  • ✓The hashmap provides O(1) access time to find nodes, while the doubly linked list maintains the order of usage for efficient eviction.
  • ✓The moveToLast function is key to keeping the nodes in the correct order as they are accessed or added.

Recommended Approaches

Doubly Linked List with Hashmap: O(1) (used)

This is the most commonly used approach for LRU caching that combines a doubly linked list to maintain the order of usage and a hashmap for O(1) access. This ensures that both get and put operations are efficient.

Array with Ordered Map: O(n) for insertion and deletion due to potential shifting of elements

An alternative approach could use an ordered map to maintain order and automatically evict the least recently used item. However, this may not provide O(1) complexity for all operations depending on the underlying structure of the ordered map.

Queue for Order Maintenance: O(n)

A queue could be used to maintain the order of the recently used keys but would require O(n) operations for find and delete, making it inefficient for this type of cache.

Segment Tree: O(log n)

This is a less traditional approach. By maintaining a segment tree, you could theoretically manage updates and retrieves but with higher complexity for basic LRU operations.

Array with Linear Search: O(n)

Using an array to store pairs and performing linear search to find if a key exists, while maintaining order. It’s intuitive but results in O(n) complexity for gets and puts due to the need for searching and maintaining order.

Key Idea for Similar Problems

Use a hash map for O(1) key lookup and a doubly linked list to maintain recency order. Each access moves the node to the most-recently-used position and evictions remove from the least-recently-used end. Sentinel nodes keep inserts and removals constant-time and error-free, meeting the O(1) average time target for both get and put.

Related Topics

  • data-structure
  • doubly-linked-list
  • hash-map
Published 3/8/2026