
Optimizing IndexedDB: High-Performance Bulk Writes of Small Records in PWAs

Published by The adllm Team. Tags: IndexedDB, PWA, Performance, JavaScript, Web Storage, Bulk Operations, Web Development

Progressive Web Apps (PWAs) increasingly rely on client-side storage for rich offline experiences and improved performance. IndexedDB, a powerful low-level browser API, is the standard choice for storing significant amounts of structured data. However, when it comes to persisting a large number of small records – a common scenario in caching, data synchronization, or tracking user activity – developers often encounter performance bottlenecks. Naive approaches to bulk writing can lead to slow operations, UI jank, and a degraded user experience.

This article dives deep into proven strategies for optimizing IndexedDB transaction performance, specifically targeting the challenge of bulk writing numerous small records. We’ll explore core concepts, best practices, and practical code examples to help you build highly responsive and efficient PWAs.

Understanding IndexedDB Transactions and Overhead

All data operations in IndexedDB (reads, writes, deletes) occur within the context of a transaction. Transactions ensure data atomicity: if one operation fails, the entire transaction is rolled back, maintaining data integrity. Transactions can be readonly or readwrite, with readwrite transactions locking the involved object stores to prevent concurrent modifications.

While essential, each transaction incurs an inherent overhead. This overhead, though small for a single transaction, becomes a significant performance drag when multiplied across hundreds or thousands of individual operations. For bulk writes, creating a new transaction for each small record is the primary anti-pattern to avoid.
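
All examples in this article assume an open connection db to a database containing an object store named myStore. A minimal sketch of obtaining such a connection might look like the following (the database name, version, and keyPath are illustrative assumptions):

// Open (or create) the database used by the examples below.
// Database name, version, and key path are illustrative placeholders.
function openDatabase() {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('MyDatabase', 1);
    request.onupgradeneeded = () => {
      // Runs on first open or when the version number increases.
      request.result.createObjectStore('myStore', { keyPath: 'id' });
    };
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}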

Core Strategy: Batching Operations in a Single Transaction

The most impactful optimization for bulk writes is to batch as many write operations (put() or add()) as possible within a single readwrite transaction. This drastically reduces cumulative transaction overhead.

Consider the task of writing 1,000 small records.

Inefficient Approach: One Transaction Per Record

This example demonstrates the slow method of using a separate transaction for each record. Do not use this approach for bulk writes.

// AVOID THIS PATTERN FOR BULK WRITES
async function addRecordsIndividually(db, records) {
  const startTime = performance.now();
  for (const record of records) {
    const tx = db.transaction('myStore', 'readwrite');
    const store = tx.objectStore('myStore');
    store.put(record); // Each put() is in its own transaction
    await new Promise((resolve, reject) => {
      tx.oncomplete = () => resolve();
      tx.onerror = () => reject(tx.error);
    });
  }
  const endTime = performance.now();
  console.log(
    `Individually: ${records.length} records in ${endTime - startTime}ms`
  );
}

Optimized Approach: Single Transaction for All Records

Here, all put() operations are grouped into one transaction.

// PREFERRED PATTERN FOR BULK WRITES
async function addRecordsInBatch(db, records) {
  const startTime = performance.now();
  // Create a single transaction for all write operations.
  const tx = db.transaction('myStore', 'readwrite');
  const store = tx.objectStore('myStore');

  for (const record of records) {
    store.put(record); // Add record to the current transaction
  }

  return new Promise((resolve, reject) => {
    tx.oncomplete = () => {
      const endTime = performance.now();
      console.log(
        `Batched: ${records.length} records in ${endTime - startTime}ms`
      );
      resolve();
    };
    tx.onerror = () => {
      console.error('Batch write transaction error:', tx.error);
      reject(tx.error);
    };
  });
}

The performance difference between these two approaches is typically an order of magnitude or more. The single transaction minimizes setup/teardown costs and allows the browser’s IndexedDB engine to optimize the overall write process.
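
To see the difference on your own data, you can time both functions against identical batches of generated test records; a rough harness might look like this (the record shape and count are arbitrary):

// Rough comparison harness; both functions log their own timings.
async function compareApproaches(db, count = 1000) {
  const makeRecords = (prefix) =>
    Array.from({ length: count }, (_, i) => ({
      id: `${prefix}-${i}`,
      value: Math.random(),
    }));

  await addRecordsIndividually(db, makeRecords('individual'));
  await addRecordsInBatch(db, makeRecords('batched'));
}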

Leveraging Transaction Durability Modes (Chromium Browsers)

Chromium-based browsers (like Chrome and Edge) offer a durability option when creating transactions. This hint influences how the browser handles flushing data to disk and can impact write performance. For a detailed explanation and benchmarks, see the Chrome Developers blog post on transaction durability. The possible values are:

  • default: The browser’s default behavior, balancing performance and durability.
  • strict: Higher durability, potentially at the cost of performance. Ensures data is flushed before oncomplete is fired.
  • relaxed: Lower durability guarantees in exchange for potentially higher performance. Data might be flushed with a delay.

For bulk writes of non-critical data (e.g., caches that can be rebuilt), or when maximum throughput is essential and a small risk of data loss on power failure is acceptable, using relaxed durability can provide a noticeable speed boost.

To use this, specify the durability in the transaction options:

async function addRecordsWithRelaxedDurability(db, records) {
  const startTime = performance.now();
  // Note: Durability hints are primarily for Chromium-based browsers.
  const tx = db.transaction('myStore', 'readwrite', {
    durability: 'relaxed',
  });
  const store = tx.objectStore('myStore');

  for (const record of records) {
    store.put(record);
  }

  return new Promise((resolve, reject) => {
    tx.oncomplete = () => {
      const endTime = performance.now();
      console.log(
        `Relaxed Durability: ${records.length} records in ` +
        `${endTime - startTime}ms`
      );
      resolve();
    };
    tx.onerror = (event) => {
      console.error('Relaxed durability transaction error:', event.target.error);
      reject(event.target.error);
    };
  });
}

Always test the impact, as results can vary based on browser version and system conditions. Firefox and Safari do not currently support the durability hint; they will ignore it.

Managing Callbacks Efficiently

Each IndexedDB operation (put(), add(), get(), etc.) returns an IDBRequest object that can have its own onsuccess and onerror handlers. For bulk operations involving thousands of put() calls within a single transaction, attaching individual handlers to each request adds overhead.

Instead, rely on the transaction’s oncomplete and onerror events. The oncomplete event fires only after all requests in the transaction have successfully completed. The onerror event on the transaction signals that something went wrong with one of the requests or the transaction itself, causing it to abort.

async function addRecordsWithTransactionEvents(db, records) {
  const tx = db.transaction('myStore', 'readwrite');
  const store = tx.objectStore('myStore');

  records.forEach(record => {
    const request = store.put(record);
    // It's generally not necessary to attach individual onsuccess/onerror
    // handlers here for bulk writes if you handle transaction events.
    // request.onerror = (event) => { /* handle individual error if needed */ };
  });

  return new Promise((resolve, reject) => {
    tx.oncomplete = () => {
      console.log('Bulk write transaction completed successfully.');
      resolve();
    };

    tx.onerror = (event) => {
      console.error('Bulk write transaction failed:', event.target.error);
      reject(event.target.error);
    };

    // Optional: tx.onabort can also be useful for debugging
    tx.onabort = (event) => {
      console.warn('Bulk write transaction aborted:', event.target.error);
      reject(event.target.error); // Or a custom error
    };
  });
}

This approach simplifies code and reduces the number of JavaScript callbacks the browser needs to manage.

Chunking Extremely Large Datasets

While a single transaction is ideal for many records, attempting to write an extremely large number (e.g., hundreds of thousands or millions) in one go can lead to issues:

  • Memory Pressure: The browser might consume excessive memory holding all operations and data.
  • Transaction Timeout: Browsers may impose limits on how long a transaction can run.
  • Browser Instability: Very large transactions can sometimes destabilize the browser.

For such massive datasets, it’s prudent to break the data into manageable chunks (e.g., 1,000 to 10,000 records per chunk) and process each chunk in its own transaction.

async function addRecordsInChunks(db, allRecords, chunkSize = 5000) {
  console.log(`Starting chunked insert of ${allRecords.length} records.`);
  const totalChunks = Math.ceil(allRecords.length / chunkSize);

  for (let i = 0; i < allRecords.length; i += chunkSize) {
    const chunk = allRecords.slice(i, i + chunkSize);
    const currentChunkNumber = Math.floor(i / chunkSize) + 1;
    
    console.log(`Processing chunk ${currentChunkNumber}/${totalChunks}`);
    
    const tx = db.transaction('myStore', 'readwrite', {
      // Consider 'relaxed' durability for each chunk if applicable
      // durability: 'relaxed', 
    });
    const store = tx.objectStore('myStore');

    chunk.forEach(record => {
      store.put(record);
    });

    await new Promise((resolve, reject) => {
      tx.oncomplete = resolve;
      tx.onerror = () => reject(tx.error);
    });
    console.log(`Chunk ${currentChunkNumber} completed.`);
  }
  console.log('All chunks processed successfully.');
}

Finding the optimal chunkSize might require some experimentation based on record size and target devices.
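
One simple way to experiment is to time addRecordsInChunks with a few candidate sizes against a representative sample of your data; a sketch of such a harness (the candidate sizes are arbitrary) follows:

// Time a few candidate chunk sizes against the same sample data.
async function findChunkSize(db, sampleRecords) {
  for (const size of [1000, 2500, 5000, 10000]) {
    // Clear the store so each run writes into the same initial state.
    await new Promise((resolve, reject) => {
      const tx = db.transaction('myStore', 'readwrite');
      tx.objectStore('myStore').clear();
      tx.oncomplete = resolve;
      tx.onerror = () => reject(tx.error);
    });

    const start = performance.now();
    await addRecordsInChunks(db, sampleRecords, size);
    console.log(`chunkSize=${size}: ${performance.now() - start}ms`);
  }
}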

Offloading Data Preparation to Web Workers

While IndexedDB operations are asynchronous and don’t directly block the main thread for their I/O, intensive data preparation before writing can block it. If you need to perform complex transformations or generate many small records from a larger dataset, that computation can freeze the UI.

Web Workers allow you to run JavaScript in a background thread, preventing UI jank. You can prepare your records in a Web Worker and then send the prepared data back to the main thread for the actual IndexedDB write operations.

Main Thread Code:

// main.js
const myWorker = new Worker('worker.js');

myWorker.onmessage = async (event) => {
  const { preparedRecords, task } = event.data;
  if (task === 'RECORDS_PREPARED') {
    console.log('Received prepared records from worker. Writing to IDB...');
    // Assume 'db' is an open IndexedDB connection
    await addRecordsInBatch(db, preparedRecords);
    console.log('Finished writing records from worker to IDB.');
  }
};

// Example: Simulating raw data that needs processing
const rawDataToProcess = { count: 10000, baseValue: 'item-' }; 
myWorker.postMessage({ command: 'PREPARE_RECORDS', data: rawDataToProcess });

Worker Code (worker.js):

// worker.js
self.onmessage = (event) => {
  const { command, data } = event.data;

  if (command === 'PREPARE_RECORDS') {
    console.log('Worker: Preparing records...');
    const preparedRecords = [];
    for (let i = 0; i < data.count; i++) {
      preparedRecords.push({ 
        id: `${data.baseValue}${i}`, 
        timestamp: Date.now(),
        payload: Math.random() 
      });
    }
    // Send prepared records back to the main thread
    self.postMessage({ task: 'RECORDS_PREPARED', preparedRecords });
    console.log('Worker: Finished preparing records.');
  }
};

Note that direct IndexedDB access from Web Workers is also possible and can be very effective, allowing the worker to handle both preparation and writing. This further offloads the main thread but requires careful management of the DB connection and transactions within the worker.
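
As a rough illustration, a worker that both prepares and writes the records itself could look like the sketch below (the message protocol and record shape are assumptions; the database and store names mirror the earlier examples):

// worker-idb.js — hypothetical worker that owns preparation and writing.
let dbPromise = null;

function getDb() {
  if (!dbPromise) {
    dbPromise = new Promise((resolve, reject) => {
      const request = indexedDB.open('MyDatabase', 1);
      request.onupgradeneeded = () =>
        request.result.createObjectStore('myStore', { keyPath: 'id' });
      request.onsuccess = () => resolve(request.result);
      request.onerror = () => reject(request.error);
    });
  }
  return dbPromise;
}

self.onmessage = async (event) => {
  if (event.data.command !== 'PREPARE_AND_WRITE') return;

  // Prepare the records entirely off the main thread.
  const records = Array.from({ length: event.data.count }, (_, i) => ({
    id: `item-${i}`,
    timestamp: Date.now(),
    payload: Math.random(),
  }));

  // Write them in a single transaction, also off the main thread.
  const db = await getDb();
  const tx = db.transaction('myStore', 'readwrite');
  const store = tx.objectStore('myStore');
  records.forEach((record) => store.put(record));

  tx.oncomplete = () => self.postMessage({ task: 'WRITE_COMPLETE' });
  tx.onerror = () => self.postMessage({ task: 'WRITE_FAILED' });
};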

Efficient Data Structures and Avoiding Large Clones

IndexedDB stores JavaScript objects using the structured clone algorithm. While this is flexible, cloning very large or complex objects can be computationally expensive and block the main thread. When dealing with “small records,” ensure they are indeed stored as individual, reasonably sized objects. Avoid the temptation to aggregate many small conceptual records into a single, massive JavaScript object or array that you then put() into IndexedDB as one entry. This can negate the benefits of small record handling by introducing a large cloning cost.

It’s better to perform multiple put() operations for individual small records (within a single transaction) than to put() one enormous composite object.
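
The following sketch illustrates the contrast; both functions assume the myStore setup from earlier and an array of small objects:

// AVOID: a single composite value. The whole array is structured-cloned
// as one large unit when put() is called.
function storeAsOneBlob(db, items) {
  const tx = db.transaction('myStore', 'readwrite');
  tx.objectStore('myStore').put({ id: 'all-items', items });
}

// PREFER: many small records in one transaction; each clone stays cheap.
function storeAsIndividualRecords(db, items) {
  const tx = db.transaction('myStore', 'readwrite');
  const store = tx.objectStore('myStore');
  items.forEach((item) => store.put(item));
}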

Considering Helper Libraries (e.g., Dexie.js)

Libraries like Dexie.js provide a more ergonomic, Promise-based wrapper around IndexedDB and often include built-in optimizations for common operations. For bulk writes, Dexie.js offers bulkPut() and bulkAdd() methods that are highly optimized internally. Another popular lightweight, promise-based wrapper is idb by Jake Archibald.

Here’s how you might use Dexie.js for a bulk write:

// Assuming 'db' is a Dexie instance:
// const db = new Dexie('MyDatabase');
// db.version(1).stores({ myStore: '++id,someIndex' });

async function addRecordsWithDexie(db, records) {
  const startTime = performance.now();
  try {
    // Dexie's bulkPut handles batching and optimization internally.
    await db.table('myStore').bulkPut(records);
    const endTime = performance.now();
    console.log(
      `Dexie bulkPut: ${records.length} records in ${endTime - startTime}ms`
    );
  } catch (error) {
    console.error('Dexie bulkPut failed:', error);
  }
}

Using such a library can simplify your code and reduce the chances of manual implementation errors, while often providing excellent performance.
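
For comparison, an equivalent bulk write using the idb wrapper might look like the sketch below (the database setup mirrors the earlier examples and is an assumption):

import { openDB } from 'idb';

// Open (or create) the database; names and key path are illustrative.
const dbPromise = openDB('MyDatabase', 1, {
  upgrade(db) {
    db.createObjectStore('myStore', { keyPath: 'id' });
  },
});

async function addRecordsWithIdb(records) {
  const db = await dbPromise;
  const tx = db.transaction('myStore', 'readwrite');
  // Queue every put, then wait for the transaction to complete.
  await Promise.all([
    ...records.map((record) => tx.store.put(record)),
    tx.done,
  ]);
}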

Common Pitfalls to Avoid

  • One Transaction Per Record: The cardinal sin of IndexedDB bulk writes.
  • Ignoring Transaction oncomplete or onerror: Relying only on individual request success can be misleading if the overall transaction fails or aborts.
  • Synchronous Main Thread Blocking: Performing lengthy data preparation directly on the main thread before writing.
  • Storing Enormous Single Objects: Leads to high structured cloning costs.
  • Forgetting Error Handling: Unhandled errors in requests or transactions lead to silent failures and data inconsistencies.
  • Not Testing on Diverse Devices: Performance characteristics can vary significantly between desktop and mobile, or high-end vs. low-end devices.

Debugging and Profiling Performance

  • Browser Developer Tools:
    • Application Tab: Inspect IndexedDB databases, object stores, indexes, and data. (e.g., Chrome DevTools Application Panel, Firefox Storage Inspector).
    • Performance Profiler: Identify long JavaScript tasks. Look for patterns related to IndexedDB operations.
    • Console: Log timings, record counts, and transaction lifecycle events.
  • Timing: Use console.time('label') and console.timeEnd('label') or performance.now() to measure the duration of your bulk write operations.
  • Error Inspection: Always log event.target.error in onerror handlers for detailed error information.

Conclusion

Optimizing IndexedDB for bulk writes of small records is crucial for building fast, responsive PWAs that handle large amounts of client-side data. The cornerstone of this optimization is batching operations within a single transaction. By combining this with strategies like using relaxed durability where appropriate, efficient callback management, chunking for massive datasets, offloading preparation to Web Workers, and leveraging well-built libraries, you can achieve significant performance gains.

Always measure and profile your specific use case on target devices to find the optimal balance of these techniques. By applying these best practices, you’ll ensure your PWA’s data persistence layer is a source of strength, not a bottleneck.