
Request Deduplication: Preventing Double Form Submissions

Abe Reyes
February 6, 2026 · 5 min read


Here's a scenario every web developer has encountered: a user fills out a contact form, clicks "Submit," nothing seems to happen for a second, so they click again. Now you have two identical messages in your inbox and the user gets two confirmation emails.

Or worse: they're placing an order. Double-click on "Pay Now." Two charges on their credit card.

Disabling the submit button after the first click helps, but it's a frontend-only solution. It doesn't protect against network retries, browser refresh-resends, or API clients that retry on timeout.

You need server-side deduplication. Here's how I built it.

The Approach: Fingerprint and Check

The idea is simple:

  1. For every incoming request, create a unique fingerprint of its content
  2. Check if we've seen that fingerprint recently
  3. If yes, reject as duplicate. If no, process normally

The implementation has three parts: fingerprinting, storage, and the check itself.

Step 1: SHA-256 Fingerprinting

Every request gets reduced to a 32-character hash:

import { createHash } from 'crypto';

function createRequestFingerprint(
  data: Record<string, unknown>,
  userId?: string
): string {
  // Sort keys for deterministic serialization
  const keys = Object.keys(data).sort();

  // Build a stable string representation
  // (String() flattens nested values, so this assumes a flat payload)
  const serialized = keys
    .map(key => `${key}:${String(data[key])}`)
    .join('|');

  // Scope to user if authenticated
  const content = userId
    ? `user:${userId}|${serialized}`
    : serialized;

  // Hash and truncate
  return createHash('sha256')
    .update(content)
    .digest('hex')
    .substring(0, 32);
}

A few design decisions here:

Sorted keys ensure that {name: "Abe", email: "abe@test.com"} and {email: "abe@test.com", name: "Abe"} produce the same fingerprint. Order shouldn't matter.

User scoping means two different users submitting the same form content won't collide. User A's "Contact me" and User B's "Contact me" are treated as separate requests.

Truncation to 32 chars keeps keys small while preserving collision resistance: 32 hex characters of a SHA-256 digest is 128 bits, far more than this use case needs.
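A quick way to sanity-check the first two properties, using the function above:

// Key order doesn't change the fingerprint
const a = createRequestFingerprint({ name: 'Abe', email: 'abe@test.com' });
const b = createRequestFingerprint({ email: 'abe@test.com', name: 'Abe' });
console.log(a === b); // true

// Same content, different users: different fingerprints
const c = createRequestFingerprint({ message: 'Contact me' }, 'user-a');
const d = createRequestFingerprint({ message: 'Contact me' }, 'user-b');
console.log(c === d); // false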

Step 2: Atomic Check with Redis

This is the critical part. We need a test-and-set operation that's atomic — meaning no two requests can both think they're "first":

// Assumes `redis` is a connected node-redis (v4+) client
async function checkAndMarkRequest(
  fingerprint: string,
  operation: string
): Promise<boolean> {
  const key = `dedup:${operation}:${fingerprint}`;

  // SET with NX + EX: atomic test-and-set with auto-expiry
  const result = await redis.set(key, new Date().toISOString(), {
    NX: true,  // Only set if key doesn't exist
    EX: 60,    // Auto-expire after 60 seconds
  });

  // 'OK' means we set the key (first request)
  // null means key already existed (duplicate)
  return result === 'OK';
}

NX (Not eXists) is the key. Redis guarantees that if two requests race to set the same key, only one succeeds. This is atomic at the Redis level — no race conditions.

EX (Expiry) auto-cleans the key after 60 seconds. This means the same form can be submitted again after a minute, which handles legitimate re-submissions.
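In practice, that means the first caller wins and every racing duplicate sees false (fingerprint shortened for readability):

const first = await checkAndMarkRequest('a1b2c3', 'contact-form');
// first === true: we set the key, so process the request

const second = await checkAndMarkRequest('a1b2c3', 'contact-form');
// second === false: the key already existed, so treat as a duplicate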

Step 3: The Middleware

Putting it all together in an API route:

async function handleFormSubmission(request: Request, userId?: string) {
  const body = await request.json();

  // Create fingerprint from the form data,
  // scoped to the authenticated user (if any) passed in by your auth layer
  const fingerprint = createRequestFingerprint(body, userId);

  // Check for duplicate
  const isNew = await checkAndMarkRequest(fingerprint, 'contact-form');

  if (!isNew) {
    return new Response(
      JSON.stringify({
        success: true,  // Don't expose dedup to the user
        message: 'Your message has been received',
      }),
      { status: 200 }
    );
  }

  // Process the actual submission
  await saveToDatabase(body);
  await sendConfirmationEmail(body.email);

  return new Response(
    JSON.stringify({ success: true }),
    { status: 200 }
  );
}

Notice the duplicate response returns 200 with a success message, not a 409 Conflict. From the user's perspective, their form was submitted successfully. They don't need to know (or care) that their second click was silently ignored.

Graceful Degradation

What happens when Redis is down? If your deduplication blocks all requests when Redis is unavailable, you've traded one problem (duplicates) for a worse one (total outage).

The solution: fail open.

async function checkAndMarkRequest(
  fingerprint: string,
  operation: string
): Promise<boolean> {
  try {
    const key = `dedup:${operation}:${fingerprint}`;
    const result = await redis.set(key, new Date().toISOString(), {
      NX: true,
      EX: 60,
    });
    return result === 'OK';
  } catch {
    // Redis unavailable — allow the request
    // A potential duplicate is better than a total outage
    return true;
  }
}

This pairs with the circuit breaker pattern. When Redis fails enough times, the circuit breaker trips and the dedup layer stops even attempting Redis calls. Requests flow through without dedup until Redis recovers.
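A minimal sketch of that breaker, assuming a variant of checkAndMarkRequest that lets Redis errors propagate instead of catching them (the names and thresholds here are illustrative):

// Module-level breaker state (single process; illustrative values)
let consecutiveFailures = 0;
let circuitOpenUntil = 0;

const FAILURE_THRESHOLD = 5;  // trip after 5 consecutive Redis errors
const RECOVERY_MS = 30_000;   // skip Redis for 30 seconds once tripped

async function checkWithCircuitBreaker(
  fingerprint: string,
  operation: string
): Promise<boolean> {
  // Circuit open: don't even attempt Redis, fail open immediately
  if (Date.now() < circuitOpenUntil) {
    return true;
  }

  try {
    const isNew = await checkAndMarkRequest(fingerprint, operation);
    consecutiveFailures = 0;  // any success resets the breaker
    return isNew;
  } catch {
    consecutiveFailures += 1;
    if (consecutiveFailures >= FAILURE_THRESHOLD) {
      circuitOpenUntil = Date.now() + RECOVERY_MS;
      consecutiveFailures = 0;
    }
    return true;  // fail open, as above
  }
}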

Real-World Results

Since deploying this pattern on NeedThisDone.com:

  • Zero duplicate form submissions from double-clicks
  • Zero duplicate payment attempts at checkout
  • No user-facing impact when Redis has brief outages (the system degrades gracefully)
  • 60-second TTL handles all legitimate use cases without manual cleanup

The whole implementation is about 50 lines of code. But those 50 lines prevent a class of bugs that would otherwise require manual intervention to fix.

When to Use This

Not every endpoint needs deduplication. Here's my rule of thumb:

| Endpoint Type | Dedup? | Why |
| --- | --- | --- |
| Form submissions (contact, quote) | Yes | Users double-click |
| Payment processing | Yes | Must prevent double charges |
| Order creation | Yes | Network retries |
| Data reads (GET requests) | No | Reads are idempotent |
| Search queries | No | Duplicates are harmless |
| Webhook handlers | Maybe | Depends on the webhook provider |

If processing a duplicate would cause harm (financial, data corruption, spam), add deduplication. If duplicates are harmless, don't add complexity.
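For webhooks specifically, many providers attach a unique event ID to each delivery, so you can often use that ID as the fingerprint instead of hashing the payload. A sketch, with a hypothetical event shape:

async function handleWebhook(event: { id: string; type: string }) {
  // The provider's event ID is already unique per event,
  // so it can stand in for a content hash
  const isNew = await checkAndMarkRequest(event.id, `webhook:${event.type}`);

  if (!isNew) {
    // Already handled: acknowledge so the provider stops retrying
    return new Response(null, { status: 200 });
  }

  // ... process the event, then acknowledge
  return new Response(null, { status: 200 });
}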

Want reliability patterns like this in your application? Get in touch.
