Developer guide
Shopify webhook idempotency in Rails
A Rails guide to Shopify webhook idempotency using event IDs, durable deduplication, and processing patterns that survive retries, duplicates, and delayed deliveries.
What idempotency actually means for Shopify webhooks
Shopify is explicit about two things that matter a lot in production: you might receive the same webhook event more than once, and webhook delivery is not always guaranteed. Those two facts are enough to kill the beginner assumption that a webhook is a neat, single message that arrives once, in order, and can be processed without retries or reconciliation.
In other words, idempotency is not a polish step. It is part of the contract. If the same
event arrives twice, your final state should still be correct. If your job retries halfway
through processing, your final state should still be correct. If the update
webhook lands before the create webhook, your final state should still be
correct.
“Your app should process webhooks using idempotent operations.”
For a Rails app, that leads to a simple operating model:
- verify the webhook origin using the raw request body
- record receipt durably using a uniqueness guarantee
- return a success response quickly
- hand the real work to background jobs
- make those jobs replay-safe too
The working rule
Idempotency is not “avoid double inserts in the controller.” It is “the same event can hit my system twice and the business outcome is still the same.”
Choose the right identity key before you write any Rails code
Shopify gives you several useful webhook headers, but they are not interchangeable. The
one that matters for duplicate detection is X-Shopify-Event-Id. Shopify says
that the same event ID across more than one webhook indicates a duplicate event.
That sounds obvious until somebody reaches for X-Shopify-Webhook-Id because
it also looks unique and has the word “webhook” in it. Do not do that. The event ID is
about the underlying Shopify event. The webhook ID is better treated as a delivery-level
trace value for logs, dashboards, and support debugging. If you deduplicate on the wrong
thing, your system becomes “reliably incorrect,” which is a very expensive kind of wrong.
| Header | Use it for | Do not use it for |
|---|---|---|
| X-Shopify-Event-Id | Duplicate-event detection | Human debugging labels |
| X-Shopify-Webhook-Id | Tracing a specific delivery attempt | Primary idempotency key |
| X-Shopify-Triggered-At | Ordering hints and observability | Duplicate detection |
| X-Shopify-Hmac-SHA256 | Authenticity verification | Business identity |
There is one important nuance from Shopify’s docs that many teams miss. If you have more
than one subscription for the same topic, you can receive several webhooks for the same
event, one per subscription. That means the most common dedupe key for a single endpoint
is shop_domain + topic + event_id, but if you intentionally run multiple
subscriptions for the same topic into the same processing path, you may need to include a
subscription discriminator such as an endpoint label or developer-supplied subscription
name.
Most apps do not need a very clever identity strategy. They need a very boring one that is correct under concurrency.
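To make that identity concrete, here is a minimal sketch of composing the dedupe key from the headers described above. The header names are real Shopify headers; the helper itself and the include_subscription flag are illustrative assumptions, not part of any library.

```ruby
# Sketch: build the dedupe identity shop_domain + topic + event_id.
# The optional subscription discriminator (X-Shopify-Name here) is an
# assumption for setups with multiple subscriptions on one topic.
def shopify_dedupe_key(headers, include_subscription: false)
  parts = [
    headers.fetch("X-Shopify-Shop-Domain"),
    headers.fetch("X-Shopify-Topic"),
    headers.fetch("X-Shopify-Event-Id"),
  ]
  # Only add a discriminator when several subscriptions for the same
  # topic feed the same processing path.
  parts << headers.fetch("X-Shopify-Name") if include_subscription
  parts.join("|")
end
```

In practice this key maps directly onto the unique index in the next section rather than living as a concatenated string.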
A Rails deduplication table that survives production
If your dedupe layer lives in memory, it dies on deploy. If it lives in a process-local mutex, it dies under horizontal scaling. If it lives only in Redis with a short TTL and no durable source of truth, it dies the first time an outage or delayed retry lasts longer than your optimism window.
Shopify’s duplicate-handling guidance explicitly says to use persistent storage. In Rails, that usually means a table backed by a unique index. The database is the only component on your stack that will win an argument against two Puma workers, three job threads, a retry storm, and one developer saying “this race condition seems unlikely.”
1. A practical receipt model
Model webhook receipt separately from business processing. You want a durable record of the envelope even if the actual work fails later.
# db/migrate/20260312000000_create_shopify_webhook_receipts.rb
class CreateShopifyWebhookReceipts < ActiveRecord::Migration[7.1]
def change
create_table :shopify_webhook_receipts do |t|
t.string :shop_domain, null: false
t.string :topic, null: false
t.string :event_id, null: false
t.string :webhook_id
t.string :subscription_name
t.datetime :triggered_at
t.string :payload_sha256, null: false
t.integer :status, null: false, default: 0
t.datetime :processed_at
t.text :last_error
t.timestamps
end
add_index :shopify_webhook_receipts,
[:shop_domain, :topic, :event_id],
unique: true,
name: "index_shopify_webhook_receipts_on_shop_topic_event"
add_index :shopify_webhook_receipts, :processed_at
add_index :shopify_webhook_receipts, :created_at
end
end

A few notes on those columns:
- event_id is the dedupe identity from Shopify.
- webhook_id is worth storing for tracing specific deliveries.
- payload_sha256 is not your idempotency key, but it is great for forensics.
- status and processed_at tell you whether the business work actually completed.
- last_error gives support and ops something more useful than “something went weird.”
2. Keep the model small and honest
# app/models/shopify_webhook_receipt.rb
class ShopifyWebhookReceipt < ApplicationRecord
enum :status, {
received: 0,
processing: 1,
processed: 2,
failed: 3,
}
validates :shop_domain, :topic, :event_id, :payload_sha256, presence: true
end

Resist the temptation to turn this into a magical god-record that also knows how to sync orders, notify Sentry, make tea, and heal your childhood. Its job is simple: represent the durable receipt and processing state of one Shopify event.
3. Atomically decide whether this event is new
On PostgreSQL and SQLite, Rails gives you insert_all and
upsert_all with unique_by. That is a clean way to make the
database decide whether the receipt is new without a check-then-insert race.
# app/services/shopify_webhooks/receipt_gate.rb
module ShopifyWebhooks
class ReceiptGate
Result = Struct.new(:accepted, :receipt_id, keyword_init: true)
UNIQUE_INDEX = :index_shopify_webhook_receipts_on_shop_topic_event
def self.call(headers:, raw_body:)
now = Time.current
attrs = {
shop_domain: headers.fetch("HTTP_X_SHOPIFY_SHOP_DOMAIN"),
topic: headers.fetch("HTTP_X_SHOPIFY_TOPIC"),
event_id: headers.fetch("HTTP_X_SHOPIFY_EVENT_ID"),
webhook_id: headers["HTTP_X_SHOPIFY_WEBHOOK_ID"],
subscription_name: headers["HTTP_X_SHOPIFY_NAME"],
triggered_at: parse_time(headers["HTTP_X_SHOPIFY_TRIGGERED_AT"]),
payload_sha256: Digest::SHA256.hexdigest(raw_body),
status: ShopifyWebhookReceipt.statuses[:received],
created_at: now,
updated_at: now,
}
insert_result = ShopifyWebhookReceipt.insert_all(
[attrs],
unique_by: UNIQUE_INDEX,
returning: %w[id],
)
inserted_row = insert_result.rows.first
if inserted_row
Result.new(accepted: true, receipt_id: inserted_row.first)
else
Result.new(accepted: false, receipt_id: nil)
end
rescue ActiveRecord::RecordNotUnique
Result.new(accepted: false, receipt_id: nil)
end
def self.parse_time(value)
value.present? ? Time.zone.parse(value) : nil
end
private_class_method :parse_time
end
end

The important thing here is not the exact Ruby syntax. The important thing is that the
acceptance decision is made by a durable uniqueness constraint, not by a friendly little
exists? query that loses a race under concurrency.
If you are on MySQL, note the portability caveat. Rails documents
unique_by for insert_all and upsert_all as a
PostgreSQL and SQLite feature. In MySQL-backed apps, a common fallback is to attempt the
insert and treat ActiveRecord::RecordNotUnique as a duplicate receipt rather
than pre-checking in application code.
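That fallback can be as small as a create-and-rescue wrapper. A minimal sketch, assuming the ShopifyWebhookReceipt model and unique index defined above (the record_receipt name is invented here for illustration):

```ruby
# Sketch of the MySQL-friendly fallback: attempt the insert and let the
# unique index arbitrate, instead of relying on unique_by support.
# Returns the same accepted/receipt_id shape as ReceiptGate above.
def record_receipt(attrs)
  receipt = ShopifyWebhookReceipt.create!(attrs)
  { accepted: true, receipt_id: receipt.id }
rescue ActiveRecord::RecordNotUnique
  # The unique index on [shop_domain, topic, event_id] rejected the row,
  # so this event was already received.
  { accepted: false, receipt_id: nil }
end
```

The acceptance decision still belongs to the database; the application code only interprets the constraint violation.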
Your controller should verify fast, persist fast, and acknowledge fast
Shopify expects your endpoint to accept the connection quickly and complete the full request in under five seconds. If there is no response or you return an error, Shopify retries, and after repeated failures the subscription can be removed. So the controller has one real job: authenticate, record, enqueue, acknowledge. It should not try to run your entire product sync, inventory rebuild, ERP handshake, and spiritual journey inline.
Also, HMAC verification must use the raw request body. In Rack-based frameworks such as
Rails, Shopify notes that the signature header is exposed as
HTTP_X_SHOPIFY_HMAC_SHA256. If you parse and mutate the body before computing
the digest, congratulations, you have built a signature checker for a different payload.
# app/controllers/webhooks/shopify_controller.rb
class Webhooks::ShopifyController < ActionController::API
def create
raw_body = request.raw_post
unless valid_shopify_hmac?(raw_body)
head :unauthorized
return
end
gate = ShopifyWebhooks::ReceiptGate.call(headers: request.headers, raw_body: raw_body)
if gate.accepted
ShopifyWebhookProcessingJob.perform_later(
webhook_receipt_id: gate.receipt_id,
raw_body: raw_body,
)
end
head :ok
end
private
def valid_shopify_hmac?(raw_body)
received_hmac = request.headers["HTTP_X_SHOPIFY_HMAC_SHA256"].to_s
secret = ENV.fetch("SHOPIFY_API_SECRET")
digest = OpenSSL::HMAC.digest("sha256", secret, raw_body)
expected_hmac = Base64.strict_encode64(digest)
ActiveSupport::SecurityUtils.secure_compare(expected_hmac, received_hmac)
rescue KeyError
false
end
end

Notice what this controller does not do:
- it does not deserialize into fifteen app models
- it does not call third-party APIs inline
- it does not hold open the request while jobs “mostly finish”
- it does not explode on duplicates and turn a healthy retry into a 500
If the receipt already exists, returning 200 OK is usually the right move. The
event has already been accepted by your durable dedupe layer. That is success, not an error.
Duplicate webhook delivery is normal enough that paging the team for it by default is like
paging the team because water is wet.
Design jobs so retries are boring
A lot of teams stop after deduplicating the HTTP request. That is only half the story. Shopify can retry webhook delivery. Your queue backend can retry failed jobs. Engineers can manually replay work. If the job itself is not idempotent, you have simply moved the bug downstream where it becomes harder to see and more expensive to unwind.
Rails makes retries straightforward with Active Job, but retry support is only safe when the job can run more than once without changing the intended result. The goal is boring retries. Not “exciting retries with duplicate rows and apology emails.”
1. Separate receipt state from domain state
# app/jobs/shopify_webhook_processing_job.rb
class ShopifyWebhookProcessingJob < ApplicationJob
queue_as :webhooks
retry_on Net::OpenTimeout, wait: 5.seconds, attempts: 10
retry_on ActiveRecord::Deadlocked, wait: 2.seconds, attempts: 5
def perform(webhook_receipt_id:, raw_body:)
receipt = ShopifyWebhookReceipt.find(webhook_receipt_id)
return if receipt.processed?
receipt.update!(status: :processing)
payload = JSON.parse(raw_body)
ShopifyWebhooks::Dispatcher.call(receipt:, payload:)
receipt.update!(status: :processed, processed_at: Time.current, last_error: nil)
rescue => e
receipt&.update!(status: :failed, last_error: "#{e.class}: #{e.message}")
raise
end
end

This job is still not “fully idempotent” just because it checks processed?.
That only protects the top-level receipt record. The dispatched business operations also
need stable write patterns.
2. Upsert by Shopify-owned identifiers
If the webhook concerns an order, product, customer, or fulfillment, use the Shopify object identity as your app-level uniqueness boundary. Do not create “whatever seems new enough” based on timestamps and vibes.
# app/services/shopify_webhooks/handlers/orders_updated.rb
module ShopifyWebhooks
module Handlers
class OrdersUpdated
UNIQUE_INDEX = :index_app_orders_on_shop_domain_and_shopify_order_id
def self.call(receipt:, payload:)
now = Time.current
AppOrder.upsert(
{
shop_domain: receipt.shop_domain,
shopify_order_id: payload.fetch("admin_graphql_api_id"),
order_number: payload["order_number"],
email: payload["email"],
financial_status: payload["financial_status"],
fulfillment_status: payload["fulfillment_status"],
synced_at: now,
updated_at: now,
created_at: now,
},
unique_by: UNIQUE_INDEX,
)
end
end
end
end

This is the correct place to be boring. If the same webhook is processed twice, the same order row is updated twice to the same state. Nobody notices. Nobody gets billed twice. Nobody emails you with “interesting issue!” which is support language for “your app bit me.”
3. Guard outbound side effects too
The nastiest duplicates are usually not duplicate rows. They are duplicate side effects: charging something twice, sending the same merchant email twice, creating the same ERP task twice, or hitting a third-party API twice because your job retried after the remote service timed out right after success. That is how weekends get cancelled.
Use a separate idempotency record for outbound side effects keyed to the business action you are about to perform.
# db/migrate/20260312000001_create_outbound_actions.rb
class CreateOutboundActions < ActiveRecord::Migration[7.1]
def change
create_table :outbound_actions do |t|
t.string :kind, null: false
t.string :shop_domain, null: false
t.string :subject_key, null: false
t.datetime :performed_at
t.timestamps
end
add_index :outbound_actions,
[:kind, :shop_domain, :subject_key],
unique: true,
name: "index_outbound_actions_on_kind_shop_subject"
end
end

module OutboundActions
class Gate
UNIQUE_INDEX = :index_outbound_actions_on_kind_shop_subject
def self.allow?(kind:, shop_domain:, subject_key:)
result = OutboundAction.insert_all(
[{
kind: kind,
shop_domain: shop_domain,
subject_key: subject_key,
performed_at: Time.current,
created_at: Time.current,
updated_at: Time.current,
}],
unique_by: UNIQUE_INDEX,
returning: %w[id],
)
result.rows.any?
rescue ActiveRecord::RecordNotUnique
false
end
end
end

Think of this as a second idempotency wall. The first wall protects receipt. The second wall protects consequences.
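In a handler, the gate wraps the side effect directly. A minimal sketch of that shape, using a generic perform_once helper (a name invented here for illustration, not part of the app above):

```ruby
# Sketch: run a side effect at most once behind a gate. `gate` is anything
# with the allow? signature above (OutboundActions::Gate in the real app);
# the block is the external call that must not repeat.
def perform_once(gate, kind:, shop_domain:, subject_key:)
  return :skipped unless gate.allow?(kind: kind, shop_domain: shop_domain, subject_key: subject_key)
  yield
  :performed
end
```

A handler might call this with kind set to something like "order_confirmation_email" and the order GID as subject_key, so replays and retries of the same event skip the block.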
Ordering gaps, missed deliveries, and why reconciliation still matters
Shopify does not guarantee webhook ordering within a topic or across topics for the same resource. It also says webhook delivery is not always guaranteed and recommends reconciliation jobs. This matters because teams often use “idempotent” to mean “reliable.” They are related, but they are not the same thing.
Idempotency answers this question: “If I process this same event again, do I stay correct?” Reconciliation answers a different question: “What if I never got the event at all?”
“Because webhook delivery isn't always guaranteed, you should implement reconciliation jobs.”
The reliable production pattern is therefore:
- webhooks for near-real-time reaction
- Admin API reads for authoritative current state
- scheduled reconciliation for missed or out-of-order events
- metrics for delivery failure rate, retries, and response time
For high-value entities such as orders, fulfillments, subscriptions, or billing data, the webhook should usually act as a trigger to sync authoritative current state, not as a sacred scroll whose payload you trust forever. In practical terms, a webhook about an order often means “go reconcile this order now” rather than “this payload alone is the whole truth.”
# app/jobs/reconcile_shopify_orders_job.rb
class ReconcileShopifyOrdersJob < ApplicationJob
queue_as :reconciliation
def perform(shop_id:, updated_at_min: 2.hours.ago)
shop = Shop.find(shop_id)
ShopifyAdmin::OrdersFetcher.each_page(shop:, updated_at_min:) do |orders|
orders.each do |order|
AppOrder.upsert(
{
shop_domain: shop.shopify_domain,
shopify_order_id: order.id,
order_number: order.order_number,
email: order.email,
financial_status: order.display_financial_status,
fulfillment_status: order.display_fulfillment_status,
synced_at: Time.current,
updated_at: Time.current,
created_at: Time.current,
},
unique_by: :index_app_orders_on_shop_domain_and_shopify_order_id,
)
end
end
end
end

Reconciliation is not an admission of failure. It is an admission that networks, queues, vendors, and your own code all occasionally behave like raccoons in a server room.
How to test the ugly cases before production tests them for you
The failure modes here are known in advance, which is great news because it means you can write tests for them before a merchant discovers them during Black Friday. Your test suite should treat duplicates, retries, and ordering weirdness as first-class behavior.
1. Test that the same event is accepted once
require "test_helper"
class Webhooks::ShopifyControllerTest < ActionDispatch::IntegrationTest
include ActiveJob::TestHelper
test "duplicate webhook only enqueues once" do
headers = shopify_headers(
event_id: "evt-123",
webhook_id: "wh-1",
topic: "orders/updated",
)
raw_body = { admin_graphql_api_id: "gid://shopify/Order/1" }.to_json
assert_enqueued_jobs 1 do
post "/webhooks/shopify", params: raw_body, headers: signed_headers(headers, raw_body)
post "/webhooks/shopify", params: raw_body, headers: signed_headers(headers, raw_body)
end
assert_response :success
assert_equal 1, ShopifyWebhookReceipt.where(event_id: "evt-123").count
end
end

2. Test that a job can safely run twice
require "test_helper"
class ShopifyWebhookProcessingJobTest < ActiveJob::TestCase
test "processing twice does not create duplicate orders" do
receipt = shopify_webhook_receipts(:orders_updated)
payload = { admin_graphql_api_id: "gid://shopify/Order/1", order_number: 1001 }.to_json
2.times do
ShopifyWebhookProcessingJob.perform_now(
webhook_receipt_id: receipt.id,
raw_body: payload,
)
end
assert_equal 1, AppOrder.where(shopify_order_id: "gid://shopify/Order/1").count
end
end

3. Test out-of-order arrivals
If orders/updated lands before orders/create, your model should
still end up valid after both process. That usually means upserting current state, not
assuming a specific event chronology.
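One common guard worth testing, assuming you store the X-Shopify-Triggered-At hint: ignore writes that are older than what you already hold, so a late-arriving stale event cannot clobber newer state. A toy in-memory sketch of the property (the real version would live around your database upsert):

```ruby
# Toy upsert keyed on the Shopify GID that drops stale writes using a
# triggered_at timestamp. Illustrative only; field names mirror the
# payloads used earlier in this guide.
def apply_event(store, payload)
  key = payload.fetch("admin_graphql_api_id")
  current = store[key]
  if current.nil? || payload.fetch("triggered_at") >= current.fetch("triggered_at")
    store[key] = payload
  end
  store
end
```

With this guard, applying the update first and the create second still leaves the row in its newest state.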
4. Test failed side effects with retry
Simulate the classic horror show: the remote API succeeds, your network times out, the job raises, and the retry runs. Your outbound action gate should block the second attempt from repeating the external side effect.
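That scenario can be simulated without any network at all. A toy sketch, with an in-memory stand-in for the durable outbound-actions table above:

```ruby
# Toy simulation: the job "runs" twice (success, timeout, retry), but the
# gate lets the external side effect through exactly once. MemoryGate is
# an illustrative stand-in for the OutboundActions table.
class MemoryGate
  def initialize
    @seen = {}
  end

  def allow?(key)
    return false if @seen[key]
    @seen[key] = true
  end
end

gate = MemoryGate.new
external_calls = 0
2.times do
  # Each iteration is one job attempt.
  next unless gate.allow?("erp-task:gid://shopify/Order/1")
  external_calls += 1 # the external API call that must not repeat
end
```

A real test would assert the same thing against the database-backed gate: two job runs, one recorded outbound action.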
5. Test manual replay
Give yourself an admin-safe replay path for a receipt or a shop reconciliation job. If your system cannot survive operator replays, it cannot really survive production either.
The test philosophy
Do not just test that the happy path works once. Test that the same work can be attempted twice and the result is still boringly correct.
FAQ
Should I deduplicate on `X-Shopify-Webhook-Id` or `X-Shopify-Event-Id`?
Use `X-Shopify-Event-Id` to detect duplicate events. Treat `X-Shopify-Webhook-Id` as a delivery-level identifier for logs and tracing.
Is Redis enough for webhook deduplication?
Not as the core guarantee. Redis can help with throughput, but the authoritative idempotency record should live in durable storage that survives restarts, deploys, and retry storms.
If I deduplicate the webhook controller, am I done?
No. Your downstream jobs and side effects also need to be idempotent. Otherwise the controller is clean while the business outcome still duplicates.
Do webhooks remove the need for periodic sync jobs?
No. Shopify explicitly recommends reconciliation because webhook delivery is not guaranteed.