Concurrent counter increments are a race-condition trap. The Rails idiom (load the record, increment in Ruby, save) breaks under concurrent writes. Between the read and the write, another transaction can read the same value and produce a lost update.
The Postgres-native fix is to compute the increment server-side inside an atomic INSERT ... ON CONFLICT DO UPDATE statement and gate it by a unique index.
The shape that fails
counter = RequestCounter.find_or_create_by(api_key:, billing_period:)
counter.update!(request_count: counter.request_count + 1)
Two requests for the same key arrive on two web workers. Both read request_count = 7, both compute 7 + 1, both write 8. Two requests happened, one increment is lost.
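The interleaving can be simulated deterministically in plain Ruby, no database needed. A hash stands in for the stored row, and each "transaction" takes a snapshot (the read), computes the increment in Ruby, then writes it back — the exact shape of the idiom above. The names here are illustrative, not part of the real code:

```ruby
# A hash stands in for the request_counters row.
store = { request_count: 7 }

# Both "transactions" read before either writes -- the interleaving
# two busy web workers can produce.
snapshot_a = store[:request_count] # A reads 7
snapshot_b = store[:request_count] # B reads 7

store[:request_count] = snapshot_a + 1 # A writes 8
store[:request_count] = snapshot_b + 1 # B overwrites with 8

puts store[:request_count] # => 8, not 9: one increment is lost
```

Nothing in the read-modify-write sequence forces B's read to happen after A's write, so the final value depends entirely on scheduling.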
The solution
Three pieces:
1. A unique index on the counter's identity columns. This is the conflict target.
class CreateRequestCounters < ActiveRecord::Migration[8.1]
  def change
    create_table :request_counters do |t|
      t.references :api_key, null: false, foreign_key: true
      t.references :billing_period, null: false, foreign_key: true
      t.bigint :request_count, null: false, default: 0
      t.timestamps
    end

    add_index :request_counters, %i[api_key_id billing_period_id], unique: true
  end
end
2. A class method that performs the atomic upsert. Inserts the row with request_count = 1 on first call, increments by 1 on every conflict.
class RequestCounter < ApplicationRecord
  belongs_to :api_key
  belongs_to :billing_period

  def self.bump!(api_key_id:, billing_period_id:)
    upsert(
      { api_key_id:, billing_period_id:, request_count: 1 },
      unique_by: %i[api_key_id billing_period_id],
      on_duplicate: Arel.sql(
        'request_count = request_counters.request_count + 1, updated_at = CURRENT_TIMESTAMP'
      )
    )
  end
end
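Under the hood, `upsert` with these options compiles to roughly the following statement (exact column order, placeholders, and any `RETURNING` clause vary by Rails version — this is a sketch, not captured output):

```sql
INSERT INTO request_counters
  (api_key_id, billing_period_id, request_count, created_at, updated_at)
VALUES ($1, $2, 1, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)
ON CONFLICT (api_key_id, billing_period_id)
DO UPDATE SET request_count = request_counters.request_count + 1,
              updated_at = CURRENT_TIMESTAMP;
```

The read (`request_counters.request_count`), the addition, and the write all happen inside one statement on the server, so there is no window for another transaction to slip between them.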
3. A call site that fires once per event. An after_action on the API base controller.
class ApplicationApiController < ActionController::API
  before_action :authenticate_api_key!
  after_action :bump_request_counter, if: -> { response.successful? }

  private

  def bump_request_counter
    RequestCounter.bump!(
      api_key_id: @api_key.id,
      billing_period_id: current_billing_period.id
    )
  end
end
Why a unique index
The unique index is a storage-level constraint: Postgres rejects duplicate keys at insert time, before commit. Two consequences:
- `ON CONFLICT (col_a, col_b)` needs a unique constraint on those columns; otherwise Postgres has no way to know what counts as a conflict. The `unique_by:` argument to `upsert` names that index.
- When transaction A inserts a key, Postgres locks that index entry until A commits or rolls back. A concurrent transaction B inserting the same key blocks on the same entry.
What happens when two bump! calls race
- Both transactions reach the upsert. A grabs the index entry first, inserts the row with `request_count = 1`, and holds the lock.
- B blocks on the same index entry.
- A commits. The lock releases; the row is visible.
- B unblocks, sees the key now exists, and falls through to the UPDATE branch.
- B runs `request_count = request_counters.request_count + 1` against the committed row.
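The serialization can be sketched in plain Ruby: a `Mutex` stands in for Postgres's lock on the unique-index entry, and each thread performs its whole read-modify-write while holding it. This is an analogy only — in the real system the locking is done by Postgres's index machinery, not by application code:

```ruby
store = { request_count: 0 }
index_entry_lock = Mutex.new # stands in for the lock on the unique-index entry

threads = 2.times.map do
  Thread.new do
    100.times do
      # The entire read-modify-write runs under the lock, mirroring how
      # ON CONFLICT serializes concurrent writers on the same index entry.
      index_entry_lock.synchronize do
        store[:request_count] += 1
      end
    end
  end
end
threads.each(&:join)

puts store[:request_count] # => 200: every increment survives
```

Because no thread can read the counter while another holds the lock, the lost-update interleaving from the first example is impossible here, just as it is for two racing `bump!` calls.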