Ephemeral jobs are jobs that are removed from the database immediately upon completion. Instead of being updated to `completed` and left to be eventually reaped by the job cleaner, they're purged post-haste with a `DELETE` operation. This trades the observability that a completed job row would've provided for improved operational robustness, since jobs cycle out of the database more quickly.
Use this feature judiciously: reserve it for select, high-volume jobs that particularly benefit from being removed expediently, and leave most jobs non-ephemeral so they follow their normal job lifecycle.
Ephemeral jobs are a feature of River Pro ✨. If you haven't yet, install River Pro.
Added in River Pro v0.16.0.
Basic usage
Make a job ephemeral by adding an implementation for `EphemeralOpts()` that returns `riverpro.EphemeralOpts`:
```go
type MyEphemeralJobArgs struct {
	Message string `json:"message"`
}

func (a MyEphemeralJobArgs) Kind() string { return "my_ephemeral_job" }

func (a MyEphemeralJobArgs) EphemeralOpts() riverpro.EphemeralOpts { return riverpro.EphemeralOpts{} }
```
Currently, `EphemeralOpts` has no properties, but it's reserved for future options.
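Nothing else about the job changes. As a minimal sketch (the worker name and log output are illustrative, and client setup is elided), the job is worked using River's standard worker pattern:

```go
import (
	"context"
	"fmt"

	"github.com/riverqueue/river"
)

// MyEphemeralJobWorker works MyEphemeralJobArgs the same way any other job
// is worked; nothing worker-side is specific to ephemeral jobs.
type MyEphemeralJobWorker struct {
	river.WorkerDefaults[MyEphemeralJobArgs]
}

func (w *MyEphemeralJobWorker) Work(ctx context.Context, job *river.Job[MyEphemeralJobArgs]) error {
	fmt.Printf("working ephemeral job: %s\n", job.Args.Message)
	// Returning nil marks the job successful, at which point its row is
	// deleted immediately instead of transitioning to completed.
	return nil
}
```

Registration with `river.AddWorker(workers, &MyEphemeralJobWorker{})` and insertion with `client.Insert(ctx, MyEphemeralJobArgs{Message: "hello"}, nil)` work the same as for any other job (using a River Pro client, since ephemeral jobs are a Pro feature).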
Transitions to `retryable` and `discarded`
Ephemeral jobs are deleted immediately where they'd normally transition from `running` to `completed`, but other states behave normally. When an ephemeral job fails, it transitions to either `retryable` or `discarded` (depending on whether it's exhausted its retry policy), just like any non-ephemeral job would.
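As an illustrative sketch of that behavior (the flaky call is a hypothetical placeholder, and the optional `InsertOpts()` cap on attempts is only for the example), a variant of the worker above that returns an error is retried and eventually discarded rather than deleted:

```go
// Optionally cap attempts for this job kind; River's default retry policy
// otherwise applies.
func (a MyEphemeralJobArgs) InsertOpts() river.InsertOpts {
	return river.InsertOpts{MaxAttempts: 3}
}

func (w *MyEphemeralJobWorker) Work(ctx context.Context, job *river.Job[MyEphemeralJobArgs]) error {
	if err := doSomethingFlaky(ctx, job.Args.Message); err != nil {
		// A returned error moves the ephemeral job to retryable, or to
		// discarded once its attempts are exhausted; no DELETE happens here.
		return err
	}
	// Only a successful return results in the immediate DELETE.
	return nil
}
```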
Operational advantages
In most cases, ephemeral jobs won't provide a huge advantage over non-ephemeral jobs, but they can be useful in high-throughput situations:
- Pages in Postgres B-tree indexes may split as new records are added, but when records are removed, they don't recombine without a `REINDEX`. Removing high-volume job rows immediately leaves room in indexes for new jobs to be added, which may avoid page splits.
- River's normal job removal involves doing work twice: once to complete a row from `running` to `completed`, and then another pass to delete `completed` rows. This is a nominal amount of effort for most workloads, but it might matter in the presence of huge numbers of jobs.