Batch job insertion

River supports batch inserts, wherein many jobs are inserted at once using Postgres' COPY FROM protocol for optimal performance.

Insert many

Batch inserts are executed with Client.InsertMany and Client.InsertManyTx. Both take a slice of InsertManyParams structs, each of which, like a normal non-batch insert, takes job args and optional InsertOpts.

count, err := riverClient.InsertMany(ctx, []river.InsertManyParams{
    {Args: BatchInsertArgs{}},
    {Args: BatchInsertArgs{}},
    {Args: BatchInsertArgs{}},
    {Args: BatchInsertArgs{}, InsertOpts: &river.InsertOpts{Priority: 3}},
    {Args: BatchInsertArgs{}, InsertOpts: &river.InsertOpts{Priority: 4}},
})
if err != nil {
    // handle error
}
fmt.Printf("Inserted %d jobs\n", count)

See the BatchInsert example for complete code.

InsertManyTx takes a transaction, and as with InsertTx, the normal transactional enqueuing benefits apply: jobs aren't worked until the transaction commits, and are removed if it rolls back.

count, err := riverClient.InsertManyTx(ctx, tx, []river.InsertManyParams{
    {Args: BatchInsertArgs{}},
})

Normal job insertions are quite fast, so it's usually not necessary to resort to batch job insertion, but it may be desirable in situations where many jobs are being inserted at once. Under the hood, River uses Postgres' COPY FROM protocol, which dramatically reduces the number of network round trips to the database and has a few minor additional benefits like reduced logging overhead.
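When inserting a very large number of jobs, it can also help to bound the size of each batch rather than sending everything in one call. A minimal, self-contained sketch of a generic chunking helper follows; the `chunk` function and the sizes used are illustrative only, not part of River's API (in real code the slice elements would be river.InsertManyParams):

```go
package main

import "fmt"

// chunk splits items into consecutive slices of at most size elements.
// Hypothetical helper for bounding batch sizes; not part of River's API.
func chunk[T any](items []T, size int) [][]T {
	var out [][]T
	for len(items) > size {
		out = append(out, items[:size])
		items = items[size:]
	}
	if len(items) > 0 {
		out = append(out, items)
	}
	return out
}

func main() {
	// Stand-in for a []river.InsertManyParams built up in a loop.
	params := make([]int, 2500)

	// Split into batches of at most 1000; each batch would then be passed
	// to a separate InsertMany call.
	batches := chunk(params, 1000)
	fmt.Println(len(batches)) // 3 (1000 + 1000 + 500)
}
```

Each resulting chunk would be passed to its own InsertMany call, keeping individual COPY FROM operations to a manageable size.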


Batch insertion does not support unique jobs

InsertOpts' UniqueOpts are currently ignored while performing batch inserts. Job uniqueness is guaranteed through the use of a Postgres advisory lock, and holding too many of these at once could lead to contention and deadlocks across transactions.

This limitation may be removed in the future, but for now the only workaround is to fall back to single job insertion where uniqueness is required.
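One way to apply that fallback is to partition the params up front, sending jobs without unique options through the batch path and the rest through single-job insertion. The sketch below is self-contained for illustration: the struct definitions are simplified stand-ins for River's real types, and `splitUnique`/`hasUniqueOpts` are hypothetical helpers, not part of River's API.

```go
package main

import "fmt"

// Simplified stand-ins for river's types; not the real definitions.
type UniqueOpts struct {
	ByArgs bool
}

type InsertOpts struct {
	UniqueOpts UniqueOpts
}

type InsertManyParams struct {
	InsertOpts *InsertOpts
}

// hasUniqueOpts reports whether params carry any unique options.
// Hypothetical helper, not part of River's API.
func hasUniqueOpts(p InsertManyParams) bool {
	return p.InsertOpts != nil && p.InsertOpts.UniqueOpts != (UniqueOpts{})
}

// splitUnique partitions params so that non-unique jobs can go through
// batch insertion while unique jobs fall back to single-job insertion.
func splitUnique(params []InsertManyParams) (batch, single []InsertManyParams) {
	for _, p := range params {
		if hasUniqueOpts(p) {
			single = append(single, p)
		} else {
			batch = append(batch, p)
		}
	}
	return batch, single
}

func main() {
	params := []InsertManyParams{
		{},
		{InsertOpts: &InsertOpts{UniqueOpts: UniqueOpts{ByArgs: true}}},
		{},
	}
	batch, single := splitUnique(params)
	fmt.Println(len(batch), len(single)) // 2 1
}
```

The `batch` slice would then go to a single InsertMany call, while each entry in `single` would be inserted individually so its uniqueness constraints are honored.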

Using an alternate schema