River has client libraries for inserting jobs in select languages like Python and Ruby, but not in every ecosystem. This page describes how to insert jobs with raw SQL from unsupported languages, a technique that generally works quite well, even though not every feature is available this way.
Minimal viable insert
Most columns on river_job get default values, so they don't all need to be specified. Three core columns have no defaults, so a minimum viable job insertion looks like:
INSERT INTO river_job (
    args,
    kind,
    max_attempts
) VALUES (
    '{"my_arg_key":"my_arg_val"}',
    'my_job',
    25
);
The value in args must be valid JSON, and it must be unmarshalable to the JobArgs that maps to the job kind being inserted (my_job in this example).
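As a convenience when inserting from another language, PostgreSQL's standard RETURNING clause can hand back the generated job ID along with the defaulted columns, which is useful for logging or for looking the job up later. A minimal sketch (the selected columns are part of the regular river_job schema):

INSERT INTO river_job (
    args,
    kind,
    max_attempts
) VALUES (
    '{"my_arg_key":"my_arg_val"}',
    'my_job',
    25
) RETURNING id, state, queue, scheduled_at;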
Unique jobs won't work using this method. They depend on a unique index with an internal format that's fairly complex to reproduce, and no attempt should be made to do so except through a well-vetted client library. Some other advanced features like workflows are also not functional.
Unique jobs need a client library
Unique jobs depend on an internal format for their unique index that should only be replicated by a client library. Uniqueness won't work with raw SQL insertion.
Notifying producers
The client will wake up every FetchPollInterval to check for new jobs, but to make sure a producer handles newly inserted jobs immediately, use pg_notify:
SELECT pg_notify(current_schema() || '.river_insert', '{"queue":"default"}');
River clients will start listening on channel names derived from Config.Schema or current_schema(), but if you know the schema River is running in, it can be substituted directly for current_schema(). The operation above also assumes the default queue, and should be adjusted if inserting to a non-default queue:
SELECT pg_notify('my_custom_schema.river_insert', '{"queue":"my_custom_queue"}');
As with many database operations, extreme use of listen/notify (thousands of invocations a second or more) can be detrimental to operational health. River debounces its use of pg_notify so that huge numbers of nearly simultaneous notifications are collapsed into a single outgoing call. If you intend to make heavy use of this feature, it's advisable to do the same.
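One simple way to approximate that debouncing from raw SQL is to insert jobs in batches and issue a single notification per batch rather than one per row. A sketch, assuming the default schema and queue (the my_job kind and args values are illustrative):

BEGIN;

INSERT INTO river_job (args, kind, max_attempts) VALUES
    ('{"record_id":1}', 'my_job', 25),
    ('{"record_id":2}', 'my_job', 25),
    ('{"record_id":3}', 'my_job', 25);

-- one notification covers the whole batch; it's delivered when the transaction commits
SELECT pg_notify(current_schema() || '.river_insert', '{"queue":"default"}');

COMMIT;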
Fully custom inserts
A number of other properties like metadata, priority, or queue can also be specified during insert to assign non-default values:
INSERT INTO river_job (
    args,
    kind,
    max_attempts,
    metadata,
    priority,
    queue,
    scheduled_at,
    tags
) VALUES (
    '{"my_arg_key":"my_arg_val"}',
    'my_job',
    25,
    '{"my_metadata_key":"my_metadata_val"}',
    2,
    'my_custom_queue',
    now() + '1 day'::interval,
    '{"tag1", "tag2", "tag3"}'
);
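When inserting to a non-default queue like this, remember that the queue in the notification payload must match for a producer to be woken immediately. A sketch tying the two together (the queue name is illustrative; if both statements run in one transaction, the notification is only delivered at commit):

INSERT INTO river_job (args, kind, max_attempts, queue) VALUES
    ('{"my_arg_key":"my_arg_val"}', 'my_job', 25, 'my_custom_queue');

SELECT pg_notify(current_schema() || '.river_insert', '{"queue":"my_custom_queue"}');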