
Run queries

So far we have only seen how to output the generated SQL of a builder thanks to the build method. Obviously, at some point you will want to run your query against a real database. Depending on your use case and your preferences, one of the following methods will suit you better.

As with the build method, every method mentioned below lets you pass parameters for your parameterized queries.


.run(params?: object)

This method returns a Promise which will eventually resolve with an array containing the rows returned by the driver. The Promise will be rejected if the driver forwards an error. This is probably the method you are going to use the most.

const latestPostgresPosts = await sh
    .select()
    .from('posts')
    .where('title', 'ILIKE', '$title')
    .orderBy('published_at', 'desc')
    .run({
        title: '%postgres%'
    });

// latestPostgresPosts will be an array with all the posts
// whose title contains "postgres", ordered by publication date

In an async context, don't forget to use the await keyword.

If you know your result will be a single row, you can use a destructuring pattern: const [user] = await sh.select().from('users').limit(1).run()


.debug(params?: object)

This method behaves exactly like the run method, but it will also print the result of build (i.e. the SQL query used) to the console.

.stream(sink: Generator, params?: object)

This method is very useful if you want to process rows one by one as they come out of the driver's parser. It is particularly efficient for processing large amounts of data without consuming too much CPU or memory, as no data is buffered. Ship-hold itself does no extra work on the data coming out of the driver, so this is also a good extension point if you want to add extra layers on top of ship-hold: for example, to build an ORM you could create model instances from the rows here.

The first argument is a generator acting as a data sink. You can have a look at that excellent book if you want more details about generators. The second argument is an object map with the parameters' values if you want to use a parameterized query.
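If generators used as consumers are new to you, here is a minimal, self-contained sketch of the pattern (plain Node, no ship-hold involved, and the row values are made up): the caller primes the generator with next(), then pushes each row in through next(row), and finally calls return() so the finally block runs.

```javascript
// A generator used as a data sink: it receives values through `yield`
// instead of producing them.
function* rowCounter() {
    let count = 0;
    try {
        while (true) {
            const row = yield; // suspends until the next row is pushed in
            count += 1;
            console.log(`row ${count}:`, JSON.stringify(row));
        }
    } finally {
        // runs when the producer signals completion with return()
        console.log(`done, ${count} rows`);
    }
}

// Simulate what a driver would do with the sink.
const sink = rowCounter();
sink.next();                   // prime: advance to the first `yield`
sink.next({title: 'hello'});   // each call delivers one row
sink.next({title: 'world'});
sink.return();                 // triggers the `finally` block
```

Ship-hold's stream method drives your generator the same way, calling next(row) for every row the driver parses.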

Note that such streams do not implement back pressure.
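Concretely, the producer cannot be paused from inside the sink: every row must be accepted the moment it arrives. If each row triggers slow work, one option is to buffer the rows yourself and drain the buffer at your own pace. A hypothetical sketch (plain Node, names and row values are mine):

```javascript
// A sink that only stores rows; processing happens later, outside the sink.
function* bufferingSink(buffer) {
    try {
        while (true) {
            // No way to pause the producer here, so we just store the row.
            buffer.push(yield);
        }
    } finally {
        console.log(`buffered ${buffer.length} rows`);
    }
}

const rows = [];
const bufSink = bufferingSink(rows);
bufSink.next(); // prime the generator
// The driver would call this for every incoming row, as fast as it parses:
bufSink.next({id: 1});
bufSink.next({id: 2});
bufSink.next({id: 3});
bufSink.return();

// Drain the buffer at whatever pace the slow work allows.
rows.forEach(r => {/* slow processing here */});
```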

Consider the following requirement:

I want to create a JSON dump file of all the posts in the database along with their authors' data.

Our test database has only 20,000 posts, but if you tried to do something similar with other libraries you would likely hit a wall. With ship-hold and the stream method it is pretty easy!

sh
    .select('posts.*', {value: toJson('"users".*'), as: 'author'})
    .from('posts')
    .join('users')
    .on('users.user_id', '"posts"."user_id"')
    .stream(function* () {
        console.log('[');
        let first = true;
        try {
            while (true) {
                const r = yield;
                if (first) {
                    console.log(JSON.stringify(r));
                } else {
                    console.log(',' + JSON.stringify(r));
                }
                first = false;
            }
        } catch (e) {
            throw e;
        } finally {
            console.log(']');
        }
    });
If I run the command time node ./my-program.js > dump.json on my machine, I manage to create a 13 MB file in 1.25 seconds!