One of the things that made NoSQL databases like MongoDB so popular was how easy it is to just throw in a blob of JSON when that’s what you need. What you may not realise, though, is that Postgres supports JSON pretty much equally well. In addition to just throwing some JSON into a field, you can query it using all the usual SQL operations. You get JOINs, transactions, indexes etc.

JSON vs. JSONB

The first step when adding a JSON field to Postgres is to decide between JSON and JSONB. I’ll make this easy for you:

Always use JSONB, never use JSON

JSONB is just like JSON, except that it doesn’t store the actual string of JSON; instead, it stores an efficient binary representation. The only reason you might ever want to use JSON is if you need to preserve the exact original serialisation, including its whitespace and key order, which JSONB discards. This should almost never be needed: if you want a “pretty” view of your JSON, you can just use JSON.stringify to prettify it.
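To illustrate why losing the original whitespace is rarely a problem: JSON.stringify’s third argument controls indentation, so a readable rendering can always be regenerated from the parsed value. A quick sketch:

```javascript
// A parsed value (as you'd get back from a JSONB column)
const value = {author: 'ForbesLindesay', title: 'Post A'};

// Compact serialisation, and a "pretty" one regenerated on demand.
// The third argument to JSON.stringify is the indent width.
const compact = JSON.stringify(value);
const pretty = JSON.stringify(value, null, 2);

console.log(compact); // {"author":"ForbesLindesay","title":"Post A"}
console.log(pretty);
```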

Creating the Table

Now that you’ve decided on JSONB as the format, you can create the table just like you normally would.

CREATE TABLE my_data (
  id TEXT NOT NULL PRIMARY KEY,
  data JSONB NOT NULL
);

This creates a table with a primary key called id that’s of type TEXT and a data column to store our JSON data.

Reading and Writing JSON Data

If you’re using the @databases/pg client, you can read and write Postgres data just like you would any other value:

import connect, {sql} from '@databases/pg';

const db = connect();

export async function get(id) {
  const [row] = await db.query(
    sql`
      SELECT data
      FROM my_data
      WHERE id=${id}
    `
  );
  return row ? row.data : null;
}

export async function set(id, value) {
  await db.query(sql`
    INSERT INTO my_data (id, data)
    VALUES (${id}, ${value})
    ON CONFLICT (id)
    DO UPDATE SET data = EXCLUDED.data;
  `);
}

This gives us a simple key-value store for JSON blobs using a Postgres database.

Querying JSON

Imagine we’re storing some blog posts in our “NoSQL Postgres Database”:

await set('post-a', {
  author: 'ForbesLindesay',
  title: 'Post A',
  body: 'This post is about the letter A',
});
await set('post-b', {
  author: 'ForbesLindesay',
  title: 'Post B',
  body: 'This post is about the letter B',
});
await set('post-a-rebuttal', {
  author: 'JoeBloggs',
  title: 'Post A - Rebuttal',
  body: 'Forbes was wrong about the letter A',
});

Now imagine we want to get a list of all the blog posts by ForbesLindesay. The author field is buried in the JSONB field, but that doesn’t mean we can’t use it in our query.

export async function listByAuthor(author) {
  return await db.query(
    sql`
      SELECT data
      FROM my_data
      WHERE data ->> 'author' = ${author}
    `
  );
}

Here, the ->> operator means “get the value of this property as text”. It’s what you want when the value is a string, number or boolean. If the value is a nested object or array and you want to keep drilling into it, use the -> operator instead, which means “get the value of this property as JSONB”, so further operators can be chained onto the result.
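For example, if the author were stored as a nested object (a hypothetical shape, not the one used above), you would chain -> for the intermediate steps and ->> only for the final text extraction. A sketch:

```sql
-- Hypothetical: data is shaped like {"author": {"name": "ForbesLindesay"}}
SELECT data
FROM my_data
WHERE data -> 'author' ->> 'name' = 'ForbesLindesay';
```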

Hopefully it’s clear that this means you can use the full power of SQL here, but just to give another example, we could get a list of all the authors:

export async function getAuthors() {
  return (await db.query(
    sql`
      SELECT DISTINCT
        data ->> 'author' as author
      FROM my_data
    `
  )).map(({author}) => author);
}

Here, we’re extracting the author from the data, and then using SQL’s DISTINCT operator to only return each author once.
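Since both of these queries filter or group on data ->> 'author', it’s worth noting (as hinted in the introduction) that such expressions can be indexed like any other. A sketch, assuming the table above; the index name is arbitrary:

```sql
-- An expression index on the extracted author, so queries that
-- filter on data ->> 'author' can avoid a full table scan.
-- Note the extra parentheses, required for expression indexes.
CREATE INDEX my_data_author_idx ON my_data ((data ->> 'author'));
```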

Conclusion

In Postgres, you can use JSON just like any other value, and it would be easy to set up a NoSQL-like store for JSON blobs and use that as your entire database. That doesn’t necessarily mean you should. This JSON data is totally schema-less, so it’s very important that you properly validate that it matches any expected structure before inserting it into the database. It’s very useful when you need to store large JSON structures and you’re not yet sure how you’re going to want to query them, but for the most part, I still recommend having explicit fields in SQL, and using JOINs to represent nested lists etc.
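As an illustration of that validation step, here’s a minimal hand-rolled sketch of a check that could run before calling set (in practice you might reach for a schema-validation library instead; the function name and the expected fields are just the blog-post shape used above):

```javascript
// Minimal structural validation for the blog-post shape used above.
// Returns true only when the value is a plain object and every
// expected field is a non-empty string.
function isValidPost(value) {
  return (
    typeof value === 'object' &&
    value !== null &&
    !Array.isArray(value) &&
    ['author', 'title', 'body'].every(
      (key) => typeof value[key] === 'string' && value[key].length > 0
    )
  );
}

console.log(isValidPost({
  author: 'JoeBloggs',
  title: 'Post A - Rebuttal',
  body: 'Forbes was wrong about the letter A',
})); // true
console.log(isValidPost({author: 'JoeBloggs'})); // false
```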