As well as being able to store JSON in the database, either as plain text (the json type) or in a decomposed binary format (the jsonb type), PostgreSQL has quite a number of useful utility functions for creating JSON from non-JSON data. In many cases, clients don't want to store the JSON or process it in any way; they just want to get hold of it outside the database. There are subtle differences between the two types: jsonb eliminates whitespace and duplicate object keys, and mangles the order of object keys.
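To see that difference, compare the two build functions on the same input; the exact output formatting below is indicative rather than guaranteed:

    select json_build_object('b', 1, 'a', 2, 'b', 3);
    -- {"b" : 1, "a" : 2, "b" : 3}     (text kept exactly as constructed)
    select jsonb_build_object('b', 1, 'a', 2, 'b', 3);
    -- {"a": 2, "b": 3}                (duplicate key collapsed, key order changed)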
PostgreSQL 9.5 will introduce jsonb equivalents of pretty much all the json building functions, namely jsonb_agg, jsonb_object_agg, jsonb_build_array, jsonb_build_object, jsonb_object and to_jsonb. The exceptions are the legacy functions array_to_json and row_to_json, for which we provided no jsonb equivalents; their use cases are covered by to_jsonb.
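For example, where you might have used row_to_json to serialize a row as json, to_jsonb fills the same role on the jsonb side; the table name here is purely illustrative:

    -- "items" is a hypothetical table used only for illustration
    select row_to_json(i) from items i;   -- produces json
    select to_jsonb(i)    from items i;   -- produces jsonb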
If object key order preservation is important to you, or you need to be able to have duplicate keys, then you definitely don't want to be using the jsonb functions. But beyond that, if you don't want to store the json or process it in any way, the jsonb functions are generally overkill, and they are definitely slower. A recent benchmark I did for a client showed a query using the jsonb functions to be slower by a factor of about 4. When you think about it, this isn't entirely surprising: not only is it more work to turn data into jsonb, but when we're done we have to turn it back into a piece of text to deliver to the client. With json, once you've constructed it, it's already text, so there's no extra work.
So don't assume that the jsonb functions are always going to be faster - in many cases they are not. Other things being equal, you will often want to be using the json generating functions.
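As a concrete sketch of that deliver-straight-to-the-client pattern, a query like the following builds the result as json text in one pass; the table and column names are invented for the example:

    -- "orders" and its columns are hypothetical
    select json_agg(o)
    from   (select id, customer_id, total from orders) o;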
On the other hand, if you want to store the data for later processing, e.g. by examining the contents of a field inside the json, jsonb is probably the way to go.
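A minimal sketch of that storage pattern, again with hypothetical table and key names:

    -- store documents as jsonb so individual fields can be examined later
    create table docs (id serial primary key, payload jsonb);
    insert into docs (payload) values ('{"status": "active", "score": 10}');
    select id from docs where payload ->> 'status' = 'active';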
And of course, if speed is an issue you need to benchmark your application.
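In psql, a rough first comparison can be as simple as timing the two variants of the same query; the table here is hypothetical, and your real workload is the benchmark that actually matters:

    \timing on
    select json_agg(t)  from big_table t;   -- json variant
    select jsonb_agg(t) from big_table t;   -- jsonb variant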