
Unconventional PostgreSQL Optimizations

> The index is 214 MB! That's almost half the size of the entire table. So the analysts are happy, but you? Not so much...

This is part of a broader trade-off: write amplification. You'd of course want the most precise index possible, but no matter how you cut it, you are incurring extra I/O on every write - one for the tuple, plus one per index. How you index things is heavily influenced by the mix of reads and writes, and this is why we have data warehouses/read replicas in the first place: they let us avoid write amplification on the write path, while still having fast filtered reads (that are slightly delayed).

If you're dealing with <ridiculous number of users>, there is a good chance that you don't want to be putting BI/OLAP indices on your OLTP database. You probably don't have enough users to worry about this, but if you ever find that your writes are becoming an issue, this is something to consider.

zamalek (9 hours ago)

Would be nice if PG supported clustered indexes (Index Organized Tables in Oracle speak) as an option. If you have a table that's accessed mostly the same way, you can get an index without the write amplification, because the table is the index.

SigmundA (9 hours ago)

It does support index organized tables with the CLUSTER command - or did you mean something else?
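
For reference, roughly what that looks like (table and index names are hypothetical). Worth noting that CLUSTER is a one-time physical reordering: it takes an exclusive lock while it runs, and the order is not maintained by later writes, so it has to be re-run periodically:

    CREATE INDEX orders_created_at_idx ON orders (created_at);

    -- Rewrite the table in index order (one-off; later inserts/updates are not kept in order).
    CLUSTER orders USING orders_created_at_idx;

    -- Subsequent runs reuse the previously chosen index.
    CLUSTER orders;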

wreath (19 minutes ago)

Clustered indexes only save up to 2x write amplification in the very rare case where you're indexing the entire table (e.g. if it has very few columns).

However, that is usually the least of your concerns with write amplification. If you don't batch your writes, you can easily get 100x write amplification. For any primary key or any other index not strongly correlated with your INSERTs, you can get perhaps another 100x write amplification even if you batch your writes.

giovannibonetti (9 hours ago)

For inserts, you cannot escape writing into the base table and all indexes. However, my understanding is that for updates PostgreSQL has a write amplification problem: each time a row is updated, a new row version is created (to implement MVCC) at a new physical location in the heap, so all indexes need to be updated to point to the new location - even those not containing the updated columns.

OTOH, with a heap-less (aka. clustered, aka. index organized) table, you would only have to update the indexes containing the columns that are actually being updated. You don't need to touch any other index. Furthermore, only if you are updating a key column would you physically "move" the entry into a different part of the B-tree. If you update an included column (PK columns are automatically "included" in all secondary indexes, even if not explicitly mentioned in the index definition), you can do that in-place, without moving the entry.

Here is how this works in SQL Server - consider the following example:

    CREATE TABLE T (
        ID int,
        NAME nvarchar(255) NOT NULL,
        AMOUNT int NOT NULL,
        CONSTRAINT T_PK PRIMARY KEY (ID)
    );
    GO
    CREATE INDEX T_I1 ON T (NAME);
    GO
    CREATE INDEX T_I2 ON T (AMOUNT);

Now, doing this...

    UPDATE T SET AMOUNT = 42 WHERE ID = 100;

...will only write to T_PK and T_I2, but not T_I1. Furthermore, T_PK's entry will not need to be moved to a different place in the B-tree. SQL Server uses row versioning similar to PostgreSQL, so it's conceivable that PostgreSQL could behave similarly to SQL Server if it supported clustered (index-organized) tables.

branko_d (32 minutes ago)

>in the very rare case where you're indexing the entire table (e.g. if it has very few columns).

Not sure I follow - most tables are accessed primarily in one way (the primary key), while maybe sometimes in others for analysis. Having the PK written twice because it's almost always indexed is normally a waste and a good candidate for a clustered index. So much so that many DBs like SQLite and MySQL always do clustered indexes on the primary key, because their storage engine is built such that tables are a B-tree anyway, vs PG which has separate B-tree indexes and heap tables. MSSQL and Oracle give you a choice whether the table is an index structure or a heap.

If you have very specific use-case tables, they can typically have a clustered index and no secondary indexes. You can still scan them for ad-hoc analysis, but you get better insert performance and space usage because you aren't double-writing to the heap and a PK index like you would in PG.

As far as batch writes, that is a separate issue and has to do with whether batching even makes sense for durability. If you need to commit a single random row because something occurred, you can't batch that up and maintain consistency. If you're bulk loading data, sure - it's common practice to commit in batches there. Clustered indexes could still be the difference between 100x and 200x write amplification if you have to insert both an index row and a heap row vs just a single clustered index row.

SigmundA (8 hours ago)

Clustered indexes aren't just about write amplification. They also reduce the reads needed to get the data. Sometimes by quite a bit.

Tostino (5 hours ago)

That's true for seeks into the clustered (primary) index because that index includes all fields, so you don't need to "jump" to the heap to get them.

However, seeking into a secondary index, and then reading a column not included in that index incurs an additional index seek (into the clustered index), which may be somewhat slower than what would happen in a heap-based table.

So there are pros and cons, as usual...

branko_d (20 minutes ago)

Another option would be a good way of placing indexes on a different physical disk. You could use fast, ephemeral storage - like you can for the WAL - without amplifying writes to the same device that is your expensive bottleneck. You could rebuild on data loss.

But it would add complexity to detect out-of-sync indexes and tables.

ComputerGuru (8 hours ago)

You mean tablespaces: https://www.postgresql.org/docs/current/manage-ag-tablespace... ?
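
A minimal sketch of that setup (mount point and table names are hypothetical). Unlike the WAL, though, PostgreSQL treats tablespaces as durable - losing one is data loss it won't recover from on its own, which is the caveat discussed below:

    -- Tablespace backed by a separate (faster) device
    CREATE TABLESPACE fast_ephemeral LOCATION '/mnt/nvme_scratch/pg';

    -- Place only the index there; the table stays on the default tablespace
    CREATE INDEX orders_created_at_idx ON orders (created_at) TABLESPACE fast_ephemeral;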

SigmundA (7 hours ago)

Maybe? I wasn’t under the impression these could be reliably lost or out of sync without risking data loss?

ComputerGuru (7 hours ago)

Wasn't aware you could put a WAL on an unreliable storage system either without risking data loss?

Would be interesting for indexes - say, put them on a RAM drive and rebuild them on restart; if they aren't there, just fall back to table scans.

MSSQL has memory optimized tables that do this sort of thing: https://learn.microsoft.com/en-us/sql/relational-databases/i...

SigmundA (7 hours ago)

If you lose the WAL you lose the data since the last checkpoint, but there's no risk of corruption. The WAL handles missed syncs fine, too - you just lose that window of data.

I don't know if or how Postgres records the transaction number in the index to be able to notice if it's out of date. If it does, I don't know of any solution to "catch up" the index besides recreating it. That would be OK if it were the only issue, but from my experience with out-of-date indexes (libc or ICU updates, where Postgres doesn't know if anything IS broken and just reports that it could be), there's no guarantee you'd even notice, and your app could be running completely broken until you rebuild.

ComputerGuru (7 hours ago)

>If you lose the WAL you lose the data since the last merge but there’s no risk of corruption.

That is not my understanding:

https://www.postgresql.org/docs/current/app-pgresetwal.html

>After running this command on a data directory with corrupted WAL or a corrupted control file, it should be possible to start the server, but bear in mind that the database might contain inconsistent data due to partially-committed transactions. You should immediately dump your data, run initdb, and restore. After restore, check for inconsistencies and repair as needed.

SigmundA (6 hours ago)

I guess that depends on the definition of corrupted. It just describes what to do when the WAL is corrupt, but doesn’t say what conditions are considered corrupt. (We use ZFS so I haven’t run into torn writes.)

ComputerGuru (6 hours ago)

Great article, shows a lot of interesting PostgreSQL features. I have used PostgreSQL and MySQL for decades, and this article showed me that I have barely scratched the surface of what is possible.

sc68cal (15 hours ago)

I've used Postgres for more than a decade, and every time I wade into the docs I feel the same way: I'm barely scratching the surface. It's so immensely powerful.

booi (11 hours ago)

I love what LLMs are doing for me with PG's SQL. I discovered many features by having LLMs write them for me, often spot-on on the first prompt.

Since I know conceptually how RDBMSes work, I can ask very specifically for what I want. Also, asking for feedback on schemas/queries really helped me. I use a lot more of PG's features now!

cies (8 hours ago)

PostgreSQL is like Emacs. It's an operating system disguised as something else.

whalesalad (10 hours ago)

The way PG was originally implemented does have some overlap with operating systems design IMHO. PG internals define and use PG tables in an internal schema to implement core architectural features. The PG code that bootstraps the PG environment is minimal in important ways.

mistrial9 (8 hours ago)

The no-unique-constraint-using-hash-index thing is something I always run into. Am I missing something, or is it just glue code missing to translate that syntax to an exclusion constraint with a hash index, allowing foreign keys to reference the column in the process, and possibly also fixing the ON CONFLICT approach in one go?
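
For anyone hitting the same limitation, the workaround alluded to here looks roughly like this (table and column names are hypothetical). It enforces uniqueness by comparing the actual heap values, so hash collisions are fine, but it can't be referenced by a foreign key and can't act as an arbiter for `ON CONFLICT DO UPDATE`:

    -- "Unique" via an exclusion constraint backed by a hash index
    ALTER TABLE urls
        ADD CONSTRAINT urls_url_no_dupes EXCLUDE USING hash (url WITH =);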

ComputerGuru (7 hours ago)

The most interesting thing for me in this article was the mention of `MERGE` almost in passing at the end.

> I'm not a big fan of using the constraint names in SQL, so to overcome both limitations I'd use MERGE instead:

```
db=# MERGE INTO urls t
     USING (VALUES (1000004, 'https://hakibenita.com')) AS s(id, url)
     ON t.url = s.url
     WHEN MATCHED THEN UPDATE SET id = s.id
     WHEN NOT MATCHED THEN INSERT (id, url) VALUES (s.id, s.url);
MERGE 1
```

I use `insert ... on conflict do update ...` all the time to handle upserts, but it seems like merge may be more powerful and able to work in more scenarios. I hadn't heard of it before.
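
For comparison, the equivalent upsert with `insert ... on conflict` would look something like this (assuming a unique constraint or unique index on urls.url):

    INSERT INTO urls (id, url)
    VALUES (1000004, 'https://hakibenita.com')
    ON CONFLICT (url) DO UPDATE SET id = EXCLUDED.id;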

msdrigg (13 hours ago)

IIRC `MERGE` has been part of SQL for a while, but Postgres opted against adding it for many years because its semantics are inherently non-atomic within Postgres's MVCC model.

https://pganalyze.com/blog/5mins-postgres-15-merge-vs-insert...

This is somewhat a personal preference, but I would just use `INSERT ... ON CONFLICT` and design my data model around it as much as I can. If I absolutely need the more general features of `MERGE` and _can't_ design an alternative using `INSERT ... ON CONFLICT`, then I would take a bit of extra time to ensure I handle `MERGE` edge cases (failures) gracefully.

gshulegaard (13 hours ago)

It's kinda hard to handle MERGE failures gracefully. You generally expect the whole thing to succeed, and the syntax deceptively makes it seem like you can handle all the cases. But because of MVCC, you get these TOCTOU-style spurious constraint violations, yet there's no way to address them on a per-row basis, leading to the entire statement rolling back even for the rows that had no issues. If you are designing for concurrent OLTP workloads against the table, you should probably just avoid MERGE altogether. It's more useful for one-off manual fixups.

kbolino (10 hours ago)

I'm not sure why you'd expect partial updates of a single statement in the first place. I mean, if I run `UPDATE Account SET Status = 'Closed' WHERE LastAccess < NOW() - INTERVAL '90 days';`, I'm not going to be happy if there are 50 records that match, the DB updates 30 successfully, and then errors on 20. Atomic isn't just about rows. Do all the work or none of it.

If you're experiencing things that smell like TOCTOU, first you need to be sure you don't have oddball many-to-one issues going on (i.e., a cardinality violation error), and then you're going to have to increase your transaction isolation level to eliminate non-repeatable reads and phantom reads.

Like, the alternative to a MERGE is writing a few UPDATE statements followed by an INSERT and wrapping the entire batch in a transaction. And you should likely still wrap the whole thing in a transaction. If it breaks, you just replay the whole thing. Re-run the whole job.
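
A rough sketch of that alternative, with hypothetical table names (it has the same concurrency caveats discussed above, hence "re-run the whole job" on failure):

    BEGIN;

    -- Update rows that already exist
    UPDATE targets t
       SET amount = s.amount
      FROM staging s
     WHERE t.id = s.id;

    -- Insert the rest
    INSERT INTO targets (id, amount)
    SELECT s.id, s.amount
      FROM staging s
     WHERE NOT EXISTS (SELECT 1 FROM targets t WHERE t.id = s.id);

    COMMIT;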

da_chicken (3 hours ago)

That reference - my initial gut feeling was that `MERGE` felt more readable, but then I read this paragraph:

> If you want the generality of MERGE, you have to accept the fact that you might get unique constraint violations, when there are concurrent inserts, versus with INSERT ON CONFLICT, the way it's designed with its speculative insertions, guarantees that you either get an INSERT or an UPDATE and that is true even if there are concurrent inserts. You might want to choose INSERT ON CONFLICT if you need the guarantee.

Basically, `MERGE` is susceptible to a concurrent process also writing `INSERT` where that `INSERT` and `MERGE` are unaware of one another, causing a duplicate value to be used.

awesome_dude (10 hours ago)

If you're doing large batch inserts, I've found COPY to be the fastest way, especially if you use the binary data format, so there's less parsing overhead on the Postgres server side.
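
For example, streaming a pre-generated binary file through psql (table and file names are hypothetical; \copy streams from the client, while server-side COPY needs a path readable by the server):

    \copy events (id, payload) FROM 'events.bin' WITH (FORMAT binary)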

philjohn (12 hours ago)

That doesn't work well with conflicts tho iirc

sirfz (11 hours ago)

Not discussed in TFA: BRIN indices. For the given date example, they would be perfect - if you can guarantee that whatever you’re indexing is monotonic, they’re a godsend. Tiny and performant.
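
A sketch for the article's date example (table name is hypothetical; pages_per_range is the main tuning knob, 128 being the default):

    CREATE INDEX events_created_at_brin
        ON events USING brin (created_at)
        WITH (pages_per_range = 128);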

sgarland (7 hours ago)

Minor correction: you don't even have to guarantee the data is monotonic, it just performs best when it mostly is (a good example is handling incoming timestamp data on the server, where messages/records are only generally monotonic and may be processed out of order).

How well do they work for UUIDv7? You'd probably have to tune (increase?) pages_per_range, but even though each indexed value is 16 bytes, you have to consider that the btree index on the same column is also similarly affected (or worse).

ComputerGuru (7 hours ago)

The hash technique for uniqueness isn't supported for indexes because it doesn't handle hash collisions. The author's proposed solution suffers the same problem - values which do not already exist in the table will sometimes be rejected because they have the same hash as something that was already saved.

danielheath (10 hours ago)

This is completely untrue. While the index only stores the hashes, the table itself stores the full value, and Postgres requires both the hash and the full value to match before rejecting the new row. I.e. duplicate hashes are fine.

Diggsey (9 hours ago)

I think a stored generated column allows you to create an index on it directly. Isn't that a better approach?

pksunkara (15 hours ago)

The article explains why they want to avoid this option:

> Starting at version 14, PostgreSQL supports generated columns - these are columns that are automatically populated with an expression when we insert the row. Sounds exactly like what we need but there is a caveat - the result of the expression is materialized - this means additional storage, which is what we were trying to save in the first place!
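
For reference, that option would look something like this (table and expression are hypothetical); the generated value is physically stored in every row, which is the extra storage the article refers to:

    ALTER TABLE users
        ADD COLUMN email_domain text
        GENERATED ALWAYS AS (split_part(email, '@', 2)) STORED;

    CREATE INDEX users_email_domain_idx ON users (email_domain);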

tczMUFlmoNk (15 hours ago)

Thanks, missed that part. I would still be interested in knowing how much additional storage that adds, if the OP is interested in updating the article.

pksunkara (15 hours ago)

>I think a stored generated column allows you to create an index on it directly. Isn't it better approach?

Is it also possible to create an index (maybe a partial index) on expressions?

zenmac (14 hours ago)

That's the first solution (a function-based index). However, it has the drawback of fragility: a seemingly innocent change to the query can lead to it not matching the index's expression anymore. Which is why the article moves on to generated columns.
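
As a sketch (hypothetical names), the expression index can also be made partial - but it only gets used when the query repeats the exact expression, which is the fragility mentioned above:

    -- Partial expression index: only rows matching the predicate are indexed,
    -- and queries must repeat the expression (and imply the predicate) to use it.
    CREATE INDEX users_email_domain_expr_idx
        ON users (split_part(email, '@', 2))
        WHERE email IS NOT NULL;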

masklinn (11 hours ago)

I assume it would increase the storage usage, which they say they are trying to avoid in that example.

rpsw (15 hours ago)

Remarkably fresh content.

It's interesting how both virtual columns and hash indexes work, but they feel like they're bolted on, vs being made part of the whole ecosystem so that they work seamlessly.

OptionOfT (16 hours ago)

Virtual columns are basically one or two minor patches from being fully done. Pg18 brought us most of the way there.

Hash indices have long been crippled; they shipped almost unusable but every few years get a good QoL update. I think automatic unique constraints are the big thing left there.

ComputerGuru (7 hours ago)

I moved into the cloud a few years ago and so I don't get to play with fixed server infrastructure like pgsql as much anymore.

Is the syntax highlighting built into pgsql now or is that some other wrapper that provides that? (it looks really nice).

themafia (13 hours ago)

I generally use pgcli to that end. Works well, and has a few niceties like clearer transaction state, better reconnect, syntax highlighting, and better autocomplete that works in many more cases than plain psql (it can even autocomplete ON clauses when foreign key relations are defined!).

My only gripe with it is its insistence on adding a space after a line break when the query is too long, making copy/paste a pain for long queries.

tuetuopay (12 hours ago)

You can use an IDE like IntelliJ and you get syntax highlighting, code completion etc.

folli (13 hours ago)

Some points from this article that I didn't know before.

lasgawe (14 hours ago)

Is the Hash Index method strictly superior to creating a unique "hash" column and precomputing the hash in the application or query?

pphysch (12 hours ago)

Sibling comment mentions less space, but also your alternative doesn't (naively) handle values that differ but hash to the same thing.

Of course, the hash index also outperforms a unique (btree) index on top of a separately calculated hash - and that's in addition to the alternative's storage overhead, row bloat, and lack of guarantees about the hash unless you expose it to Postgres as a user-defined function AND add a check constraint.

ComputerGuru (7 hours ago)

It'll use less storage space

sirfz (9 hours ago)

>Currently, constraint exclusion is enabled by default only for cases that are often used to implement table partitioning via inheritance trees. Turning it on for all tables imposes extra planning overhead that is quite noticeable on simple queries, and most often will yield no benefit for simple queries.

PG's lack of plan caching strikes again; this sort of thing is not a concern in other DBs that reuse query plans.

SigmundA (14 hours ago)

PG does reuse plans, but only if you prepare a query and run it more than 5 times on that connection. See plan_cache_mode[0] and the PREPARE docs it links to. This works great on simple queries that run all the time.

It sometimes really stinks on some queries since the generic plan can't "see" the parameter values anymore. E.g. if you have an index on (customer_id, item_id) and run a query where `customer_id = $1 AND item_id = ANY($2)` ($2 is an array parameter), the generic query plan doesn't know how many elements are in the array and can decide to do an elaborate plan like a bitmap index scan instead of a nested loop join. I've seen the generic plan flip-flop in a situation like this and have a >100x load difference.

The plan cache is also per-connection, so you still have to plan a query multiple times. This is another reason why consolidating connections in PG is important.

0: https://www.postgresql.org/docs/current/runtime-config-query...
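
To make the PREPARE/plan_cache_mode behavior above concrete (table and parameters are hypothetical):

    PREPARE items_for_customer (bigint, bigint[]) AS
        SELECT *
          FROM order_items
         WHERE customer_id = $1 AND item_id = ANY($2);

    -- After several executions the planner may switch to a cached generic plan
    -- that no longer sees how many elements $2 actually contains.
    EXECUTE items_for_customer(42, ARRAY[1, 2, 3]::bigint[]);

    -- Force planning with the real parameter values on every execution:
    SET plan_cache_mode = force_custom_plan;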

singron (13 hours ago)

Yes, manual query preparation by the client [1] is what you did in MSSQL Server up until v7.0 I believe, which was 1998, when it started doing automatic caching based on statement text. I believe it also cached stored procedure plans before v7.0, which is one reason they were recommended for all application code access to the database back then.

MSSQL Server also does parameter sniffing nowadays and can have multiple plans based on the parameter values. It also has hints to guide or disable sniffing, because many times a generic plan is actually better - again something else PG doesn't have: hints [2].

PG being process-based per connection instead of thread-based makes it much more difficult to share plans between connections, and it also has no plan serialization ability. MSSQL can save plans to XML, so they can be loaded on other servers and "frozen" to use that plan if desired; they can also be loaded into plan inspection tools that way as well [3].

1. https://learn.microsoft.com/en-us/sql/relational-databases/n...

2. https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-tr...

3. https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-tr...