Postgres 15 Beta 2, Concerning Locks, Vacuum Tuning, Transaction Anomalies | Scaling Postgres 222

Episode 222 July 04, 2022 00:17:47

Hosted By

Creston Jameson

Show Notes

In this episode of Scaling Postgres, we discuss the Postgres 15 Beta 2 release, when locks become concerning, tuning autovacuum and supplementing it with a custom scheduled vacuum job, and transaction anomalies with SELECT FOR UPDATE.

To get the show notes as well as get notified of new episodes, visit:

https://www.scalingpostgres.com/episodes/222-postgres-15-beta-2-concerning-locks-vacuum-tuning-transaction-anomalies/


Episode Transcript

[00:00:00] In this episode of Scaling Postgres, we talk about Postgres 15 Beta 2, concerning locks, vacuum tuning and transaction anomalies. I'm Creston Jameson, and this is Scaling Postgres, episode 222.

[00:00:20] Alright, I hope you, your friends, family and coworkers continue to do well. Our first piece of content is "PostgreSQL 15 Beta 2 Released". This is from postgresql.org, and they've announced that Beta 2 has been released, which seems a bit early given that Postgres 15 isn't due until the fall. But here is the second beta, and the announcement mentions some of the changes since the first beta as well. So if you want to learn more about that, check out this blog post.

[00:00:54] The next piece of content is "Postgres Locking: When is it Concerning?" This is from crunchydata.com, and they're talking about monitoring your database, mentioning tools like pganalyze and pgMonitor. The question is: when should you start worrying about locks? Locks are a fundamental thing a database does, because you have a single source of data and potentially multiple consumers of that data that need access to it to update it, change it, delete it, et cetera. They say the locks you take when changing data fall into two big buckets. One is shared locks, where multiple sessions can hold a shared lock on an object without interfering with one another. The other is exclusive locks, where a single session needs exclusive access to a particular lock. They also mention that once a lock is taken, it's held until the transaction is committed, so that's something to keep in mind. They say the best way to monitor locks is the pg_locks view, although in terms of spotting issues I agree with them that pg_stat_activity is a great place to look as well, because you see the active statements and the wait event columns show what type of lock a running transaction is waiting on. The pg_locks view can be difficult to interpret, although they mention the key columns: granted, a boolean telling you whether the lock has been granted yet; mode, which is essentially the type of lock being taken; and the PID, the process ID associated with that particular lock. Those are the important bits of information if you're going to look at the pg_locks view. Like they mention, locks happen in the system all the time, so you can't just look at, say, the total count of locks to say there's a problem. But one way you may be clued in that something is going on is to look at locks that haven't been granted: query pg_locks where not granted and get a count. If you see that number increasing, something is presumably backing up, because if one session has acquired an exclusive lock and another process is waiting for a lock on the same object, it's essentially sitting in a queue, a lock queue, and "not granted" is essentially that lock queue. So an increasing number of transactions waiting in the lock queue indicates a problem. If you want to investigate further, you can also look in pg_stat_activity for a Lock wait event type, and you can use the pg_blocking_pids() function to see which processes are blocking a given one. Why is it waiting? What's it waiting on? Do you want to kill those processes, or at least get insight into what the issue is? He also mentions that there are non-blocking lock waits: these are lightweight locks that affect data reading, such as waiting to read something from disk or to access buffer content. These can happen on occasion if your system is really active, but if you notice consistent issues, then maybe it's time to upgrade your system. Lastly, he covers advisory locks, which are another type of locking mechanism. Postgres doesn't take these itself; they're functions an application developer can use to create their own locks for coordinating access to data. And he does note that if you're using these with PgBouncer in transaction pooling mode, you definitely don't want the session-based pg_advisory_lock functions; you want the transaction-scoped pg_advisory_xact_lock functions instead. This is a great post explaining how to determine when locks are concerning, along with some queries to take a look at.
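To make that monitoring advice concrete, here is a minimal sketch of the kind of queries being described (my own illustrative examples, not the exact queries from the Crunchy Data post):

  -- count lock requests still sitting in the lock queue
  SELECT count(*) FROM pg_locks WHERE NOT granted;

  -- show waiting sessions, what they are running, and which PIDs are blocking them
  SELECT pid, wait_event_type, wait_event, state, query,
         pg_blocking_pids(pid) AS blocked_by
  FROM pg_stat_activity
  WHERE wait_event_type = 'Lock';

If the first count keeps climbing, or the blocked_by arrays from the second query keep pointing at the same long-running PID, that is usually the point where locking becomes concerning.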
The next piece of content is "Importance of PostgreSQL Vacuum Tuning and Custom Scheduled Vacuum Job". This is from percona.com, and they're talking about how to deal with autovacuum issues, which tend to become a problem the larger your database gets. Some of the issues they mention: first, tables become candidates for autovacuum during peak hours, because the trigger for vacuuming a table is basically a threshold of dead, unvacuumed tuples. So if you have a very active database that's changing a lot of data, that's what triggers autovacuum, and now it's happening during peak hours. The second problem is starving tables: your active tables get all the vacuums, and your less active tables essentially haven't been vacuumed in a while, which can lead to its own problems. The third is that there's no way to throttle autovacuum workers dynamically. They do mention that you can adjust the cost limit and reload the configuration, but an actively running worker doesn't pick up the change; a long-running worker that takes hours will run to completion without taking that change into account, and only new workers use the new configuration. Fourth, attempts by DBAs to tune the parameters often backfire; basically, configuration mistakes. One thing they mention is creating a high number of workers, which many people think will vacuum things faster. It won't; it may actually slow things down. Fifth is autovacuum running during the active time window, which defeats its own purpose; again, vacuum being triggered during active periods is not really what you want. And sixth, starved tables end up triggering the aggressive wraparound-prevention autovacuum, because the longer you go without vacuuming a table, the more you run that risk with its transaction IDs.

Then they go into ways to tune autovacuum, and the advice here is really great; this is the process you should follow. The two most important parameters are autovacuum_vacuum_cost_limit, which is often inherited from vacuum_cost_limit, because this determines how much work autovacuum can do, and autovacuum_naptime, which controls how long to wait before kicking off the next round of autovacuum workers; you can reduce that to get autovacuum going faster. Now, one thing to be aware of if you try to increase the number of workers: that cost limit is a single shared pool. So if you increase the workers to 5, 6 or 10, they all share the same cost limit, and the more workers you have, the slower each individual worker becomes, whereas with fewer workers each individual worker vacuums faster, because they're drawing on a common limit. And speaking of DBAs making mistakes, he says one common mistake he sees across many installations is autovacuum_max_workers set to a very high value, like 15. I haven't seen that myself, but it's going to make each worker really slow at doing vacuums. They say it's fine to keep the default value of three; I've kept it at three, or brought it up to four, five or six, never more than six, based on how many tables we want to vacuum in parallel. One recommendation they make is that, as you choose your autovacuum settings, keep in mind that you can also change them per table. This is useful if you have two or three highly active tables that need to be vacuumed more than the rest: you can change those tables' settings to give them a more aggressive vacuum. So keep that in mind; it's a good practice. The second thing, which I had not heard of before and really like, is to supplement what autovacuum is doing by adding your own vacuum job. By supplementing it, you can schedule a targeted vacuum during off hours, and basically their command finds the 100 tables with the oldest transaction IDs that aren't already being vacuumed. So it targets those starved tables and simply runs a VACUUM FREEZE on them, and if the job runs nightly, you may get a different population of tables vacuumed every evening. The other thing they mention, which is great, is that this job uses a slightly different measure as its trigger, namely relfrozenxid. So this is great advice and great recommendations, and I definitely encourage you to check out this blog post.
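As a rough sketch of that kind of supplemental job (my own illustration, not the exact query from the Percona article; the orders table in the second statement is just an invented example), you could pick the tables with the oldest frozen transaction IDs and feed them to VACUUM FREEZE during off hours, and set per-table overrides for your hottest tables:

  -- the 100 tables with the oldest relfrozenxid (you may want to exclude system schemas)
  SELECT relname, age(relfrozenxid) AS xid_age
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY age(relfrozenxid) DESC
  LIMIT 100;

  -- a per-table override making autovacuum more aggressive for one hot table
  ALTER TABLE orders SET (autovacuum_vacuum_scale_factor = 0.01,
                          autovacuum_vacuum_cost_limit = 2000);

A cron or pg_cron job could loop over the first result each night and run VACUUM (FREEZE) on each table, skipping anything autovacuum is already working on.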
[00:09:14] The next piece of content is "Transaction anomalies with SELECT FOR UPDATE". This is from cybertec-postgresql.com, and they're talking about an anomaly that can happen with SELECT FOR UPDATE. Basically, they create a bill table that has a total on it, and an item table that references the bill table with the individual item amounts. They put one row in the bill table with a total of 60, and items in the item table that add up to that 60. Now, when you need to add a new item to the bill, you have to insert the new item into the item table and then update the total to include it, and they're using a transaction to make sure the total isn't updated without the item, or the item inserted without updating the total. When they use a query that joins items and totals together, you can see the original state before the new item is added, and once the $40 item is added, the total is incremented and the new item shows up. But you can get a transaction anomaly here if you use SELECT FOR UPDATE. Essentially, that takes a lock on the bill row when you want to do an update. The use case they give is that maybe you want to delete bills that don't have any items connected to them; that's the particular scenario they have in mind. Normally, running this query gives you the same results as the previous one. But if you run it after the inserting transaction has performed its update of the total, and before it has committed, you can actually get a result where the $40 item is not visible but the total has already been updated, which is a bizarre result, and they explain how it can happen. Basically, the query runs and does all of its processing of the items according to the explain plan, and the last thing it needs to do is lock the bill row. So it waits for that lock; it's waiting for the bill to be updated to the new value of $100, and Postgres, quote, "proceeds with its operation using the updated version of the row". So it's not using the original version of the row with the $60 total; it waits behind the lock, sees the new version of the row, and then carries on. But remember, the items were processed early in the query, when the new item wasn't visible yet, so the query doesn't see it. This is definitely a bit of a head-scratcher, so I encourage you to read the post to get all the detail behind it. The key way to resolve this type of anomaly is to use a different transaction isolation level. You can use serializable, of course, because that makes these concurrency anomalies go away altogether. But in this particular case you can use repeatable read, because read committed only means that anything you read has to have been committed, whereas repeatable read assures the same snapshot for the whole transaction, as they say here. So even though the total has been updated, the query still only uses the original version of the bill row. They show how to do this in a transaction: instead of just BEGIN, you do BEGIN ISOLATION LEVEL REPEATABLE READ, and then run your SELECT FOR UPDATE query. The caveat is that you may run into a serialization error, "could not serialize access due to concurrent update", which basically means you need to roll back and repeat the transaction. So definitely interesting; if you want to learn all the details, I encourage you to read this blog post.
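Here is a minimal sketch of that isolation-level fix, assuming a bill table with a total column and an item table with bill_id and amount columns (the column names are my guesses, not necessarily the ones used in the CYBERTEC post):

  BEGIN ISOLATION LEVEL REPEATABLE READ;

  SELECT bill.id, bill.total, item.amount
  FROM bill
  LEFT JOIN item ON item.bill_id = bill.id
  FOR UPDATE OF bill;

  -- decide here whether the bill can be deleted, then:
  COMMIT;

If a concurrent transaction updates the bill first, this transaction fails with "could not serialize access due to concurrent update" and the application simply retries it, which is the trade-off the post describes.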
[00:12:49] The next piece of content is "ctid and other PostgreSQL table internals". This is from Andreas Scherbaum at andreas.scherbaum.la, and he's talking about the hidden system columns that exist on essentially every table. Every table has a tableoid, which is the object ID of the table. Now, you may be thinking, isn't that the same for every row? That's true except in the case of partitioned tables, where it shows the child table the row actually came from. The xmin and xmax columns indicate which transactions can see which rows, so they handle multiversion concurrency control. There are cmin and cmax, which reference the commands within each transaction; so if one transaction has, say, five insert statements, cmin and cmax distinguish those five commands within the transaction. There's ctid, which references the physical location of the row in the table; keep in mind that this can change, so primary keys are really the best way to reference rows, a point he makes along with the documentation. And then there are OIDs, which aren't really used anymore but were in previous versions of Postgres. If you want to learn more about these internals, this is a great post that explains all of them and shows examples of how to use them, so I definitely encourage you to check out this blog post as well.
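As a small illustration of those hidden columns (using a throwaway table of my own, not an example from the article), you can select them explicitly by name:

  CREATE TABLE t (id int PRIMARY KEY, val text);
  INSERT INTO t VALUES (1, 'a'), (2, 'b');

  -- the system columns never appear in SELECT *, but can be selected by name
  SELECT tableoid, xmin, xmax, cmin, cmax, ctid, id, val FROM t;

Updating a row gives it a new ctid (and a new xmin), which is why both the post and the documentation warn against using ctid as a long-term row identifier.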
[00:14:08] The next piece of content is "Introducing publication row filters". This is from postgresql.fastware.com, and they're talking about the ability in Postgres 15 to filter which rows a publisher publishes. You can create a publication and define a WHERE clause on it, and the subscriber only receives the rows that match. They have a few examples: you could filter on a price being less than some amount, or on non-managers in the sales department, and only those rows get logically replicated to the subscriber database. You can also create independent publications, so maybe certain subscribers want to subscribe to the USA data, others to the UK data or the France data, based on different publications set up with different WHERE clauses. And I really hadn't thought about this, but it's quite interesting: updates are handled differently on the publisher depending on what the subscriber needs to receive. If neither the old row nor the new row matches the filter, nothing gets replicated; that's clear. But when a row didn't match the filter and an update on the publisher makes it match, it has to be replicated as an INSERT on the subscriber, not an UPDATE, because the subscriber never had the data. Whereas if the old row matched the filter and the update makes it no longer match, that row has to be deleted on the subscriber. And an update is only replicated as an UPDATE when both the old row and the new row match the filter. They also mention that it's possible to combine multiple row filters, and when you do, they are combined with OR. Definitely an interesting feature coming for logical replication in Postgres 15.
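Here is a minimal sketch of what that looks like in Postgres 15 (a toy example of my own, not one taken from the Fujitsu post; the table, column and connection details are invented):

  -- on the publisher: only replicate rows for one region
  CREATE PUBLICATION pub_usa FOR TABLE customers WHERE (region = 'USA');

  -- on the subscriber
  CREATE SUBSCRIPTION sub_usa
    CONNECTION 'host=publisher dbname=app user=repl'
    PUBLICATION pub_usa;

The point is simply that the WHERE clause lives on the publication, and each subscriber chooses which publications it wants.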
[00:15:57] The next piece of content is "Introducing PostgreSQL interface for Spanner: availability at scale with the interface you know". Basically, Google now offers a PostgreSQL-compatible interface for their product Spanner, which is a distributed SQL database, so you can talk to it as if it were PostgreSQL. If you want to learn more, definitely check out this post.

[00:16:20] The next piece of content is "SQL isn't that hard when you don't approach it from an object-oriented point of view". This is from timescale.com, and it's an interview they did with John Pruitt, so if you want to learn about his perspective, you can definitely check out this blog post.

[00:16:36] The next piece of content is "SQL isn't that hard if you have the freedom to experiment". This is another interview on timescale.com, this time with Haki Benita, so if you want to learn more about his perspective on SQL, you can definitely check out this blog post.

[00:16:50] The next piece of content: the PostgreSQL Person of the Week is Mladen Marinović. If you want to learn more about Mladen and his contributions to Postgres, definitely check out this blog post.

[00:17:01] And the last piece of content: we had another episode of the Rubber Duck Dev Show this past Wednesday evening. This one was on Ruby fibers, which are lighter weight than threads and give you a language-level way to handle concurrency. So if you're interested in this type of long-form developer content, we welcome you to check out our show.

[00:17:22] That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.

Other Episodes

Episode 115

May 25, 2020 00:12:50

PGCon Online, Performance Tips, Tracking Counts, Essential Monitoring | Scaling Postgres 115

In this episode of Scaling Postgres, we discuss PGCon going online, application performance tips, ways to track counts and essential areas to monitor. To...


Episode 326

July 28, 2024 00:20:28

Faster Index Creation | Scaling Postgres 326

In this episode of Scaling Postgres, we talk about speeding up index creation, extensions to track wait events, a row pattern recognition feature and...


Episode 217

May 29, 2022 00:16:33

Schema Change Mistakes, Canceling Statements, pg_rman, Pedantry Removal | Scaling Postgres 217

In this episode of Scaling Postgres, we discuss mistakes you can make when doing schema changes, how best to cancel statements, looking into pg_rman,...
