Replication Conflicts, Postgres Governance, pg_crash, MobilityDB | Scaling Postgres 140

Episode 140 November 15, 2020 00:11:27

Hosted By

Creston Jamison

Show Notes

In this episode of Scaling Postgres, we discuss replication conflicts, how Postgres is governed, a pg_crash utility and tracking spatiotemporal data with MobilityDB.

To get the show notes as well as get notified of new episodes, visit: 

https://www.scalingpostgres.com/episodes/140-replication-conflicts-postgres-governance-pg_crash-mobilitydb/


Episode Transcript

[00:00:00] In this episode of Scaling Postgres, we talk about replication conflicts, Postgres governance, pg_crash and MobilityDB. I'm Creston Jamison and this is Scaling Postgres episode 140.

[00:00:21] All right, I hope you, your friends, family and coworkers continue to do well. Our first piece of content is that new versions of PostgreSQL have been released, from 9.5.24 to 13.1. It seems the main reason these updates were published was three security issues mentioned at the top of the news post, but they also include a number of bug fixes and improvements, listed along with links to the release notes. Given the security issues, you should probably upgrade sooner rather than later.

[00:00:58] The next piece of content is "Dealing with streaming replication conflicts in PostgreSQL". This is from Cybertec-postgresql.com and it covers replication conflicts. One type of conflict that can occur when you have replication set up is a snapshot replication conflict. The example they mention is a vacuum process on the primary removing dead tuples; when that action is replayed on the standby, any queries still using those rows have to be canceled. There are also lock replication conflicts, for example when the primary takes an ACCESS EXCLUSIVE lock that interferes with queries that are ongoing on the standby, so those queries have to be canceled for the lock to be replayed. They also mention buffer pin replication conflicts. Those are the three most common causes of query cancellations on a standby, but they cover a few rarer ones as well, such as deadlock, tablespace and database replication conflicts. There is a view on the standby called pg_stat_database_conflicts that you can query to determine what kinds of conflicts are happening on your standby.

In terms of controlling these query cancellations, one parameter you can adjust is max_standby_streaming_delay. This is how long, in milliseconds, the standby is allowed to fall behind the primary while queries hold up WAL replay. They show an example where a query gets canceled with the detail that the user query might have needed to see row versions that must be removed; that is an example of a snapshot replication conflict. max_standby_streaming_delay has a default value of 30 seconds, so the standby has 30 seconds to apply those changes from the primary. By adjusting this value you can decide whether you want the standby to stay in close proximity to the primary or to let it fall a bit further behind so it can handle longer queries. They discuss some use cases. For high availability, where a standby runs very close to the primary so it can take over, you want to keep the streaming delay low. If you just want to offload big queries that may take a long time to run, you probably want a higher max_standby_streaming_delay for that replica or standby. A third scenario they mention is horizontal scaling, where you have one primary handling reads and writes and multiple standbys that you run read-only queries against.
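To make the monitoring and tuning they describe concrete, here is a minimal sketch you could run on a standby; the five-minute value is only a placeholder, not a recommendation:

    -- On the standby: counts of canceled queries, broken down by conflict type.
    SELECT datname, confl_snapshot, confl_lock, confl_bufferpin,
           confl_deadlock, confl_tablespace
    FROM pg_stat_database_conflicts;

    -- Let WAL replay wait longer for running queries before canceling them
    -- (the default is 30 seconds).
    ALTER SYSTEM SET max_standby_streaming_delay = '5min';
    SELECT pg_reload_conf();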
Now, unfortunately, they mention that for this horizontal scaling scenario there really are no good settings, and if you want to do this, perhaps you should look into using synchronous replication with remote_apply for your standbys. That way you can be confident that as soon as data is committed on the primary it is reflected in all of the replicas. The downside is that it is a performance hit, because the primary waits for everything to be in sync across multiple databases before it returns a commit confirmation to the client. So using synchronous commit with remote_apply definitely costs performance, but it does assure that the data is in multiple places at once. They don't cover that much in the post, but it is an option.

They say one way to avoid some of these replication conflicts is to disable hot standby, which essentially means you don't have any queries hitting the standby. Another is avoiding lock conflicts, so basically avoid these locking operations on the primary when you don't want queries to be canceled on the standby. For completeness they also mention that vacuum takes an ACCESS EXCLUSIVE lock, albeit temporarily, to see if there are pages at the end of a table that can be given back to the operating system, and that you may want to set vacuum_truncate to off for some tables to avoid that truncation step. One thing you can do to avoid snapshot conflicts is to turn on hot_standby_feedback, although the disadvantage is that this can cause bloat on the primary, because the standby is essentially telling the primary "don't get rid of this yet, I'm still using it for a query". It's also possible to increase vacuum_defer_cleanup_age, but that can lead to table bloat as well, so there are disadvantages to both of these settings. Their conclusion is that one of the best ways to avoid these replication conflicts is to have dedicated standby servers: one for high availability purposes and one for offloading queries and/or backups. If you're interested in learning more about these features and settings, go ahead and check out this post from Cybertec-postgresql.com.
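To pull those settings together, here is a minimal sketch of where each one is set; the table name and the values are placeholders for illustration only:

    -- On the standby: ask the primary not to vacuum away rows that standby
    -- queries still need (can cause bloat on the primary).
    ALTER SYSTEM SET hot_standby_feedback = on;

    -- On the primary: defer cleanup of dead rows by a number of transactions
    -- (another potential source of bloat).
    ALTER SYSTEM SET vacuum_defer_cleanup_age = 10000;

    -- Per table: skip the truncation step at the end of vacuum, which needs a
    -- brief ACCESS EXCLUSIVE lock that can conflict with standby queries.
    ALTER TABLE my_table SET (vacuum_truncate = off);

    -- Alternatively, synchronous replication with remote_apply keeps standbys
    -- current at a performance cost; it also requires synchronous_standby_names
    -- to be configured on the primary.
    ALTER SYSTEM SET synchronous_commit = 'remote_apply';

    -- Reload so the ALTER SYSTEM settings above take effect.
    SELECT pg_reload_conf();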
[00:05:37] The next piece of content is "Is it time to modernize the processes, structure and governance of the PostgreSQL core team?". This is from PostgreSQL Fund, and in light of the new members that joined the core team, and as a reflection on EDB acquiring 2ndQuadrant, this article goes into a discussion of the governance of PostgreSQL as a whole. It mentions some things I found beneficial, and part of the reason they want to examine this is to look at how Postgres is governed from a company-influence perspective, a diversity perspective, a democracy perspective and a transparency perspective. I felt the article was a good read, and examining how the community is structured and how it could be improved is always beneficial. So I encourage you to read this blog post and think about how the PostgreSQL community might be improved from a governance perspective.

[00:06:36] The next piece of content is "pg_crash: crashing PostgreSQL automatically". This is from Cybertec-postgresql.com and it talks about an extension that purposefully crashes the database. You load it into your shared_preload_libraries and a background worker sends signals to crash the database after a configured delay. Of course they advocate this for QA-type environments; it's basically a way to test your high availability infrastructure and make sure you can handle downtime gracefully. A definitely interesting tool I had not heard of, so if you want to learn more, check out this blog post.

[00:07:14] The next piece of content is "Analyzing GPS trajectories at scale with Postgres, MobilityDB and Citus". This is from Citusdata.com and it covers MobilityDB, a tool that supports temporal and spatiotemporal objects and works with the Postgres database and its spatial extension PostGIS. Essentially, it not only measures distances from one point to another, but also brings points in time into those calculations. They make the point that you can answer the same types of questions with pure PostGIS, for example: a bus is traveling this route, at what point will it see this billboard or that billboard, during what period of time and for how long? But doing it that way requires a very complex query. With MobilityDB added, the queries become much easier to work with and you end up with something much simpler than the string of CTEs presented above. The article goes through their use case of tracking spatiotemporal visibility using MobilityDB, so if you're interested in that, definitely check out this blog post.
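As a rough illustration of the kind of query this enables (not the actual query from the post), here is a minimal sketch; the table, column and sample data are made up, and the function names are as I understand them from the MobilityDB documentation, so check the current docs before relying on them:

    -- MobilityDB builds on PostGIS, adding temporal geometry types.
    CREATE EXTENSION IF NOT EXISTS postgis;
    CREATE EXTENSION IF NOT EXISTS mobilitydb;

    -- A trip is stored as a single temporal point: positions paired with times.
    CREATE TABLE bus_trips (
      trip_id bigint PRIMARY KEY,
      trip    tgeompoint
    );

    INSERT INTO bus_trips VALUES
      (1, tgeompoint '[Point(0 0)@2020-11-15 08:00:00, Point(0 2)@2020-11-15 08:05:00, Point(2 2)@2020-11-15 08:12:00]');

    -- Total distance traveled and the spatial path of the trip.
    SELECT trip_id, length(trip) AS distance, trajectory(trip) AS path
    FROM bus_trips;

    -- Restrict the trip to the portion spent inside an area of interest,
    -- for example the zone from which a billboard is visible.
    SELECT trip_id, atGeometry(trip, ST_MakeEnvelope(0, 1, 3, 3)) AS visible_part
    FROM bus_trips;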
[00:08:37] The next piece of content is "Postgres Constraints". This is from Mohit Care, and it is a very short post covering the different constraints: primary key constraints, foreign key constraints, check constraints, unique constraints, not null constraints and exclusion constraints. For each one he gives a code example of what it is and what it does. If you're interested in that, definitely check out this blog post.

[00:09:01] The next piece of content is actually the Percona YouTube channel. Again, they continue to post videos to their channel from Percona Live Online, and they have a number of PostgreSQL presentations you may want to check out, such as ones on WAL, commit and synchronization optimization opportunities and pg_stat_monitor, as well as others. If you're interested in more video content, you can check out this link.

[00:09:25] The next piece of content is another YouTube video called "Transaction Isolation Levels with PostgreSQL as example". It goes through the different transaction isolation levels using Postgres as an example. It's a very short video at only eight minutes, so if you want to learn more about that, you can check out this video.

[00:09:42] The next piece of content is "Postgres streaming replication on Windows: a quick guide". Most examples are done on some form of Linux, but this one covers how to do it on Windows. If you want to learn how to do that, you can check out this post from Crunchydata.com.

[00:09:59] The next piece of content is a webinar from 2ndQuadrant.com called "Commit without fear: the beauty of CAMO". CAMO refers to "commit at most once". It talks about Postgres synchronous commit, how it works and how to work with it, and compares and contrasts it with 2ndQuadrant's BDR product, their bi-directional replication or multi-master product, which implements commit at most once. If you're interested in learning more about this product, you can check out the webinar by clicking the link here.

[00:10:34] The next piece of content is that the PostgreSQL Person of the Week is Kaarel Moppel. If you're interested in learning more about Kaarel and his contributions to Postgres, definitely check out this blog post. And the last piece of content is a link to the PostgreSQL Weekly News posted on the Postgresql.org website. In addition to listing some product news, this is definitely the place to see all the different patches that have gone into Postgres over the past week. If you're interested in that, definitely check it out.

That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to Scalingpostgres.com where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.

Other Episodes

Episode 175

July 26, 2021 00:13:33

Zheap Status, Sequence Gaps, Efficient Pagination, Set Up Patroni | Scaling Postgres 175

In this episode of Scaling Postgres, we discuss the current status of zheap, the gaps in sequences, how to efficiently paginate and how to...


Episode 288

October 29, 2023 00:19:02

Index Corruption From ICU Collation Change | Scaling Postgres 288

In this episode of Scaling Postgres, we discuss index corruption from a ICU collation change, another caveat for PgBouncer prepared statements, ways to version...


Episode 94

December 16, 2019 00:12:29

State of Postgres, Dumping DBs, Configuration Parameters, Partial WAL | Scaling Postgres 94

In this episode of Scaling Postgres, we discuss the State of Postgres, how best to dump your database, setting configuration parameters and managing partial...
