Delayed Replication, Materialized View Permissions, Paranoid Postgres, Memory Overcommit | Scaling Postgres 173

Episode 173 July 12, 2021 00:15:27
Scaling Postgres



Hosted By

Creston Jamison

Show Notes

In this episode of Scaling Postgres, we discuss how to delay replication, working with materialized view permissions, paranoid configuration options and addressing memory overcommit.

To get the show notes as well as get notified of new episodes, visit: 

 https://www.scalingpostgres.com/episodes/173-delayed-replication-materialized-view-permissions-paranoid-postgres-memory-overcommit/


Episode Transcript

[00:00:00] In this episode of Scaling Postgres, we talk about delayed replication, materialized view permissions, paranoid Postgres, and memory overcommit. I'm Creston Jamison, and this is Scaling Postgres episode 173.

[00:00:24] All right, I hope you, your friends, family, and coworkers continue to do well. Our first piece of content is "Simulating temporal tables with dblink and replication delay." This is from Cybertec-postgresql.com, and they're talking about simulating the ability to do time travel with tables, basically being able to go back in time and see what state a particular table was in. Now, of course, the easiest way to do this is to just work with insert-only or append-only tables. You'd have to construct your data so that nothing is ever deleted or updated, only inserted, and then you can go back to any point in time using the dates on the table. But apart from that, they did come up with an interesting technique to do this kind of time travel using replication delay as well as dblink.

Now, the feature that applies a delay is called recovery_min_apply_delay. What that does is let you set up a replica so that it doesn't follow the primary database server immediately, but delays itself by however long you choose. It could be an hour, two hours, 24 hours. This is a great feature for recovering data if you have a replica that is a couple of hours delayed, because even if you have a primary database with multiple replicas, what happens if you accidentally drop a table or accidentally delete some data with a DELETE statement? Well, if you have a large database, it's going to take a while to do a full restore and point-in-time recovery up to that point. But if you had a replica that was a little bit delayed, perhaps you could go into that delayed copy and pull out the data you needed to reinsert it. So it's a very interesting feature for specific use cases, but here they're using it because that's how they're doing the time travel, or time delay, since they're going to be contacting that replica.

And how they're contacting it is using dblink. Now, this is something that existed for one Postgres server to talk to another Postgres server before the advent of foreign data wrappers. So normally you would use a foreign data wrapper, but because of what they set up here, apparently that didn't really work, so they fell back to the older dblink to do it. They do describe how they set it up and the relative performance of it. But mostly I wanted to highlight the recovery_min_apply_delay feature, as well as mention that dblink still exists in addition to the newer foreign data wrappers. So if you're interested in learning more about some of these, you can check out this blog post.
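This isn't the exact setup from the Cybertec post, but a minimal sketch of the two pieces mentioned here: setting recovery_min_apply_delay = '2h' in the standby's postgresql.conf (PostgreSQL 12 and later) holds WAL replay back by two hours, and dblink can then be used to query that delayed copy. The hostname, credentials, table, and column list below are placeholders.

    -- Run from the primary (or any client): query the delayed replica via dblink.
    CREATE EXTENSION IF NOT EXISTS dblink;

    SELECT *
      FROM dblink('host=delayed-replica.example.com dbname=appdb user=app',
                  'SELECT id, status FROM orders WHERE id = 42')
           AS t(id bigint, status text);

Because dblink returns a generic record set, the AS clause has to spell out the column names and types you expect back from the remote query.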
[00:03:02] The next piece of content is "Postgres permissions and materialized views." This is from Rustprooflabs.com, and they're talking about how, when a materialized view needs to be refreshed, it has to be refreshed by its owner or a superuser. Sometimes this can be a bit inconvenient, because maybe the user that created it is not present when it needs to be refreshed and you don't have a superuser around to do it. Basically, you want general users to be able to refresh it. The way you do that is by assigning ownership of the view to a dedicated role and granting that role to other users, and this blog post walks through how to do it.

So first he creates a superuser role using WITH LOGIN SUPERUSER and creates a materialized view with it. Then he creates a general user role and grants SELECT on that materialized view to the user, and now that user can query it just fine, no problem. But when he tries to refresh the materialized view as that user, it fails with an error saying he must be the owner of the materialized view. So what he does in this example is reassign ownership from the superuser to another role that he creates. He creates the role that will own the materialized view with NOLOGIN, meaning it can't log in; it's only a role. He alters the materialized view to assign ownership to that role, and then he grants that role to his user role. And when he sets his role accordingly and refreshes the materialized view, it works. So if you need to let certain users refresh materialized views, maybe you want to check out this blog post.
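Here's a minimal sketch of that pattern, using made-up names (mv_owner, app_user, mv_report) rather than the ones in the post:

    -- A no-login role that will own the materialized view
    CREATE ROLE mv_owner NOLOGIN;

    -- Hand ownership of the materialized view to that role
    ALTER MATERIALIZED VIEW mv_report OWNER TO mv_owner;

    -- Let a regular login role act as the owner
    GRANT mv_owner TO app_user;

    -- As app_user: switch into the owning role, refresh, then switch back
    SET ROLE mv_owner;
    REFRESH MATERIALIZED VIEW mv_report;
    RESET ROLE;

Any login role granted membership in mv_owner can refresh the view this way, without needing superuser rights.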
[00:04:42] The next piece of content is "Paranoid SQL execution on Postgres." This is from Ardentperf.com, and he's talking about a few things you may want to set up on your Postgres database system to protect it from things like rogue queries or things excessively locking it. I'm not going to go through the full list here, but I'll just point out some highlights. I think the most important ones are the second and third that he lists. The most important one, in my opinion, is probably lock_timeout, because whenever you're doing any sort of DDL statement, you want to make sure that while it's waiting to acquire a lock, it's not going to back up other queries, at least not for an excessive amount of time. I've seen lock timeouts from two to five seconds; it depends on how active your database system is. You don't have to have this set all the time, but when you're ready to do DDL changes, it's usually a good practice to have it.

The other is to have a statement_timeout. Now, I probably wouldn't put this on the database as a whole, or if you do, it needs to be incredibly long, because it could cancel, for example, backup jobs or vacuum jobs. But applying it to the roles that are being used is a good practice, or in particular sessions maybe you want to adjust the statement timeout. So this is another very good one for keeping your database protected. (A short sketch of these two timeout settings appears after the next piece of content.) They also mention things like connect timeouts, which can help, and a number of other things I'll let you review if you're interested in protecting your database. But in my opinion, it's basically watching out for those locks. That's the thing that's probably going to do you in more than anything else, which is why I think the lock_timeout is so important. And he does mention below: watch what gets locked when you're doing DDL statements. Depending on the version, they've been making improvements so that you can change more things with DDL statements without creating long-running locks. But that's something you need to be aware of in order to protect your database: what are you changing, and how is the database being locked? If you want to learn more, go ahead and check out this blog post.

[00:06:44] The next piece of content is "PostgreSQL, memory and the cloud." This is from Sosna.de, and they're talking about memory overcommit: the scenario where modern Linux systems promise applications more memory than they actually have. That puts them in a position where, if applications actually do use all of that memory, the system may get into a low-memory state and then has to take certain actions to deal with it. For example, it could panic or halt the system, it could freeze the requesting program, or it could terminate a program, using an out-of-memory killer to pick specific processes. And this is the thing you hear a lot with Postgres: you don't want the OOM killer running and terminating Postgres processes.

Now, they give some examples: on a desktop operating system this behavior makes sense, because if something requests too much memory you're okay with killing that particular application, or on an application server maybe it's just one process that has to be killed because it's taking up too many resources. But with a database it can cause more problems. On Linux, what they recommend is setting vm.overcommit_memory to 2, and you may also want to adjust your vm.overcommit_ratio as well. So this is definitely the recommendation, but I have seen issues when there is no swap space, and a lot of cloud vendors don't provide swap space on their disks; I have seen some problems when setting overcommit_memory to 2 in the case where there's no swap file. So you do have to be a little bit cautious when setting this if you're operating in a cloud environment with no swap file.

Now, this post talks about the cloud environment and how some of these settings may not be set. With overcommit set to 2, the kernel basically should not overcommit memory. So in normal operation things consume memory, and when no memory is left, further memory allocations should fail; Postgres then rolls back that one query, so it shouldn't bring down the whole system. Whereas some cloud environments may not have this setting set, and an out-of-memory killer is triggered. And because some processes use shared memory, that means bringing the whole system down, so it basically results in a restart of the whole PostgreSQL instance. Definitely not ideal either. So it's something to keep in mind when you're configuring your Postgres systems. I would probably set overcommit to 2, but then adjust the overcommit ratio to make sure that you're getting good memory allocation. But if you want to learn more, go ahead and check out this post.
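As a rough sketch of the vm.overcommit settings discussed in the memory post (the values here are illustrative, not the post's recommendation, and should be tuned for your RAM, swap, and shared_buffers):

    # /etc/sysctl.d/30-postgresql.conf
    # Disable heuristic overcommit: allocations beyond swap + ratio% of RAM fail
    # instead of succeeding and later waking the OOM killer.
    vm.overcommit_memory = 2
    # Percentage of RAM (plus all swap) that the kernel will commit; be careful
    # on hosts with little or no swap, as discussed above.
    vm.overcommit_ratio = 80

    # Apply without a reboot:
    #   sudo sysctl -p /etc/sysctl.d/30-postgresql.conf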
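And circling back to the lock_timeout and statement_timeout settings from the Ardent Performance post above, here's a minimal sketch of how they might be applied; the role name, table, and values are just examples, not the post's specific recommendations:

    -- In the session doing DDL: give up quickly on lock acquisition so queries
    -- queued behind the DDL don't pile up.
    SET lock_timeout = '3s';
    ALTER TABLE orders ADD COLUMN note text;

    -- Cap runaway queries per application role rather than database-wide,
    -- so long-running maintenance jobs (backups, vacuum) are unaffected.
    ALTER ROLE app_user SET statement_timeout = '30s';

    -- Or adjust it just for the current session:
    SET statement_timeout = '30s';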
[00:09:25] The next piece of content is "PostgreSQL partitioning in Django." This is from Pganalyze.com, and they're talking about partitioning the tables in your database for use with Django. At the top they show a test, described further down, where they saw performance improvements once the data had been partitioned. Now, this is a greater improvement than I've seen in my implementations, but it does stand to reason that you should see some performance benefit. I think the bigger significance is that maintenance is much easier when you're partitioning. If you have a billion-plus-row table, it's much better to break that down into multiple partitions so that they're much easier to vacuum. And if you ever have to delete data, it's much easier to remove older data by just dropping a partition. Now, they say that when you get to a million or more records you may want to consider partitioning. I think you could push that out further; it depends on how much data is in each row, but I think you could easily get to the 500 million to one billion record mark before you partition.

The rest of this post talks about list partitioning, range partitioning, and hash partitioning, how each works, and how to get it set up. Basically, you create a parent table that contains no data itself, and each of the partitions is what actually contains the data, split up based on how you have the partitioning defined. Here they just did it by a simple date range, and they demonstrate the code to do that. They then used a faker library to create a bunch of data, and that's how they ran the tests shown in the graph at the top. And for those using Django, they suggest some additional libraries that help Django work better with partitioned tables. So if you want to learn more about how to use partitioning with Django, definitely check out this blog post.
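The post demonstrates its own Django-oriented setup; purely as a generic illustration of the parent-plus-partitions idea with declarative range partitioning (table and column names are made up):

    -- Parent table: holds no rows itself, only defines the partitioning scheme
    CREATE TABLE measurements (
        device_id   int         NOT NULL,
        recorded_at timestamptz NOT NULL,
        value       numeric
    ) PARTITION BY RANGE (recorded_at);

    -- Monthly partitions are where the rows actually live
    CREATE TABLE measurements_2021_07 PARTITION OF measurements
        FOR VALUES FROM ('2021-07-01') TO ('2021-08-01');
    CREATE TABLE measurements_2021_08 PARTITION OF measurements
        FOR VALUES FROM ('2021-08-01') TO ('2021-09-01');

    -- Removing old data becomes a quick metadata operation, not a bulk DELETE
    DROP TABLE measurements_2021_07;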
[00:11:17] The next piece of content is "Announcing credcheck extension to enforce username and password checks in PostgreSQL." This is from Migops.com. Now, PostgreSQL has a check_password_hook, and they have leveraged it to develop an extension they're calling credcheck, which lets you specify complexity standards for usernames and passwords. So, for example, an overly simple username would be rejected while a sufficiently complex one would be allowed, and a password that is just "secret" would be disallowed, whereas a more complex one would be accepted. And they have all of these different settings for defining the complexity requirements in this extension. Now, this new extension does have some prerequisites: you need to be using at least PostgreSQL 10, and you need the development package library; in the case of Ubuntu, that's the postgresql-server-dev package for your version, and you do need to compile it from source, which they describe how to do here. Once you've done that, you can run the CREATE EXTENSION command, add credcheck to your shared_preload_libraries, and then restart your database system. Then you enforce those standards by updating the postgresql.conf file, or you can use the ALTER SYSTEM command, which updates the postgresql.auto.conf file. So this is a very interesting extension for enforcing username and password standards on your database system. If you're interested in that, check out this blog post.

[00:12:53] The next piece of content is "The next generation of Kubernetes native Postgres." This is from Crunchydata.com, and this post announces that they have released Postgres Operator version 5. They say this new version really embraces a declarative approach as opposed to an imperative one: basically, you define what you want to have happen and it does it for you, as opposed to you having to explicitly set up all the different parameters. Related to that, they have a second post, "Getting started with PGO, Postgres Operator 5.0," where they show how easy it is to get up and running with Kubernetes and a Postgres cluster. So if you're interested in that, definitely check out these blog posts.

[00:13:38] The next piece of content is "Disaster Recovery Strategies for PostgreSQL Deployments on Kubernetes, Part Two." This is from B-peng.blogspot.com. Now, the previous post talked about using the Crunchy PostgreSQL operator, but this new one covers the Zalando Postgres operator. So if you're looking to learn how to use Postgres on Kubernetes, maybe you want to check out this post as well.

[00:14:05] The next piece of content is "A quick sanity testing of Pgpool-II on ARM64." This is from Amitdkhan-pg.blogspot.com. As more software is being developed for ARM, this post explores running Pgpool-II on it. So if you're interested in his results, you can check out this blog post.

[00:14:26] The last piece of content: the PostgreSQL Person of the Week is Daniel Westermann. So if you're interested in learning more about Daniel and his contributions to Postgres, definitely check out this blog post.

[00:14:38] And a final piece of content I will mention is that we had our third episode of the Rubber Duck Dev Show last Wednesday, and the topic was the people side of software project management. I'll include the link down below if you want to check out that episode. In the next episode, on Wednesday at 8:00 PM Eastern Standard Time, we'll be talking about pair programming, so feel free to join us for our live show.

[00:15:03] That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to Scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.

Other Episodes

Episode 29

September 10, 2018 00:13:27

Parallel Indexing, SQL vs. ORM, Logical Replication Upgrades | Scaling Postgres 29

In this episode of Scaling Postgres, we review articles covering parallel indexing, SQL vs. ORM, logical replication upgrades and development DBs. To get the...


Episode 155

March 08, 2021 00:17:16

Citus Open Source, Time Series Performance, Subscripting Updates, target_session_attrs | Scaling Postgres 155

In this episode of Scaling Postgres, we discuss Citus 10 open source, time series performance in native Postgres, using subscripting for updates and new...


Episode 206

March 13, 2022 00:17:51

100x Optimizations, Auditing, Document Schema Designs, Checksums | Scaling Postgres 206

In this episode of Scaling Postgres, we discuss 100x optimizations, how to audit table activity, designing schemas and how to set up table checksums....
