[00:00:00] Hello. In this episode of Scaling Postgres, we talk about Postgres 13 Release Candidate 1, upgrading at scale, WITH TIES, and HOT updates. I'm Creston Jameson, and this is Scaling Postgres episode 132. All right, I hope you, your family, friends, and coworkers continue to do well. Our first piece of content is "PostgreSQL 13 Release Candidate 1 Released". This is from the PostgreSQL news area. And it looks like they're scheduling September 24, so later this week, as the final release date, assuming there are no issues with the first release candidate. There's a convenient link down here to the release notes for Release Candidate 1. And what I always find interesting to look at is the migration section for a particular version, the things to watch out for. So when you do your upgrade, be sure to check out this area to make sure you don't have any gotchas when you do it. But we're fast approaching the release of version 13.
[00:01:12] The next piece of content is "How we upgraded PostgreSQL at GitLab.com".
And of course, this is from GitLab.com, and they're describing their upgrade from Postgres 9.6 to 11 and the process they used. Now, this is super detailed and has a lot of information, with a lot of links to additional information about the process they went through to do it, as well as their environment. So I highly suggest checking out this post this week. Now, first they cover why they decided to do the upgrade. Well, the big one is that version 9.6 reaches end of life in November 2021, so that's going to be here in a little over a year, but also the additional enhancements and improvements made up through 11. And of course those continue with 12 and soon 13, but they targeted 11. Then they talk about their environment and their architecture. They're using 12 96-core instances to run Postgres, each with 614 GB of RAM. Their environment is set up like this: they're using Rails with Sidekiq for job processing. Those speak to a PgBouncer internal load balancer. Now, I'm not quite sure what this is, whether it's custom software they developed or just HAProxy or something, but, as they say here, it distributes connections to the healthy PgBouncers. Then the PgBouncers talk to the primary PostgreSQL instance, which is maintained through Patroni and the Consul consensus cluster they have set up here. And even these PgBouncers go through a local PgBouncer to actually talk to the primary PostgreSQL server for read and write queries. The read-only queries don't look like they go through a PgBouncer set up the way these are, but they do have a PgBouncer on the local PostgreSQL machines that the read-only queries talk to, at least according to this diagram. They're just using standard streaming replication to keep their replicas in sync, and they send their base backups and WAL archives to Google Cloud Storage. Then they have a, I think they said eight-hour, delayed instance that's maintained just by reading the WAL files from Google Cloud Storage, and they have another disaster recovery archive as well. So that's their environment, the infrastructure they wanted to upgrade. Then they show how busy things are during a given week, and of course they targeted the upgrade for when they're not going to be that busy. Now, they had certain upgrade requirements they wanted to cover. Number one being they didn't want any regressions on PostgreSQL 11, so they did a lot of testing, and they said they developed a custom benchmark to perform extensive regression testing to make sure there were no performance degradations after they upgraded to Postgres 11. They wanted to upgrade all of the database servers during the maintenance window. They were going to be using pg_upgrade, and they were going to keep a 9.6 cluster available as a fallback, which is something I do when I upgrade databases as well. They were going to use the --link capability of pg_upgrade, which means you can't really go back on that instance, so having some extra servers around on the old version serves as a way to roll back. They chose to do the upgrade in a fully automated way, using configuration management software, specifically Ansible playbooks, although they mention they also use Chef for certain purposes, as well as Terraform. And they wanted to keep the database upgrade to only 30 minutes.
And they actually recorded the whole procedure and published it, which is available as a video down below. Now, they go through the different phases of the project: developing the automation in an isolated environment, integrating it into staging, doing multiple tests in staging, and then finally the production upgrade. They talk a little bit about using pg_upgrade and some of the things they took into account, then their procedure for doing the regression testing, and then a little bit about the automation to ensure that everything was as hands-off as possible, meaning you just ran different steps of an Ansible playbook to do the process; there wasn't any copying and pasting of commands manually. So as you can tell, this is super detailed about everything they went through, and there's the actual final video on YouTube of the upgrade process. So I highly encourage you to check out this post from GitLab.com.
[00:05:35] The next post is "PostgreSQL 13: LIMIT ... WITH TIES". This is from 2ndQuadrant.com, and it covers a new clause that's supported when essentially using LIMIT, although it looks like you can't use LIMIT itself with it; you have to use the SQL-standard form, FETCH FIRST ... ROWS, which is the same thing as LIMIT. So FETCH FIRST 10 ROWS is the same thing as LIMIT 10. What WITH TIES does is include any extra rows whose value in the ORDER BY column is the same as the last row being returned. So, for example, here they select all employees, order by salary, and limit it to two records, so you're only going to get two records. But the question is, what is the third person's salary? That's where WITH TIES comes in. Here it's the same query, select all from employees ordered by salary, but again you have to use FETCH FIRST, so it's FETCH FIRST 2 ROWS WITH TIES. And here you can see that the third person, ordering by salary, does indeed have the same salary as the last person, so it gives you three rows, because you want to include any ties. This is a pretty interesting addition that improves the SQL-standard compatibility of Postgres. So if you want to learn more, definitely check out this post.
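As a quick illustration of the kind of query being described, here's a minimal sketch against a made-up employees table (not the post's exact data):

```sql
-- Hypothetical example data.
CREATE TABLE employees (
    id     serial PRIMARY KEY,
    name   text NOT NULL,
    salary numeric NOT NULL
);

INSERT INTO employees (name, salary) VALUES
    ('alice', 9000), ('bob', 8000), ('carol', 8000), ('dave', 7000);

-- Plain LIMIT returns exactly two rows, cutting carol off.
SELECT name, salary FROM employees ORDER BY salary DESC LIMIT 2;

-- WITH TIES (Postgres 13) also returns carol, because her salary ties
-- with the last row that made the cut.
SELECT name, salary FROM employees
ORDER BY salary DESC
FETCH FIRST 2 ROWS WITH TIES;
```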
[00:06:56] The next piece of content is "HOT updates in PostgreSQL for better performance". This is from Cybertec-postgresql.com, and they're talking about HOT, or heap-only tuple, updates. They explain how updates happen in Postgres: basically, a new row version is inserted while the old row is retained, and later that old row is vacuumed up once it's no longer visible. For example, they show a little diagram here where you have an index on ID pointing to each of the records, and if you update row two, it looks for the first empty spot, and that happens to be in the next block of the heap, not the current block, because the current block is full. So the new version gets placed there. Now, doing updates this way, they say, has a lot of advantages: there's no need for an extra storage area where old row versions are kept (like rollback segments in Oracle, say), rollback doesn't have to undo anything and is very fast, and there's no overflow problem with transactions that modify a ton of rows. But it has some disadvantages too. Basically, you now have to vacuum up all of those old rows, tables can become bloated with dead tuples, and every update requires a new index entry to be added even if no indexed attribute is modified. And modifying an index is more expensive than just modifying the heap, because you have to maintain its order. This is where heap-only tuple updates come in. So they build the example from the image mentioned above: they have a table with an ID and a value column, they generate a series of data into it, and they show where the data sits in the different blocks using the ctid. You can see that when they do an update of ID 42, because block zero is full, the new version is placed into block one. So that was not a heap-only tuple update. But then they show an example of how HOT works. The reason it works is that for each block, an array of line pointers is maintained to point to the actual data within that block, or page, so it separates the external reference from the internal implementation. As they say here, a heap-only tuple is a tuple that is not referenced from outside the table block; instead, a forwarding address is stored in the old row version. So, for example, item three here is pointing here, but if you do an update of three and there's free space in this block, the new tuple can be placed here. The old version is kept here, and you have a HOT link, or heap-only tuple link, so that this index entry can still find it, and you don't have to create an entirely new index entry for the tuple created by the update. Now, the important thing to remember is that because it uses this internal representation, this only works if the new and old versions of the row are in the same block. So basically, you need free space in that block for it to work; if the new version gets put into the next block, it can't be a heap-only tuple update. The two advantages of HOT updates are that Postgres doesn't have to modify indexes, because the external address of the tuple remains the same (you can see right here, it's exactly the same; it's just another reference within the block that gets followed to reach the new row version), and that dead tuples can be removed without the need for vacuum, since any backend process can adjust these line pointers.
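If you want to see this for yourself, here's a rough sketch of the kind of ctid experiment being described, with a made-up table name and row count rather than the article's exact numbers:

```sql
-- Fill a table so the early blocks are completely packed (default fillfactor 100).
CREATE TABLE hot_demo (id integer PRIMARY KEY, val text);
INSERT INTO hot_demo
SELECT i, 'original' FROM generate_series(1, 1000) AS i;

-- ctid is (block number, line pointer): where does row 42 live now?
SELECT ctid FROM hot_demo WHERE id = 42;

UPDATE hot_demo SET val = 'changed' WHERE id = 42;

-- With no free space left in row 42's block, the new row version typically
-- lands in a later block, so this update could not be done as a HOT update.
SELECT ctid FROM hot_demo WHERE id = 42;
```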
Now, they mention there are two conditions for getting HOT updates. Number one, there must be enough space in the block containing the row, which is what we mentioned before: you have to have free space in that block. And number two, there must be no index defined on any column whose value is modified. The latter point is pretty easy to check. As for ensuring there's enough space, the way you can do that, exactly as they say here, is by adjusting the fill factor on your tables. Normally blocks are filled to 100%, so the fill factor is 100. But if you have a table that gets a lot of updates, maybe you want to drop that down to 90, 80, maybe as far as 70% potentially. Then they say you can use the pg_stat_user_tables system view to get a sense of how many tuple updates are happening and, out of those, how many are HOT updates, to give you a sense of whether you're doing pretty well at getting HOT updates or not. Then they give another implementation example where they set a fill factor of 70 and use the same procedure to update ID 42, and now you can see the new version is kept in the same block, so it was a HOT update, versus the previous run. Now, they covered fill factor in a previous article that they link to here, which we discussed in a previous episode of Scaling Postgres, and what they found was that an 80 to 90 fill factor was probably the best balance for performance. But if you rely on a lot of heap-only tuple updates, you may want to adjust it down a little further to get more of them. So it's based on your use case and what you're seeing, and you can use the system view here to get a sense of your ratio of heap-only tuple updates to regular updates. But another great post I encourage you to check out from Cybertec-postgresql.com.
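Continuing the sketch from above, here's roughly what adjusting the fill factor and checking the HOT-update ratio might look like (the 70% value is just an example):

```sql
-- Leave ~30% of each block free so new row versions can stay in the same
-- block and qualify as HOT updates.
ALTER TABLE hot_demo SET (fillfactor = 70);
VACUUM (FULL) hot_demo;  -- rewrite the table so the new fillfactor takes effect

UPDATE hot_demo SET val = 'changed again' WHERE id = 42;

-- Compare total updates against HOT updates for the table.
SELECT relname, n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'hot_demo';
```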
[00:12:14] The next piece of content is "7 Best Practice Tips for PostgreSQL Bulk Data Loading", and this is from 2ndQuadrant.com. All of these tips are essentially about not doing work that would normally happen during an insert, so basically do less work per row. Tip number one is to switch your target table to unlogged mode, so basically nothing gets written to the WAL, which gives you great insert speed. However, that can cause a problem if you ultimately do want these tables logged, particularly when a table is being replicated or you have a replica set up, because switching logging back on can generate a lot of replication traffic. So it's something to be cautious of, but it's definitely an option. The second tip is to drop and then recreate the indexes: drop all the indexes, except maybe the primary key, and recreate them once all of the data is loaded. Third is to drop and recreate foreign keys; again, do less work for each insert, since normally the foreign key has to be checked before the insert can succeed. If you remove those foreign key constraints, the load can happen faster. Tip four: disable any triggers you have, as long as you can safely do that; again, less work per insert makes it faster. Tip five: use COPY. This is much faster than trying to insert data using standard SQL statements. If you must insert data using SQL statements, tip six is to use a multi-valued INSERT, so insert multiple rows at a time, and they advise keeping it to around 1,000 rows or less per statement. And the last tip is to run ANALYZE once all of the data has been loaded. So definitely a great set of tips from 2ndQuadrant.com.
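Here's a rough sketch of what those tips might look like in SQL, with a hypothetical big_table, index name, constraint name, and file path standing in for the real thing:

```sql
ALTER TABLE big_table SET UNLOGGED;                  -- tip 1: skip WAL during the load
DROP INDEX IF EXISTS big_table_value_idx;            -- tip 2: drop secondary indexes
ALTER TABLE big_table
    DROP CONSTRAINT IF EXISTS big_table_parent_fkey; -- tip 3: drop foreign keys
ALTER TABLE big_table DISABLE TRIGGER USER;          -- tip 4: disable user triggers

COPY big_table FROM '/tmp/big_table.csv' WITH (FORMAT csv);  -- tip 5: use COPY

-- Tip 6: if you must use INSERT, batch multiple rows per statement, e.g.
-- INSERT INTO big_table (id, value) VALUES (1, 'a'), (2, 'b'), (3, 'c');

ALTER TABLE big_table ENABLE TRIGGER USER;
-- Recreate the indexes and foreign keys here, then:
ALTER TABLE big_table SET LOGGED;   -- note: this rewrites the table into the WAL
ANALYZE big_table;                  -- tip 7: refresh planner statistics
```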
[00:13:59] The next piece of content is "Hidden Gems of PostgreSQL 13". This is from Crunchydata.com, and it highlights some of the features they call little gems. Number one is guarding against a rogue replication slot; I tend to call these orphaned replication slots. These are replication slots that have hung around and are now potentially causing WAL bloat on your primary and potentially causing it to run out of disk space. But there's a new configuration setting, max_slot_wal_keep_size, that lets you define the maximum amount of WAL a slot can retain before it gets invalidated, essentially. So it protects your primary database from running out of disk space. This is a great addition; I definitely agree with this one. The second one is functions for calculating the greatest common divisor and the least common multiple. The third is certificate authentication for the PostgreSQL foreign data wrapper. Fourth is encrypted certificates for connecting from your applications; basically, you can now supply an SSL password so the connection can authenticate when there's a passphrase on the key. Next is support for Unicode normalization. The next one is generating UUIDs without having to install an extension. And finally there's the new PostgreSQL glossary, which they mention here. So definitely great additions, and it sounds like they'll be coming to Postgres 13 as early as this week.
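A few of these are easy to picture in SQL; here's a minimal sketch on Postgres 13 (the 10GB cap is just an example value):

```sql
-- Cap how much WAL a replication slot may retain before it is invalidated,
-- protecting the primary from filling its disk.
ALTER SYSTEM SET max_slot_wal_keep_size = '10GB';
SELECT pg_reload_conf();

-- New numeric functions in core.
SELECT gcd(12, 18), lcm(4, 6);

-- UUID generation without installing an extension.
SELECT gen_random_uuid();
```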
[00:15:28] The next piece of content is "Postgres and the Artificial Intelligence Landscape". This is from Bruce Momjian at Momjian.us, and it was an interesting post that talks about artificial intelligence and some things you can do within Postgres. What's great is that he has links on almost every slide to more detailed discussion, so it's a very rich presentation from the perspective of diving in and getting exposed to machine learning. He actually uses PL/Perl to do machine learning inside PostgreSQL functions. So this is a very interesting post if you want an introduction to machine learning and to doing a little bit of it within PostgreSQL.
[00:16:13] The next piece of content is "Exploring PL/Python: Turn Postgres Table Data into a NumPy Array". NumPy is a library for scientific computing, and one of its core data structures is the array, and you can convert Postgres table data into one. That's exactly what this post does, so that you can do scientific analysis of data that resides within Postgres. If you're interested in that, check out this post from Crunchydata.com.
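As a hypothetical sketch of the idea (assuming the plpython3u extension and NumPy are installed, and using a made-up measurements table with a numeric value column):

```sql
CREATE EXTENSION IF NOT EXISTS plpython3u;

-- Pull a column into a NumPy array inside the database and return its mean.
CREATE OR REPLACE FUNCTION measurements_mean()
RETURNS double precision AS $$
    import numpy as np
    rows = plpy.execute("SELECT value FROM measurements")
    arr = np.array([r["value"] for r in rows], dtype=float)
    return float(arr.mean()) if arr.size else None
$$ LANGUAGE plpython3u;

SELECT measurements_mean();
```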
[00:16:42] The next post talks about Citus and Postgres at any scale. It discusses a presentation that was given at a conference, the YouTube video for which is right here, covering the scale-out solution that is Citus. So if you're interested in that, check out this blog post from Citusdata.com. And then the last piece of content is that the PostgreSQL Person of the Week is Anthony Nowocień. So if you're interested in learning more about Anthony and his contributions to PostgreSQL, definitely check out this blog post. That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to Scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.