Episode Transcript
[00:00:00] Speaker A: Hello. In this episode of Scaling Postgres, we talk about Explain Analyze, buffers, healthier Postgres 2021 review, and high availability aspirations. I'm Kristen Jamison, and this is scaling postgres episode 197.
Happy New Year. I hope you had a great holiday. Today's episode represents a review of all the content since the last episode, so I hope you enjoy it. Our first piece of content is EXPLAIN ANALYZE needs BUFFERS to improve the Postgres query optimization process. This is from Postgres.ai, and he's talking about the importance of adding buffers when you're analyzing queries. So normally you would run EXPLAIN ANALYZE to actually run the query and give you query timings and the output to understand how that query was executed. Well, he's strongly advocating for adding BUFFERS as well, to give you an indication of the I/O that's involved in processing that query. Now, he does say this comes with a small impact to the timing because it has to do that extra measurement, but he says the impact is still worth it. And he advocates not just looking at the timing and how fast you can get the query, but also focusing on the buffers, because it's also a good way to compare between different environments. So if you're working in a staging environment or a restored environment, even if you have the same data set, buffers seem to be a much more consistent measure between different environments compared to just using timing. So the first thing he's covering is that you can see the I/O work that's being done, or how many buffers have to be touched when running a particular query, when you add BUFFERS to it. And a lot of the speed differences you see are actually a result of doing less I/O work, so this is another point in favor of using BUFFERS. In many ways, using BUFFERS allows you to see why something got faster: when, say, a query starts using an index-only scan, or after changes like a CLUSTER of a table, you can see the I/O reduction through fewer buffer accesses. So it gives you more information to understand why something got faster. Then in the latter part of the post, he talks about the benefits of thin cloning, which relates to one of the products they work on: being able to make very quick copies of the database using copy-on-write techniques, such as through ZFS, to do performance testing. But this was an interesting blog post that provided a lot of useful information, and if you want to learn more, definitely check it out.
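For reference, adding that option is just one extra keyword in the parentheses. Here's a minimal sketch (the orders table and the sample plan lines are made up for illustration):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM orders
    WHERE customer_id = 42;

    -- The plan nodes then include buffer counts alongside the timings,
    -- along the lines of:
    --   Index Scan using orders_customer_id_idx on orders ...
    --     Buffers: shared hit=5 read=12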
The next piece of content is Five Tips for a Healthier Postgres Database in the New Year. This is from Crunchydata.com, and the first tip they mention is to set a statement timeout. So basically set it so that statements cannot exceed a certain duration when being run. Now, I would advocate not doing this at the database level or at the Postgres cluster level, but instead doing it for, say, a user role. You wouldn't necessarily want to limit the statement timeout for the postgres user, or even for a full database, because what if you're going to create an index or run something else that takes longer than 60 seconds? I would actually advocate doing it at the role level, so that your application users, who presumably belong to that role, would be constrained by the statement timeout. Also, as he mentions here, it's important to set an idle-in-transaction timeout as well. The second tip is to ensure you have query tracking. Basically this means using pg_stat_statements, so I highly encourage you to set this up if it's not already done for your database, so that you can track all the statements running through the system. This is the main tool or feature I use to find slow queries when I'm doing a performance improvement for a client. The third tip is to log slow-running queries, and this can be done by editing your Postgres configuration and setting log_min_duration_statement low. More recent versions of Postgres even allow you to do sampling, as opposed to logging every single statement. The next tip is to improve your connection management. Now, he does mention that Postgres 14 had some great improvements in this area, but if you're running an earlier version, or even on 14 where there are probably still some benefits, you could use poolers like PgBouncer to support many more connections to the database without the database having to hold the same number of connections. So basically you could have, say, 1,000 connections on the application side while the database only sees, say, 100 connections. The next tip is to find your Goldilocks range for indexes. What I think this means is finding the right balance of indexes: not too few, not too many, because too many will impact your transaction throughput, but the right balance for your system. So definitely a great set of tips. So if you're interested in learning more
[00:05:18] Speaker B: About this, you can check out this blog post.
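As a rough sketch of the role-level approach to those first tips (the app_user role name and the specific values here are hypothetical):

    -- Apply timeouts to the application's role rather than the whole
    -- cluster, so maintenance work like CREATE INDEX isn't cut off.
    ALTER ROLE app_user SET statement_timeout = '60s';
    ALTER ROLE app_user SET idle_in_transaction_session_timeout = '60s';

    -- Tip two: enable query tracking (pg_stat_statements must also be
    -- in shared_preload_libraries, which requires a restart).
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- Tip three: log statements slower than one second.
    ALTER SYSTEM SET log_min_duration_statement = '1s';
    SELECT pg_reload_conf();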
[00:05:20] Speaker A: The next piece of content is A short summary of the year 2021 in the PostgreSQL ecosystem. This is from Migops.com, and it's a review of all sorts of things that happened this year, including PostgreSQL being named DBMS of the Year for 2021, Postgres 14 and its new features, the minor versions released, and some extensions. So if you're interested in a review post, you can check this one out. Another related review post is Postgres in 2021: an observer's year in review, and this is from someone who actually participates in the Postgres project and mentions a few things he's observed. One thing is a 30x improvement when doing certain IN clauses that just happened to come with Postgres 14, and he mentions some other performance improvements as well. He also mentions that there are some patches looking at extending the 32-bit transaction IDs to 64-bit, which would potentially avoid the problems of transaction ID wraparound on highly active systems. This could really improve the situation, but of course it would be a big change, and the proposal here only gets us a little bit of the way there, while some people are saying we really need to take this on seriously. So it's not anything he believes will land in Postgres 15; it may be Postgres 16 or later before this type of change gets in. He also talked a little bit about some changes with regard to EXPLAIN, as well as some other general improvements in Postgres for the year. So if you're interested in another review post, you can check this out.
Next piece of content: PG Phriday: Defining High Availability in a Postgres World. This is from Enterprisedb.com, and this is an aspirational discussion of high availability and what it should look like for Postgres. This is a great post to read through because it talks about the perspective of where Postgres has been, and essentially still is now, in that it was built as a single-node relational database system, and these other nodes have come up through replication, relying on the write-ahead log to create the additional nodes. There have been other features added along the way, like making standby nodes synchronous or allowing streaming feedback from a replica to the primary, but they're still, for the most part, independent nodes. And he's taking a first-principles look at what changes would be a good idea to make it an actual high availability solution. Because, as he mentions here, Postgres isn't a cluster; it's not like some of the clustered databases. And we're going to look at a YouTube video later that talks about truly distributed databases that are PostgreSQL compatible. So this post goes through different iterations of what a high availability system looks like, and he thinks a potentially good way to go is to have essentially a listener that all the applications connect to, which is responsible for identifying who the primary is, who the replica or the standby is, et cetera, with some sort of consensus system to manage the instances. But very interesting post. If you want to learn more, you can check it out.
The next piece of content is Postgres data dictionary for everyone. This is from Rustprooflabs.com, and he's talking about the extension PgDD, a data dictionary that many different people who use the database could use. Now, I wasn't necessarily interested in this at first, because you can easily use psql's backslash commands to get insight into the data dictionary, or you can go directly to the system catalogs. But this has an interesting use case of being much easier to use: it just has a columns view, a functions view, a schemas view, a tables view, and a views view. And I can see this appealing because it's potentially easier to use, very consistent, and you can just assign permissions to whomever you want, maybe a data scientist or an analyst like they mention here, or someone in some business unit, to be able to get access to the data and understand how it's being stored. So it's definitely an interesting use case, and if you want to learn more
[00:09:39] Speaker B: You can check out this blog post.
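As a hedged sketch of how you might use it, assuming the extension exposes its views in a dd schema as its documentation describes, and with a hypothetical analyst role:

    CREATE EXTENSION IF NOT EXISTS pgdd;

    -- Browse the data dictionary without touching the system catalogs;
    -- the extension provides views such as dd.schemas, dd.tables,
    -- dd.columns, dd.views, and dd.functions.
    SELECT * FROM dd.columns LIMIT 10;

    -- Grant read access to, say, an analyst role (hypothetical name).
    GRANT USAGE ON SCHEMA dd TO analyst;
    GRANT SELECT ON ALL TABLES IN SCHEMA dd TO analyst;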
[00:09:41] Speaker A: The next piece of content is actually a YouTube video, and it was introduced to me by Gene in a comment on one of the previous episodes of Scaling Postgres where I was talking about how Aurora is not Postgres. He was saying, well, actually there are some pretty big similarities between Aurora and Postgres when you're not talking about the file system. So a lot of it is similar, but where it differs a lot is how it stores its data: Aurora uses a more distributed storage system for its database. The video is from the Yugabyte YouTube channel, and it analyzes the PostgreSQL compatibility of Amazon Aurora, Spanner, YugabyteDB, and CockroachDB. So if you're interested in distributed databases and how they differ from PostgreSQL, I definitely encourage you to check out this YouTube video.
The next piece of content is Fixing a PostgreSQL cluster that has no superuser. This is from Endpointdev.com, and basically the way you do this is to run Postgres in single-user mode. So if you happen to run into the problem that you can't connect because essentially the admin user, the main user of the Postgres cluster, has no permissions, you can start it in single-user mode and assign those permissions again. Here he did an ALTER ROLE postgres SUPERUSER so it gets back the proper permissions, and then you can start up the server as normal. So if you run into that issue
[00:11:07] Speaker B: You can keep a hold of this blog post.
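As a rough sketch of those recovery steps (the data directory path here is illustrative):

    -- From the OS shell, start the backend in single-user mode against
    -- your data directory, connecting to the postgres database:
    --   postgres --single -D /var/lib/postgresql/14/main postgres
    -- Then, at the single-user backend prompt, restore the privileges:
    ALTER ROLE postgres SUPERUSER;
    -- Exit with Ctrl-D and start the server normally again.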
[00:11:09] Speaker A: The next piece of content is Ecommerce customer names with interesting Unicode characters. This is also from Endpointdev.com. He's talking about how a particular ecommerce customer started seeing all these emojis, as early as 2015, in usernames or names that people had entered. The names shown here are fake, but these are the types of emojis that were being placed into the names, and there are also different emojis for roles and everything. So a lot of these emojis are winding up in our databases, and he includes at the bottom a way to look through Unicode ranges to actually identify some of them. So if you're interested in that, you
[00:11:49] Speaker B: Can check out this blog post.
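As a small sketch of that kind of range check (the customers table is made up; the \u escapes work in Postgres's default advanced regular expressions):

    -- Find names containing anything outside printable ASCII,
    -- which will catch emojis and other interesting characters.
    SELECT id, name
    FROM customers
    WHERE name ~ '[^\u0020-\u007E]';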
[00:11:51] Speaker A: The next piece of content. Kubernetes plus postgres cluster from scratch on Rocky Eight. This is from Crunchydata.com and they're talking about there's a new version of Kubernetes to release 1.22 and it has better support for C group version two as well as Linux swap. Well, they took this version of Kubernetes and got a postgres cluster running on it using their Crunchy data operator version five and doing it on Rocky Eight, which is a Red Hat Enterprise Linux compatible distribution. So if you want to learn more about how to set that up, definitely.
[00:12:24] Speaker B: Check out this blog post.
[00:12:26] Speaker A: The next piece of content is actually a two-post series from Dev.to in the Yugabyte section that talks about different data layouts and ways to keep data up to date using triggers and stored procedures. So if you're interested in that, you
[00:12:37] Speaker B: Can check out this blog post.
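As a generic illustration of that pattern, not the series' own code, here is a trigger function keeping a running total current (all table and column names are made up):

    CREATE TABLE orders (id bigint PRIMARY KEY, amount numeric NOT NULL);
    CREATE TABLE order_totals (total numeric NOT NULL DEFAULT 0);
    INSERT INTO order_totals DEFAULT VALUES;

    -- A stored procedure (trigger function) that updates the summary
    -- row whenever a new order is inserted.
    CREATE FUNCTION bump_order_total() RETURNS trigger AS $$
    BEGIN
      UPDATE order_totals SET total = total + NEW.amount;
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_total_trg
      AFTER INSERT ON orders
      FOR EACH ROW EXECUTE FUNCTION bump_order_total();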
[00:12:38] Speaker A: Also from Dev.to in the Yugabyte section is another series that talks about creating a rate-limiting API and avoiding race conditions by using the serializable isolation level. The first post here is SQL to avoid data corruption in race conditions: serializable. This is basically using the transaction isolation settings to be able to run as fast as you can without running into race conditions, and if a transaction runs into a problem, Postgres essentially cancels that statement and it has to redo the work. So this is a technique for writing software where you don't have to deal as much with explicit locks, which could potentially let you go a little bit faster, as long as you're willing to deal with some failures. So if you're interested in that, you can check out this whole series of blog posts.
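As a minimal sketch of the technique (the api_calls table and values are made up):

    -- Run the check-and-record logic at SERIALIZABLE; Postgres will
    -- abort one of any pair of transactions whose interleaving could
    -- produce a result that no serial ordering could.
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM api_calls
    WHERE user_id = 42 AND called_at > now() - interval '1 minute';
    -- ...the application decides whether the call is allowed...
    INSERT INTO api_calls (user_id, called_at) VALUES (42, now());
    COMMIT;
    -- On SQLSTATE 40001 (serialization_failure), the application
    -- simply retries the whole transaction.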
Next piece of content: the PostgreSQL Person of the Week is Vibhor Kumar. So if you're interested in learning about Vibhor and his contributions to Postgres, definitely
[00:13:41] Speaker B: Check out this blog post.
[00:13:44] Speaker A: The last two pieces of content are episodes of the Rubber Duck Dev Show that we presented over the break. The first one is on when you should mock or stub during your tests. The second episode is a talk about Ruby and Rails with Andrew Mason, who helps run the Remote Ruby podcast as well as the Ruby Radar newsletter. So if you're interested in this long-form, developer-based content, definitely check out our shows.
That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to Scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.