Researching Performance, Postgres Survey, pgAdmin Survey, Long Running Queries | Scaling Postgres 227

Episode 227 | August 07, 2022 | 00:08:53

Hosted By

Creston Jamison

Show Notes

In this episode of Scaling Postgres, we discuss research into a performance puzzle, results from a Postgres survey and a pgAdmin survey, as well as the impact of long-running queries on Aurora and Postgres.

To get the show notes as well as get notified of new episodes, visit: 

https://www.scalingpostgres.com/episodes/227-researching-performance-postgres-survey-pgadmin-survey-long-running-queries/


Episode Transcript

[00:00:00] In this episode of Scaling Postgres, we talk about researching performance, a Postgres survey, a pgAdmin survey, and long-running queries. I'm Creston Jamison, and this is Scaling Postgres, episode 227.

[00:00:22] All right, I hope you, your friends, family and coworkers continue to do well. Our first piece of content is "Researching the performance puzzle". This is from ardentperf.com, and he's referencing a blog post from, I believe, last week or the week before last, where he posed a performance puzzle: he had a table with two different bigint columns, same indexes, same data type, same values contained within. But one was super fast to query when pulling back about half the values within a range, whereas the same query against the other column took forever. The question was, what's the difference? Basically, the second bigint column had a random distribution of the IDs in it, so the puzzle demonstrated how sequential data can be queried much faster. Now, that column already had an index on it, so why would the query take a while? Because the index itself is ordered, but the query still had to go to the heap to pull data from another column in the table, so it was still doing random I/O across the heap to pull back that data.

In this post, he goes into a lot more detail about why we're seeing this behavior. He set up an EXPLAIN ANALYZE for both queries, analyzing the buffers as well, and one thing that was obvious, he says, quote, "our first query only needed to read 5,533 database blocks to find the 500,000 rows it was looking for. But in the same table, the second query needs to read 378,035 blocks to find the 500,000 rows it's looking for." So that's a big difference between a sequential access and what is essentially a random access. What's interesting is that the second query has to get 500,000 rows, and there were about 123,000 shared hits on the shared buffers, whereas the shared reads were about 378,000, which adds up to roughly 500,000. That makes sense. So why does the first query only need to access around 5,000 database blocks? When the database pulls a given block, it knows the data for additional rows is already in there and doesn't count another hit for it; essentially, it's able to reuse the data that's already in that block. That's the efficiency he talks about here.

Then he asked: what if I make the number of blocks equal, so that a random access reads the same number of blocks as a sequential access? What does that look like? Once he calculated a read latency, he found that the first, sequential column had a read latency of 0.14 milliseconds, so super fast, whereas the latency for the second column was 0.4 milliseconds, which he actually considers a reasonable number. And he believes that Postgres is lying with regard to that fast latency, because how is that possible? He did some more analysis, and basically what's happening is that Postgres relies on the underlying operating system to do some of the work, including caching, and what's really going on is Linux read-ahead. So when you have a sequential set of data to pull, Linux read-ahead pulls in more data than is immediately requested, and that's how Postgres is able to retrieve the data so quickly.
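To make that setup concrete, here is a rough sketch of the kind of experiment described above. The table name, column names, row count, and query range are illustrative rather than the original post's exact script: one bigint column correlates with heap order, the other is randomly distributed, and EXPLAIN (ANALYZE, BUFFERS) shows how many blocks each range query has to touch.

```sql
-- Illustrative sketch (not the original post's script): one sequential and one
-- randomly distributed bigint column, plus a payload column that forces heap access.
CREATE TABLE puzzle AS
SELECT g::bigint                    AS seq_id,   -- correlates with heap order
       (random() * 1000000)::bigint AS rand_id,  -- roughly uniform, uncorrelated
       md5(g::text)                 AS payload
FROM generate_series(1, 1000000) AS g;

CREATE INDEX ON puzzle (seq_id);
CREATE INDEX ON puzzle (rand_id);
VACUUM ANALYZE puzzle;

-- Force plain index scans so both queries take the same plan shape (demonstration only;
-- at roughly 50% selectivity the planner would otherwise prefer a sequential scan).
SET enable_seqscan = off;
SET enable_bitmapscan = off;

-- The BUFFERS counters show the difference: the seq_id query revisits the same heap
-- block for many rows, while the rand_id query touches a different block per row.
EXPLAIN (ANALYZE, BUFFERS)
SELECT sum(length(payload)) FROM puzzle WHERE seq_id BETWEEN 1 AND 500000;

EXPLAIN (ANALYZE, BUFFERS)
SELECT sum(length(payload)) FROM puzzle WHERE rand_id BETWEEN 1 AND 500000;
```

With the randomly distributed column, the shared hit and read counts in the plan approach one block per row, which is the pattern described above.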
That is the sense in which he says Postgres is "lying": the reads are really being served by the operating system's read-ahead. Then he compares what happens if read-ahead is turned off, and the overall execution time gets close to the execution time of the randomly ordered data. So the sequential data helps, but it looks like Linux read-ahead is providing a ton of the benefit as well. This was a super comprehensive performance investigation, and if you're interested in this kind of stuff, I definitely encourage you to check out this blog post.

The next piece of content is "State of PostgreSQL 2022". This is from Timescale.com, and this is pretty much the full results, I believe, from the survey. It has been mentioned in some blog posts, but here's the full survey, which asks questions such as: what's the main reason you chose PostgreSQL over other options? They talk about open source, reliability, the extension network that's available, like PostGIS, and a lot of other points in the survey. So if you're interested in this type of information, definitely check out this blog post.

The next piece of content is "pgAdmin User Survey 2022". This is from Enterprisedb.com, and they did their own survey for pgAdmin. They found things like 57% of the user base runs on Linux, 18% on macOS, and 63% on Windows. So clearly, I guess, their organizations are using Linux and Windows at the same time. They also do a breakdown of the different packages, where people got the packages from, how many desktop deployments there are, and even how people are deploying it on servers. So feel free to check that out if you're interested.

[00:05:19] The next piece of content is "Manage long-running read queries on Amazon Aurora PostgreSQL-Compatible Edition". This is from aws.amazon.com, and whereas I don't normally discuss posts on Aurora, there aren't that many Postgres posts this week. But this one was interesting because it talks about the architectural differences between Aurora and Postgres. Basically, Aurora has a shared file system between the primary and its read replicas. That has some advantages in terms of bringing up a reader very quickly, because you don't have to replicate all the data, whereas with Postgres the database files are essentially duplicated. It also makes replicas cheaper in Aurora, because you don't need to duplicate the database storage. But they also talk about some disadvantages when you want to run long queries, particularly queries that exceed the max standby delay, because the replicas eventually have to keep up with what the primary is writing, and that can cause queries to be canceled, particularly the long-running ones. They have some techniques to mitigate this. A lot of it is "make things faster": tune your queries, vacuum faster. If you have cancellations, implement retry logic as part of your application, or redirect the query to the Aurora nodes. None of these are easy ways to resolve it, but again, Postgres has the same kinds of issues with canceled queries as well.
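On self-managed Postgres, the replica-side settings behind this trade-off are hot_standby_feedback and max_standby_streaming_delay, which the next post digs into. As a minimal sketch, with illustrative values that mirror the six-hour example discussed next, they can be changed on a streaming replica like this:

```sql
-- Run on a streaming replica; both settings only need a reload. Values are illustrative.
-- Turning feedback off stops the replica from holding back vacuum on the primary,
-- at the cost of more potential replay conflicts with long-running replica queries.
ALTER SYSTEM SET hot_standby_feedback = off;

-- Let WAL replay wait up to six hours behind conflicting queries before cancelling them,
-- which also means the replica is allowed to lag the primary by up to that long.
ALTER SYSTEM SET max_standby_streaming_delay = '6h';

SELECT pg_reload_conf();
```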
But if you want to get more value out of this post, the next piece of content is "Five minutes of Postgres, episode 29: Postgres lock conflicts on Amazon Aurora and tuning max_standby_streaming_delay and hot_standby_feedback". This is from pganalyze.com, and in their "Five minutes of Postgres" they examine the Aurora post, talk through it, and also compare it to Postgres, which in some ways has a little more flexibility in dealing with this, because you can change hot_standby_feedback in Postgres but not in Aurora, where it's on by default. He talks about a use case of someone who wanted to run some long-running queries; basically, they turned off hot_standby_feedback and then set a very long, I think six-hour, max_standby_streaming_delay so that long-running queries could be handled without being canceled. I definitely encourage you to check out this episode this week as well.

[00:07:38] The next piece of content is "How to set up NFS and run PG on it". This is from Highgo.ca, and the post does exactly what it says: it shows you how to set up an NFS server, put your Postgres database storage on it, and run the database from there. So if you're interested in that, you can check out this blog post.

The next piece of content: the PostgreSQL person of the week is Beena Emerson. If you're interested in learning more about Beena and her contributions to Postgres, definitely check out this blog post.

And the last piece of content: we did have another episode of the Rubber Duck Dev Show this past Wednesday evening. This one was a discussion with Jason Charnes on the "No Estimates" movement with regard to projecting when development work will be complete. So if you're interested in this type of long-form developer discussion, we welcome you to check out our show.

[00:08:28] That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.

Other Episodes

Episode 239

October 30, 2022 00:15:18

Shared Buffers, Compression Algorithm, Merge Command, Postgres Contributions | Scaling Postgres 239

In this episode of Scaling Postgres, we discuss how to set shared buffers, the different WAL compression algorithm options, how to use the merge...


Episode 33

October 08, 2018 00:09:12

Per-Table Autovacuum, FDW Synchronization, Distinct On | Scaling Postgres 33

In this episode of Scaling Postgres, we review articles covering per-table autovacuum, FDW for data synchronization, distinct on and Postgres Open. To get the...


Episode 114

May 18, 2020 00:20:28

Data Directory, Vacuum Features, Binary Performance, Network Latency | Scaling Postgres 114

In this episode of Scaling Postgres, we discuss the Postgres data directory, new vacuum features, binary storage performance and the impact of network latency....
