BRIN Pitfalls, pg_stat_statement Troubleshooting, Natural Join, Geography Type | Scaling Postgres 126

Episode 126 | August 09, 2020 | 00:14:13

Hosted By

Creston Jamison

Show Notes

In this episode of Scaling Postgres, we discuss BRIN pitfalls, using pg_stat_statements for troubleshooting, natural joins to detect duplicates and the geography type.

To get the show notes as well as get notified of new episodes, visit: 

https://www.scalingpostgres.com/episodes/126-brin-pitfalls-pg_stat_statement-troubleshooting-natural-join-geography-type/


Episode Transcript

[00:00:00] In this episode of Scaling Postgres, we talk about BRIN pitfalls, pg_stat_statements troubleshooting, natural joins and the geography type. I'm Creston Jamison, and this is Scaling Postgres episode 126.

[00:00:22] All right, I hope you, your family and coworkers continue to do well. Our first piece of content is "Avoiding the Pitfalls of BRIN Indexes in Postgres". This is from Crunchydata.com, and BRIN is a block range index. Now, in this post they talk about blocks and pages, but they're pretty much interchangeable; the block is the default storage unit of Postgres, and a block range index indexes ranges of blocks. So it doesn't point you at the specific row, but at a range of blocks to look for the data in, and that's what usually makes a BRIN index so much smaller. By default, it stores ranges of 128 blocks. Now, to go over the pitfalls, they first create a test table using generate_series to produce a series of timestamps, basically one row per second, and they use the pageinspect extension so you can actually look at what the index contains. So for block number zero the stored range is here, and then at block number 128 the stored block range is here, and they give a little graphical representation of what sits within each page or block. So you have multiple blocks, and that block range covers zero to 128; this is essentially the first row of the index. When you search for a particular value, Postgres has to scan through that whole block range to find the exact rows you want. The index gets you close, but not to the exact row the way a B-tree index would, which is why a B-tree is faster for pulling out single values than a BRIN index. Then they looked at how many pages their table was using against the row count, which was 1 million, and there are roughly 20,000 records covered by each BRIN entry, so you can see why BRIN is so small. They also have a really great chart showing the number of records, the relative table size, the B-tree size and the BRIN size. The BRIN index is about tenfold smaller at a small number of records, but it looks closer to 1,000 or even 10,000 times smaller, two megabytes versus 21 GB, once the table gets very large. So BRIN's space efficiency goes up as the record count goes up. Then they show how fast retrieval is at these different table sizes. As you can see, the B-tree index is relatively fast at pulling out a single record, whereas the BRIN index appears to be about ten times slower for that single record. But when you're looking at ranges, the performance gets much closer, so much so that when pulling a month of data from a billion-row table, the BRIN index is actually a little bit faster than the B-tree index. Their conclusions: one, BRIN's space savings over B-tree are significant; two, B-tree generally outperforms BRIN for single-record queries; three, the query planner didn't even consider the BRIN index until there were at least 100,000 records in the table; and four, BRIN performs better, comparatively, the more records you're retrieving.

[00:03:43] So the question is, when should you use BRIN? And they have a very good statement here: use BRIN when you have a large, insert-only, in-order table with large select results.
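To make that setup concrete, here is a minimal sketch of the kind of test table and BRIN index the post describes; the table and column names are illustrative assumptions, not the post's exact schema.

    -- Illustrative sketch only; table and column names are assumptions,
    -- not the exact schema used in the Crunchy Data post.
    CREATE TABLE brin_demo (
        id          bigint GENERATED ALWAYS AS IDENTITY,
        created_at  timestamptz NOT NULL,
        payload     text
    );

    -- One row per second, inserted in order -- the ideal case for BRIN.
    INSERT INTO brin_demo (created_at, payload)
    SELECT ts, 'row ' || ts
    FROM generate_series(timestamptz '2020-01-01',
                         timestamptz '2020-01-12',
                         interval '1 second') AS ts;

    -- A BRIN index stores a min/max summary per range of pages
    -- (128 pages per range by default; pages_per_range is tunable).
    CREATE INDEX brin_demo_created_at_brin
        ON brin_demo USING brin (created_at) WITH (pages_per_range = 128);

Comparing pg_relation_size() of this index against an equivalent B-tree on created_at shows the kind of size difference the post's chart is getting at.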
So basically, it works well when you have a lot of records in the table because, again, it's operating on block ranges; insert-only because order is important, as we'll see in a second; and with large select results because you're pulling out a lot of records. If you have a huge table but you're only pulling out one record at a time, just stick with a B-tree index. But if you're rolling up data like they show here, by day, month or year, then BRIN starts to make more sense. Now, what to look out for is when you have things out of order, because then the block ranges don't line up neatly and your BRIN index loses efficiency, and they have a great illustrative guide showing it. Basically, this is the record layout for the pages, and at the top it shows you the range of records included within each page; that's the range the BRIN index will be looking at. Now, when you have a delete, you remove a record, and once that space is vacuumed it gets cleaned up, so the record is no longer there. But with a new insert, that space can be reused, and say the new record gets inserted in the page where four was. Now, if you look, the new range is one to 19, so if it needs to look up the value ten, for example, it has to look in two different ranges, which is a lot more data to search through than the single block range before. That's why it's important to use a BRIN index with contiguous data values, like a sequence or an ever-increasing timestamp. Updates have an issue too. Say you update twelve to four: twelve gets deleted, because that's how an update works in Postgres, and a new row is inserted in a free page, which widens that page's block range. So instead of covering 17 to 18, it now covers four to 18, and if you try to select twelve, twelve now falls within the range of all three of these blocks, so all three block ranges have to be searched. They then ran a test doing 20% deletes and 20% inserts, and you can see how the performance degrades. Looking at 100 million records, retrieving one record goes from ten milliseconds to three to eight seconds, and whether you pull an hour, a day or a month, it doesn't really matter much. Even at 10 million records, one record goes from eight milliseconds to 680 milliseconds because of all the additional ranges it has to search through. So in terms of the pitfalls of BRIN, you want to keep the data contiguous and avoid updates and deletes as much as possible; BRIN works really well for append-only tables. If you're interested in using BRIN, I highly suggest you check out this post. It was really great for understanding how BRIN works and some of its pitfalls.

[00:06:54] The next post is "A Quick pg_stat_statements Troubleshooting Hack". This is from cybertec-postgresql.com. They have a scenario where you need a monitoring system but don't have one implemented yet, though you do have pg_stat_statements. So basically, what they do is create a snapshot table that captures the contents of pg_stat_statements over time, tagging each snapshot with a timestamp column.
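As a rough sketch of the idea, not the post's exact code, such a snapshot table can be created straight from the pg_stat_statements view, assuming the extension is already installed:

    -- Quick-and-dirty sketch; assumes the pg_stat_statements extension is installed.
    -- UNLOGGED skips WAL writes (one of the improvements mentioned below), at the
    -- cost of losing the snapshot data after a crash.
    CREATE UNLOGGED TABLE stat_statements_snapshots AS
    SELECT now() AS created_on, *
    FROM pg_stat_statements
    WHERE false;                      -- copy the column layout only, no rows yet

    -- Run periodically, e.g. from psql with \watch, to accumulate snapshots.
    INSERT INTO stat_statements_snapshots
    SELECT now(), *
    FROM pg_stat_statements;

Queries over this table can then diff the counters between two created_on values to see which statements did the most work in that window.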
In the post, the snapshot table has the timestamp column followed by all the remaining columns from pg_stat_statements, and then, very quick and dirty, they just set up a watch to repeatedly insert the current pg_stat_statements data into the snapshot table. Of course, you can implement this in different ways, but it is, as they call it, a troubleshooting hack to get some statistics over time on how queries are performing. Then they give you some different queries you can use to analyze the data; for example, which top three queries were burning the most CPU time between times t1 and t2, as well as which queries had the biggest absolute execution time compared to the hour before. He also says you can improve this setup in a few ways. The first is to create the snapshot table as an unlogged table, which means it's not written to the WAL and not sent to the replicas; of course, it's no longer crash safe, so you could lose the data at any time, but it gives better performance and doesn't impact the system as much. The second is to not include every query from pg_stat_statements in each snapshot, and the third is to skip rows in the snapshot table that haven't changed since the last snapshot. So, this is a pretty interesting hack for monitoring the queries executing in your Postgres system, and if you're interested, check out this post from cybertec-postgresql.com.

[00:09:00] The next post is "Use NATURAL FULL JOIN to compare two tables in SQL". He had seen a post using UNION and EXCEPT to compare two tables and show the differences, but he felt a natural full join was a better fit. This compares all the rows between the two tables and finds the differences, and his implementation shows you the differing rows as well as which table each one came from; there is a sketch of the general idea just after this section. Now, he did say this won't work if you have NULLs in your data, but he gives another implementation for that case, basically using the IS DISTINCT FROM predicate. So if you want to use this technique to compare two tables, definitely check out this post from jooq.org.

[00:09:50] The next post is "PostGIS and the Geography Type". This is from Crunchydata.com, and it explains the geography type available within PostGIS for calculating with spherical coordinates. He talks about why this matters: normally we think in flat planes when looking at a map, Cartesian coordinates with an X and a Y, but really we need to think spherically, because the Earth is a sphere. On a flat map it looks like New York sits right between Los Angeles and Paris, France, but that's not how it is on the Earth; the shortest distance actually follows the route up over the top, because along the sphere that is the shortest path. So in many situations it's important to use the geography type to work with spherical coordinates. Now, he does say that if your working area is geographically small enough, a state, a region, a country, to fit in a projected system, then using a projected system and the geometry type for all your calculations will result in simpler SQL and faster queries. But if you have global data, you'll find your system is simpler to code and reason about if you use the geography type. So if you're interested in PostGIS and learning more about the geography type, definitely check out this post; a small example of the type follows below as well.
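Going back to the natural full join comparison, here is a rough sketch of the general shape of that technique; table_a and table_b are placeholder names, and this is a paraphrase rather than the author's exact query.

    -- Sketch of comparing two tables with NATURAL FULL JOIN; table_a and table_b
    -- are placeholders. The marker columns have different names, so the join only
    -- matches on the shared data columns. Rows identical in both tables get both
    -- markers set and are filtered out; whatever remains differs between the tables.
    SELECT *
    FROM  (SELECT 'present' AS in_table_a, a.* FROM table_a a) a
    NATURAL FULL JOIN
          (SELECT 'present' AS in_table_b, b.* FROM table_b b) b
    WHERE in_table_a IS NULL
       OR in_table_b IS NULL;

    -- Note: as mentioned above, NULLs in the data columns break this simple form,
    -- since NULL never compares equal to NULL in a join condition.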
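And for the geography type itself, here is a quick sketch of the kind of calculation being discussed; it assumes PostGIS is installed, and the coordinates are only rough approximations.

    -- Quick sketch; assumes PostGIS is installed. Coordinates are rough approximations.
    -- With the geography type, ST_Distance returns meters measured along the spheroid,
    -- following the great-circle style route rather than a line on a flat map.
    SELECT ST_Distance(
             'SRID=4326;POINT(-118.24 34.05)'::geography,   -- Los Angeles (lon lat)
             'SRID=4326;POINT(2.35 48.85)'::geography        -- Paris
           ) AS meters;

    -- Casting the same points to ::geometry instead would give a planar distance
    -- in degrees, which is rarely what you want for global data.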
[00:11:11] The next piece of content is a follow-up to the webinar "Being Committed: A Review of Transaction Control Statements, 3 of 3". This is from 2ndquadrant.com, and it's the third in a series about transactions in Postgres. They cover transactions and locking, why it's important to avoid collisions, and how you can improve transaction flow by reducing contention. It also talks about the blind spot of in-flight commits: if you issue a commit and then the system crashes, you don't know whether that commit actually went through. They show a way around that, "a way to recover after failing at the worst time" is how they phrase it, and then "commit at most once", showing how to recover after failing at the worst time and in the worst way. So if you're interested in that, definitely check out this webinar from 2ndQuadrant; you can just click the link here to register for it.

[00:12:03] The next post is "Beyond jsonb: a generalized unstructured data type for Postgres". This talks about JSONB and how the types supported within it are number, boolean, string, null, array and object. But they raise the question of how you handle things such as binary data, because there's no really great type for it and you'll run into issues if you try to store it as a string. How do you handle internet types, point types, et cetera? They argue that perhaps Postgres should offer a superset of functionality on top of JSON, and they mention other implementations that have added features to the JSON standard to support more complex data types. So really, this is a discussion and opinion piece, and if you're interested in learning more, definitely check out this post.

[00:13:00] The next piece of content is "Continuous PostgreSQL Backups using WAL-G". So if you're interested in using WAL-G for your backup and restore procedures, definitely check out this post from supabase.io.

[00:13:16] The next piece of content is "NSS on Windows for PostgreSQL development". NSS, as they say here, is an alternative TLS library to OpenSSL, so if you want to use that with Postgres on Windows, you can definitely check out this blog post from 2ndquadrant.com.

[00:13:35] And the last piece of content is that the PostgreSQL Person of the Week is Tatsuo Ishii. So if you're interested in learning more about Tatsuo and his contributions to Postgres, definitely check out this blog post.

That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.

Other Episodes

Episode 139

November 08, 2020 00:12:37

Weekly News, Full-Text Search Performance, pg_cron, Bulk Data Loading | Scaling Postgres 139

In this episode of Scaling Postgres, we discuss Postgres weekly news, full-text search performance, enhancements to pg_cron and the best way to bulk load...

Listen

Episode 202

February 14, 2022 00:12:49

New Postgres Releases, A Hairy Incident, Slow Down To Go Faster, Loadable Module Archiving | Scaling Postgres 202

In this episode of Scaling Postgres, we discuss the new releases of Postgres, a hairy upgrade incident, why slowing down can make you go...

Listen

Episode 95

January 06, 2020 00:11:51

max_locks_per_transaction, Logical Replication, Backup & Recovery, Pgbouncer | Scaling Postgres 95

In this episode of Scaling Postgres, we discuss max_locks_per_transaction, logical replication, managing backup & recovery and pgbouncer connection pooling. To get the show notes...

Listen