Sequential UUIDs, Running Totals, Read Committed, TOAST | Scaling Postgres 41

Episode 41 | December 03, 2018 | 00:14:26

Hosted By

Creston Jamison

Show Notes

In this episode of Scaling Postgres, we review articles covering sequential UUIDs, running totals with window functions, read committed vs repeatable read and TOAST.

To get the show notes as well as get notified of new episodes, visit: 

https://www.scalingpostgres.com/episodes/41-sequential-uuids-running-totals-read-commmitted-toast/

Episode Transcript

[00:00:00] In this episode of Scaling Postgres, we talk about sequential UUIDs, running totals, read committed versus repeatable read, and TOAST. I'm Creston Jamison, and this is Scaling Postgres, episode 41.

[00:00:23] Alright. Our first article this week is "Sequential UUID Generators", and this is from the 2ndQuadrant.com blog. Now, UUID of course means universally unique identifier, and one of the issues that can crop up when using them in a database system is that they are not sequential; they're entirely random. So if you want to use one as a primary key, which has a unique index, every time you add a new UUID the database has to find the place in the index where that value is properly ordered. It's much easier to do that with a sequential integer, where you're essentially just appending to the index, whereas with random UUIDs the right location has to be found somewhere in the whole index for each new value generated. So what this blog post talks about is imparting a little bit less randomness, so that you get somewhat more sequential UUIDs. Now, they talk through some of the issues that entirely random UUIDs can cause: slow insert performance, maybe even slow select performance, because not as much of the recently changed data is cached when inserts land all over the index. They also talk about write amplification, where your WAL is larger due to full-page writes, because again you're touching many different pages, for example when updating an index. They cover some naive implementations for making UUIDs a little bit sequential: maybe you put an integer from a sequence in front of a UUID and append some random bits, or, as they say here, put a 32-bit Unix timestamp in front followed by random bits. But there's actually an extension that does this for you, called sequential-uuids. It supports two modes. One is uuid_sequence_nextval, where you specify the sequence you want to use, a block size and a block count. The other is uuid_time_nextval, with an interval length and a block count. They show roughly what the implementation looks like for the sequence-based versus the time-based generator. Now, what makes this a little bit different is that the sequence or the interval actually wraps around after so many UUIDs are generated, to keep a little bit of that randomness while diminishing the downsides of the UUID being entirely random.

And this is the part where I think the blog post gets even more interesting, because they did a benchmark. They compare four UUID generators: uuid_generate_v4, which is entirely random; one based on time, with the configuration they show; one based on a sequence with a block size of 256; and one based on a sequence with a block size of 64K. And they tested three different scenarios. Small essentially means they start from an empty table and just do inserts, measuring how many transactions per second they can achieve. Medium means the table essentially fits in memory, and large means it no longer fits in memory and essentially has to go to disk. What's very interesting is that you clearly see the cost of having entirely random UUIDs versus some sort of sequence imposed upon them. You can see how much performance drops off, to the point of being approximately five times slower at the large stage.
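As a rough sketch of how using the extension might look: the function names follow the description above, but the exact signatures, parameter values, and the table and sequence names here are assumptions for illustration, not taken from the article.

-- Hypothetical usage sketch of the sequential-uuids extension; adjust names
-- and parameters for your own schema.
CREATE EXTENSION IF NOT EXISTS sequential_uuids;

-- Sequence-based mode: the UUID prefix advances as the backing sequence grows,
-- wrapping around after block_count blocks of block_size values.
CREATE SEQUENCE order_uuid_seq;

CREATE TABLE orders (
    id uuid PRIMARY KEY
        DEFAULT uuid_sequence_nextval('order_uuid_seq'::regclass, 256, 65536),
    placed_at timestamptz NOT NULL DEFAULT now()
);

-- Time-based mode: the prefix advances every interval_length seconds,
-- wrapping around after block_count intervals.
CREATE TABLE events (
    id uuid PRIMARY KEY DEFAULT uuid_time_nextval(60, 65536),
    payload jsonb
);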
And even the sequence generator with a block size of 256, which wraps the sequence pretty quickly, only goes about half as fast. So that's for inserts. Then it also looks at write amplification. Looking at the small data set, so an entirely empty table receiving inserts, you can already see a difference due to the full-page write images that have to be done. Looking at the medium data set, the WAL size gets even worse: it looks like about five times as much WAL being written, and it's even worse than that once you normalize by how many transactions were actually processed, maybe 20 times as much compared to using the extension with the time-based generator or a sequence with a 64K block size. Then they show the large data set, normalized as well, and it still gets pretty bad, though they note that sequential writes on the disk they're using are pretty quick. It clearly shows the WAL-size ramifications of using entirely random UUIDs. Then they look at the cache hit ratio, so essentially select performance. Again, when the table is small there's pretty much no difference; at medium you see the entirely random generator dropping off; and at large, when the data no longer fits in memory, the cache hit ratio drops to around 85%, whereas the sequence with a 64K block size and the time-based generator stay up at approximately 99%. So I thought this post was super interesting, and I loved how they really did an analysis to see how well each option performs. You can really see the cost related to random UUIDs. Now, they tested a 256 block size and a 64K block size and did see differences, but the best setting for you really depends on the table, so it's something you may need to experiment with to find the most optimized settings. So if you're using UUIDs for primary keys, or even just using them in an index, this is something to keep in mind: if you don't need an entirely random set of UUIDs, could you impose some sort of sequence on them to potentially get better performance out of your database? This is definitely a blog post to check out.

[00:06:24] The next post is "Easy running totals with windowing functions in PostgreSQL". This is from the Musings, Rants and Ponderings of a DB Architect blog at denisgobo.blogspot.com. Now, this is a relatively short blog post, but he basically talks about using a window function to generate running totals. So you have charges coming in, and as each charge happens you want to display a running total, essentially the balance of an account. And this post was really great in terms of going over exactly the code you need and explaining everything: what does ROWS BETWEEN mean, what does UNBOUNDED PRECEDING mean, what's CURRENT ROW. So if you haven't gotten into window functions for purposes like running totals, this was a really great intro that explains all the details behind it, with a really good example. So I definitely suggest you check it out if that's of interest.
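Here is a minimal sketch of that running-total pattern; the charges table and its columns are invented for illustration rather than taken from the post.

-- Hypothetical charges table for the running-total example.
CREATE TABLE charges (
    id         bigserial PRIMARY KEY,
    account_id integer        NOT NULL,
    charged_on date           NOT NULL,
    amount     numeric(10, 2) NOT NULL
);

-- Running total (account balance) per account, ordered by charge date:
-- the frame ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW sums every
-- earlier row in the partition plus the current one.
SELECT charged_on,
       amount,
       SUM(amount) OVER (
           PARTITION BY account_id
           ORDER BY charged_on, id
           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
       ) AS running_total
FROM charges
ORDER BY account_id, charged_on, id;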
[00:07:25] The next post is "Transactions in PostgreSQL: READ COMMITTED vs. REPEATABLE READ". This is from the cybertec-postgresql.com blog. Now, this one is talking about transaction visibility, and they explain what Read Committed is. Basically, when you're in a transaction and you query a table, you're able to read whatever has been committed. So if, in the middle of your transaction, another user commits something, you'll see it the next time you run a statement within your transaction; essentially, if it's committed, you can read it. The next option is Repeatable Read. What that means is that when you start your transaction, you're essentially saving the state of what the data looks like, and even if other users commit changes, you're not going to see a difference within your repeatable-read transaction. Now, by default, Read Committed is the mode that's enabled, and it's usually the one you want to keep if you're running an online transaction processing load. However, Repeatable Read may have uses in the case of a data warehouse. They have a graphic showing an example where one session uses the Read Committed isolation level: when user three does an insert into the table, that session sees the committed change reflected in its count, whereas user two, who is using Repeatable Read, consistently gets 17 while within that transaction. So if you've ever been curious about the difference between Read Committed and Repeatable Read for transaction isolation, this is definitely a blog post to check out.
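As a small sketch of the difference (the accounts table is invented; the count of 17 simply echoes the article's example):

-- Session A, Read Committed (the default): each new statement sees data
-- committed by other sessions in the meantime.
BEGIN;
SELECT count(*) FROM accounts;   -- say this returns 17
-- ... another session inserts a row into accounts and commits ...
SELECT count(*) FROM accounts;   -- now returns 18
COMMIT;

-- Session B, Repeatable Read: the snapshot taken by the first query is kept
-- for the whole transaction, so later queries keep seeing the same data.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM accounts;   -- returns 17
-- ... another session inserts a row into accounts and commits ...
SELECT count(*) FROM accounts;   -- still returns 17
COMMIT;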
[00:08:57] The next post is "The best Postgres feature you're not using: CTEs, aka WITH clauses", and this is from the craigkerstiens.com blog. It essentially covers what CTEs, which stands for common table expressions, or essentially WITH clauses, are. I like to think of them as self-contained subqueries. You could easily write queries using subqueries, but CTEs essentially pull those subqueries out and give them their own name. For example, his example defines what I'll call a query named users_tasks, as in WITH users_tasks AS (...), where you define the query you want it to hold, and then you can chain CTEs together and use them. Now, he does have a caveat here that it may not be quite as performant SQL, but it definitely wins the award for being more readable, because it's much easier to parse a SQL statement in bits and pieces. So if you haven't used CTEs and you're potentially wanting to use them, definitely a blog post to check out.

[00:10:03] The next post covers toasting up large objects: BLOBs, CLOBs, let's talk TOAST. This is talking about TOAST, which stands for The Oversized-Attribute Storage Technique. Now, PostgreSQL, and they mention EDB Postgres here as well, uses a fixed page size, commonly 8 kB, and it doesn't allow a tuple, essentially a row, to span multiple pages, so it's not possible to store very large field values directly. Essentially, it creates a pointer to a TOAST table, and as they say, quote, "almost every table you create has its own associated, unique TOAST table, which may or may not ever end up being used, depending on the size of the rows you insert; a table with only fixed-width columns like integers may not have an associated TOAST table." Again, most of this is transparent to the user and enabled by default. The post goes into how these are used and a little bit of the implementation; TOAST is typically used for bytea and text fields that can store very large values. Then he talks about the difference between BLOBs and CLOBs. For example, bytea and text are TOAST-able, so those columns can simply be placed in the TOAST table; you don't need to use a special large-object API, transactions aren't required, and you don't have to track an OID, as you do with large objects. And this is something of particular interest as you're scaling: the storage limit for a TOASTed value is 1 GB, whereas using the separate BLOB/CLOB large-object approach the limit is 4 GB. So if you want a little bit more information about TOAST and how it works, this is definitely a blog post to check out.

[00:11:47] The next post is "How Postgres is more than a relational database: extensions", and this is from the citusdata.com blog. Essentially they're talking about how one of the things that makes Postgres particularly interesting to use as a data storage system is its ability to accept extensions. So yes, new and great features continue to be added to core, but extensions allow other communities, groups, or even companies to build on the core PostgreSQL database system and add functionality to it. He goes over some extensions in the real world, as he says here. One area is custom data types; for example, he talks about HyperLogLog, which basically does really fast approximate counts. He talks about monitoring in terms of pg_stat_statements, an extension you can add to get analytics on the queries taking place. He talks about PostGIS, which essentially, quote, "turns Postgres into the world's most advanced open source geospatial database"; about their own extension, Citus, which transforms Postgres into a distributed, horizontally scalable database; and even about accessing data outside of Postgres using foreign data wrappers. Basically, extensions are a way for you to add capabilities that others in the community have built, or even to build your own. So if this is of interest to you, definitely a blog post to check out.

The next blog post is "GUI tools for Postgres", and this is from the PG IO blog; it's essentially a list of GUI tools. First is pgAdmin; it's the one I tend to use when I've used one. They also talk about OmniDB and DBeaver, and they have a great list of different GUI tools. So if you're in the market for one, this is a potential blog post to check out. Now, related to that, the last post is a follow-up to the webinar "Introduction to OmniDB", and this is from the 2ndQuadrant.com blog. So again, the second GUI tool listed in the previous blog post was OmniDB, so if you're in the market for a GUI tool, maybe check out this webinar to see if it has the features you're looking for.

[00:14:03] That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.

Other Episodes

Episode 265

May 14, 2023 00:16:47

pg_stat_statements, Transaction ID Wraparound, Consultant Knowledge, CitusCon | Scaling Postgres 265

  In this episode of Scaling Postgres, we discuss different ways to use pg_stat_statements, how to handle transaction ID wraparound, consultant knowledge and all the...

Episode 217

May 29, 2022 00:16:33

Schema Change Mistakes, Canceling Statements, pg_rman, Pedantry Removal | Scaling Postgres 217

In this episode of Scaling Postgres, we discuss mistakes you can make when doing schema changes, how best to cancel statements, looking into pg_rman,...

Episode 149

January 24, 2021 00:12:23

ARM Tests, Tips & Tricks, Hierarchical Structures, Benchmarking Framework | Scaling Postgres 149

In this episode of Scaling Postgres, we discuss tests of Postgres on ARM processors, 2021 tips & tricks, working with hierarchical structures and creating...
