Connection Scaling, Cached Sequences, Message Queuing, Harmful Subtransactions | Scaling Postgres 181

Episode 181 | September 06, 2021 | 00:21:58


Hosted By

Creston Jamison

Show Notes

In this episode of Scaling Postgres, we discuss connection scaling, how to cache your sequences, a message queuing implementation and the harms of subtransactions.

To get the show notes as well as get notified of new episodes, visit: 

 https://www.scalingpostgres.com/episodes/181-connection-scaling-cached-sequences-message-queuing-harmful-subtransactions/


Episode Transcript

[00:00:00] In this episode of Scaling Postgres, we talk about connection scaling, cached sequences, message queuing, and harmful subtransactions. I'm Creston Jamison, and this is Scaling Postgres episode 181.

[00:00:22] Alright, I hope you, your friends, family and coworkers continue to do well. Our first piece of content is "When less is more: database connection scaling." This is from richyen.com, and he's talking about the scenario where you have a max_connections setting in Postgres, and sometimes the people who manage the database want to set it very high, even though you may not be using all of those connections. The thing to keep in mind is that Postgres is process based, and even on a pretty big machine like the one he describes, with 128 CPU cores, perhaps hyperthreaded, you still have a limit on how much work you can do at any given time. So having 10,000, 20,000 or 30,000 max connections doesn't make a lot of sense, because you won't be able to use them. But there's a problem even if those connections are all idle, and that's what he mentions in the section here, because there's still work that needs to be done by all the shared components of Postgres to maintain the fact that these connections could potentially be used. This was demonstrated in a great post a number of months ago that is linked in the conclusion here, and I highly suggest you check out that post if you haven't already. It talks about work to improve the scalability of Postgres's connection handling, and it shows that when you have a high number of unutilized connections, your performance drops as you scale up those idle connections. This post essentially recreates that test. In his example, he was reaching 1,500 transactions per second with a very low number of idle connections, and then as he increased the number of connections, you can see the transactions per second of what was running started to decline. Somewhere between 10,000 and 15,000 connections, the throughput dropped to a third of what it was when there were fewer connections. Now, this was mitigated somewhat by removing autovacuum from the picture, but it eventually caught up as well. So the key takeaway is: don't set max_connections to more than you need, because you're going to run into this problem. And that's another reason to check out the linked post, because improvements are coming in Postgres 14 and future versions that should push the number of supported connections out even further and alleviate some of these problems. But of course, you can always rely on a connection pooler such as PgBouncer to support many thousands of client connections without having to run too many max connections on your actual database system. But if you want to learn more, you can check out this blog post.

[00:03:11] The next piece of content is "UUID or cached sequences?" This is from dev.to in the Yugabyte section, and he's talking about how people decide whether to use a UUID or an integer backed by a sequence as the primary key for their tables. A lot of people choose UUIDs for what he says is scalability, but he makes the argument that integers with sequences can be pretty darn scalable as well, particularly if you use bigints for the integers, and because sequences have the ability to cache values.
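As a rough illustration of that caching behavior, here is a minimal sketch (the sequence and table names are hypothetical, and the cache size of 32767 simply matches the roughly 32,000 values per request described next):

    -- Each session that calls nextval() on this sequence pre-allocates a
    -- block of 32767 values in its own backend memory, so subsequent
    -- nextval() calls don't have to touch the shared sequence until the
    -- block is used up.
    CREATE SEQUENCE order_id_seq AS bigint CACHE 32767;

    -- Use it as a table's primary key default
    CREATE TABLE orders (
        id   bigint PRIMARY KEY DEFAULT nextval('order_id_seq'),
        data text
    );

    SELECT nextval('order_id_seq');  -- first call in a session grabs a new block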
So what that means is that a session can store in memory a certain number of sequence values, not just one. So when you request a sequence value, it doesn't give you just one number, it gives you, say, the next 50. Or in his case, what he's advocating here is essentially two bytes' worth, around 32,000 sequence numbers per request. These will all be cached in memory for that particular session, so it can very quickly grab the next one without asking the central sequence generator. He shows you how to implement this and has an example with Python code where you can see the first thread pulls those 32,000 sequence values, but thread four's numbers start at around 32,000, thread five starts at about 65,000, the next set for thread two is at about 98,000, and thread three is at 131,000. So each thread is pulling two bytes' worth of these sequence numbers, and it doesn't have to request another block from the central sequence generator until it has exhausted the range that it initially retrieved. Now, of course, one downside of this is that if your session gets disconnected, you won't be able to use those sequence numbers again; you would have to do another sequence request and get another 32,000. So if you're constantly making connections and disconnections, this won't work as a great solution. But if you have long-running connections to the database, using some sort of a connection pooler, this could be a way to give you scalability with sequences. Now, in my database applications, I tend to use integers for all my primary keys, and I only use UUIDs when a unique ID is generated within some application, like a JavaScript frontend. That's the only time where I would use a UUID for that purpose; basically, it's an ID generated outside of the database. But if you want to learn more about this, you can check out this post.

[00:05:47] The next piece of content is "Devious SQL: message queuing using native PostgreSQL." This is from crunchydata.com, and they're talking about wanting to set up a message queuing system within PostgreSQL. So basically they want to be able to capture some input, store it in a table, and then have some processor or worker pull off that work without conflicting with other workers and then be able to delete it. Their general requirements are a table to hold events or items to be processed, something to enqueue, or put items into the table, and something to dequeue and consume those items, and to do so without locking issues. So if you have multiple workers that are dequeuing the work to be done, make sure they don't step on each other's toes. To set up the table, they create a basic queue table with an ID as the primary key, a timestamp, as well as a payload to contain what needs to be processed. Now, to select which rows are going to be processed by each worker, they came up with a command that selects rows from the queue table, limits it to ten, so you get ten records at a time, and uses FOR UPDATE. That means those rows need to be locked while you're pulling them. But the problem is that this locking prevents other workers from pulling from the table at the same time. He ran it as part of a transaction, and you can see that the second backend essentially hangs; it can't do anything. The way you get around that is to use SKIP LOCKED. So instead of doing just FOR UPDATE, you do FOR UPDATE SKIP LOCKED. That way each backend knows it can pull its ten rows and skip over any that are already locked.
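To make that concrete, here is a minimal sketch of a queue table and the locking behavior being described (the column names are illustrative, not the exact ones from the post); the next paragraph extends this into a single delete-and-return statement:

    CREATE TABLE queue_table (
        id          bigserial PRIMARY KEY,
        inserted_at timestamptz NOT NULL DEFAULT now(),
        payload     jsonb
    );

    -- Each worker grabs up to ten rows; SKIP LOCKED makes it pass over
    -- rows another worker has already locked instead of blocking on them.
    BEGIN;
    SELECT *
    FROM queue_table
    ORDER BY id
    LIMIT 10
    FOR UPDATE SKIP LOCKED;
    -- ...process the rows, then delete them before COMMIT...
    COMMIT;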
So this way backend one can pull the unique ten rows it needs and backend two can pull the unique ten it needs. So that's a way to pull the data off. But then the question is, how do you then delete that data? What's the most efficient way to do it? The way they came up with is to use a delete statement with a join, returning the rows that are deleted, and that is what would be consumed. So, for example, you do a DELETE FROM the queue table, so it's a delete statement. Then you use USING to essentially join to the select statement, because it's the select statement that limits it to ten rows and does the FOR UPDATE SKIP LOCKED. So the USING clause essentially gives you a join that provides the limit and the row locks on what will be deleted. Then there's a WHERE clause saying the subquery's ID is equal to the queue table's ID, and RETURNING gives you back the entire contents of the queue table rows. So you're getting all of those rows returned from this delete statement. Essentially, this is a way to delete the rows you need and get all of the data from those deleted rows back to be processed by the worker. And if there's any issue with the worker or something crashes, this should be rolled back and those rows would still be available for the next worker to pick up and process. So overall this is a pretty simple implementation of a message queue within PostgreSQL without any additional applications or tools. If you're interested in that, you can check out this blog post.

[00:08:57] The next piece of content is "PostgreSQL subtransactions considered harmful." This is from postgres.ai, and they're talking about subtransactions. Subtransactions are transactions within existing transactions. So when you have a transaction, you can actually fire off a subtransaction by creating a savepoint, and that allows you to roll back to a given savepoint if you want to preserve work in the course of a long transaction. And here's a more visual cue, where you have the main transaction here, and within it you can have a savepoint, which is the subtransaction that you can choose to roll back to or not. Now, in terms of who is using subtransactions, he mentions a number of popular object-relational mapping frameworks such as Django, SQLAlchemy, Ruby on Rails with its Active Record, Sequel, et cetera; there are a number of them that use this. The other thing to keep in mind is that if you're using PL/pgSQL code, an exception block also uses subtransactions. Now, there are a number of problems you can encounter when using subtransactions. The number one he mentions here is XID growth, that is, transaction ID growth, because each subtransaction gets its own transaction ID. So if you have a lot of these being generated, you're going to dramatically increase the number of transaction IDs that you're using in the system. You want to be cautious of that because autovacuum may have trouble keeping up and freezing those XIDs. The second issue is that you can have a per-session cache overflow. By default, a given session is allowed 64 cached subtransactions. So each session can have one transaction happening and 64 subtransactions happening within it; if you exceed that, you get a dramatic decrease in performance. He shows some examples of that here.
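For reference, here is a minimal sketch of what a subtransaction looks like at the SQL level (the table is hypothetical); ORMs and PL/pgSQL exception blocks issue the equivalent of these savepoints implicitly:

    BEGIN;
    INSERT INTO accounts (id, balance) VALUES (1, 100);

    -- SAVEPOINT starts a subtransaction, which is assigned its own
    -- transaction ID once it modifies data
    SAVEPOINT before_risky_update;
    UPDATE accounts SET balance = balance - 500 WHERE id = 1;

    -- Roll back only the work done since the savepoint; the INSERT survives
    ROLLBACK TO SAVEPOINT before_risky_update;

    COMMIT;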
But there's also this great post from Cybertec called "PostgreSQL Subtransactions and Performance" that you should check out; it was covered on a previous episode of Scaling Postgres as well. So the thing to keep in mind with this 64 limit is that if you have a loop in PL/pgSQL code and you're doing an exception block within it, you could potentially hit this limit very quickly and cause performance problems for your application. The next problem he mentions is unexpected use of multixact IDs. These are generally used when you're locking a row more than once; they talk about the example of SELECT FOR SHARE, but you can also hit it when you're using things like SELECT FOR UPDATE if subtransactions are used as a part of it. He has an example where someone had a lot of performance problems when they were basically doing a SELECT FOR UPDATE, creating a savepoint, and then updating the same row. They were using Django, which automatically uses some savepoints, and they were running into a lot of problems. So that's something you need to be aware of when using subtransactions with things like SELECT FOR UPDATE. The next problem is subtransactions SLRU overflow, and in this case it was basically a significant performance degradation on standbys caused by using subtransactions in a long-running transaction on the primary. They started seeing the standby not responding, and it finally came back and started responding again once the transaction concluded. So that's an issue you could potentially encounter. He also mentions that the cause of Amazon going down during Prime Day 2018 was actually subtransactions. Now, they were using Postgres-based Aurora, but they still had this issue. He gives some recommendations here. First of all, you want good monitoring of things like transaction ID wraparound, wait event analysis from pg_stat_activity, and looking out for long-running transactions, to try to identify potential issues. You can also check whether you have subtransactions within your code base, looking for cases where you have exception blocks in your PL/pgSQL code. And his final recommendation is to experiment, to predict and prevent failures by stress testing systems. Now, he does say that there are a number of things being worked on to improve some of the areas he mentions here, but they are still in progress. This was a very comprehensive post, and if you're interested in this content, I highly suggest you check it out.

[00:13:35] The next piece of content is "How to create lots of sample time-series data with PostgreSQL generate_series." This is from blog.timescale.com, and this is a great post for learning how to use generate_series to generate a series of data for your tables for testing or load testing purposes. They start off very basically, where you can call generate_series using integers and it just prints out a series of numbers, and with a third parameter you can specify what increment should be used when generating those numbers. You can even use dates: you can set a start date and a stop date, as well as, as a third parameter, the interval of how often to generate this series of datetimes. With that, you can also include other columns when you're generating a series. So, for example, here they just have a string that gets printed out alongside the series that was generated.
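A couple of minimal generate_series examples along those lines (the specific values and labels are just illustrative, not the ones from the article):

    -- Integers from 1 to 10, stepping by 2
    SELECT generate_series(1, 10, 2);

    -- One timestamp per hour for a day, with an extra fixed column
    SELECT 'sample row' AS label, ts
    FROM generate_series(
             '2021-09-01 00:00:00'::timestamptz,
             '2021-09-02 00:00:00'::timestamptz,
             interval '1 hour'
         ) AS ts;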
You can even use random numbers to give you a sense of, hey, maybe this is CPU utilization, for example, for testing purposes. Then they show an example of where you can do Cartesian products, basically a cross join, to generate a lot of rows at one time. So for every instance of this row, generate the number of rows indicated here: ten rows times two gives you 20 rows. So you can think of using this technique to generate many, many rows. And this example kind of puts it all together, where they're using generate_series to generate a timestamp, they're using a device ID, also from a generate_series of one to four, and then a random number generator in the third column to track CPU usage. They do some other examples of how to use datetimes to do this as well. So this was a great introductory post about generate_series; if you want to learn more, I definitely suggest checking this out.

[00:15:33] The next piece of content is "Partitioning use cases with PostgreSQL." This is from anayrat.info, and he's predominantly talking about declarative partitioning, the different features it has, and when it's a good time to use them. So he talks about how it can support partitioning by list, hash and range. It can do multilevel or nested partitioning, you can partition on multiple columns, and you can also use foreign and primary keys. Now, he mentions two mistaken reasons people have for wanting to do partitioning. One is that partitioning is necessary as soon as the table size becomes significant, and he says that is not necessarily the case; it's not as if once you have so many gigabytes or so many terabytes you have to partition. Although I'll push back a little bit here, because I think this is one area where I see partitioning frequently used: when the management of the table becomes more difficult. So autovacuum, even after some optimization, still takes a very long time to run because there's just so much data in the table. Partitioning that table will make the management a lot easier, so vacuum can run much more quickly on, say, the active partitions than on ones that are much older, for example if you've partitioned by date. So I think that is a prime case for partitioning. The second reason he mentions not to partition for is spreading data over disks. And I agree with this: with the advent of all the different cloud storage systems, spreading data over disks using partitioning is not a great use case, because you can easily spread your data over multiple disks with current hardware. The use cases for partitioning he does mention are, first, partitioning to manage retention. So if you need to keep data for a certain period of time and you're partitioning by date, you can get rid of old data quite easily without having to go in and run delete statements; you just drop the old partition. The next is partitioning to control index bloat. So, for example, any time data is inserted or updated, there's a chance for an index to become bloated. Now, you can run a reindex, but if there's a particular load hitting your table, I could see how it could be advantageous to partition such that it's easier to reindex the older partitions that are no longer active. The next one he mentions is partitioning for low cardinality. Now, for me, this seemed similar to the reindex use case, so I haven't seen a lot of reason to partition for this; generally, I would rely on a partial index to cover use cases such as this.
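As a rough sketch of the declarative partitioning and retention pattern being discussed (the table and partition names are hypothetical):

    -- Range partitioning by date
    CREATE TABLE measurements (
        device_id  int         NOT NULL,
        created_at timestamptz NOT NULL,
        reading    numeric
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE measurements_2021_08 PARTITION OF measurements
        FOR VALUES FROM ('2021-08-01') TO ('2021-09-01');
    CREATE TABLE measurements_2021_09 PARTITION OF measurements
        FOR VALUES FROM ('2021-09-01') TO ('2021-10-01');

    -- Retention: dropping a whole old partition replaces an expensive
    -- DELETE against one huge table
    DROP TABLE measurements_2021_08;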
The next area he mentions is partitioning to get more accurate statistics. I could see this as a use case: if you want to collect statistics at a more granular level due to the layout of your data, partitioning may assist with that. He also mentions the benefits of partition-wise joins and partition-wise aggregates, although the performance gain there is not significant, so I don't know if that alone would be a reason to do it. And the next reason he mentions is storage tiering, being able to put, say, recent partitions on very fast storage and older partitions on slower storage. I definitely see that as a use case. But if you want to learn more, you can definitely check out this post.

[00:18:57] The next piece of content is "Logical replication tablesync workers." This is from postgresql.fastware.com, and they're talking about some improvements that have been made to Postgres with regard to logical replication in the publisher-subscriber model. It goes into quite a bit of detail about the apply worker and the tablesync workers. It's essentially about the internal workings of logical replication and enhancements that have been done to potentially improve the initial copy time for logical replication as well as error handling. So if you want to learn more about the internals and details of Postgres, you can definitely check out this blog post.

[00:19:37] The next piece of content is "Solving the knapsack problem in PostgreSQL." This is from dev.to in Francesco Tisiot's section, and he's talking about a way to solve the knapsack problem, which is basically: you have a number of items that each have a particular value and a particular weight, and you have a knapsack with, essentially, a capacity of 20. So what are the best items to put into the knapsack to get the highest value? He uses a recursive CTE to solve this problem, and here's the overall recursive CTE that is able to do that type of calculation for you.

[00:20:17] And he walks through exactly how it works and shows you the results, where the combination with the most value is the socks, the hat and the shoes. So if you want to learn more about how to do this within PostgreSQL with a recursive CTE, definitely check out this blog post. The next piece of content is "PostgreSQL: The power of a single missing index." This is from cybertec-postgresql.com, and it is a very long poem about a missing index in Postgres. So this is quite interesting; if you want to read a poem about Postgres, I definitely suggest you check out this blog post. The next piece of content: the PostgreSQL Person of the Week is Dmitry Dolgov. So if you're interested in Dmitry and his contributions to Postgres, you can check out this blog post. And in the last piece of content, we had another episode of the Rubber Duck Dev Show this past Wednesday. This episode was on modern web application frontends, so if you're a developer and want some more long-form content, you can definitely check that out. Our show coming up this Wednesday at 8:00 PM Eastern Standard Time is on working with dates and times. That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.

Other Episodes

Episode 55

March 18, 2019 00:07:21

Replicating Data, SQL Functions, High Availability, Locking Issues | Scaling Postgres 55

In this episode of Scaling Postgres, we review articles covering methods to replicate data, SQL functions, tools for high availability and locking issues. To...


Episode 339

October 27, 2024 00:11:32

Postgres Goes Parquet | Scaling Postgres 339

In this episode of Scaling Postgres, we discuss pg_parquet allowing Postgres to read and write parquet files, other useful extensions, open source bounties, and...


Episode 144

December 13, 2020 00:12:09

Postgres Migration, Bulk Data Loading, Updates vs. Upgrades, UUID Benchmarks | Scaling Postgres 144

In this episode of Scaling Postgres, we discuss a Mongo to Postgres migration, the best way to bulk load data, running an update vs....
