PgBouncer Breaking Change | Scaling Postgres 323

Episode 323 | July 07, 2024 | 00:23:47
Scaling Postgres

Hosted By

Creston Jamison

Show Notes

In this episode of Scaling Postgres, we discuss a breaking change in the new version of PgBouncer, the release of PostgreSQL 17 Beta 2, an examination of the new built-in collation provider in PG 17, and Notion's data lake.

To get the show notes as well as get notified of new episodes, visit: 

https://www.scalingpostgres.com/episodes/323-pgbouncer-breaking-change/

Want to learn more about Postgres performance? Join my FREE training called Postgres Performance Demystified here: https://www.scalingpostgres.com/courses/postgres-performance-demystified/

 

Episode Transcript

[00:00:00] There's a new version of PgBouncer that was just released, and in this version they do have a breaking change that's not immediately obvious in their announcement here. But before we get into that, I hope you, your friends, family and coworkers continue to do well.

So our first piece of content is PgBouncer 1.23.0 is released. This is from pgbouncer.org. And in order to see more detail about it, we're going to click on the changelog, and here we see the three main features that were listed in the announcement. The first is adding support for rolling restarts, so SIGTERM doesn't cause an immediate shutdown of PgBouncer anymore. It now does a super safe shutdown, waiting for all clients to disconnect before shutting down. This allows rolling restarts to happen, and they are classifying this as a minor breaking change. But you can still get the old behavior if you use SIGQUIT instead of SIGTERM. And actually, if you look at the PR, if you send one SIGTERM message and then you send another SIGTERM message, that also immediately shuts down PgBouncer as well. So just something to be cautious of when you're upgrading to version 1.23 and higher. The next new feature is introducing user maps for the cert or peer authentication methods, so that's great to have. And then there's added support for replication connections through PgBouncer. Before, PgBouncer just used to disallow these connections. Now, as long as you're using, I think, just session-based pooling, it should allow these replication connections to go through and be proxied. But if you want to learn more about that, definitely feel free to check out the release notes or the changelog here.

[00:01:47] Next piece of content: PostgreSQL 17 Beta 2 is released. This is from postgresql.org. So shortly after the release of Beta 1, we now have Beta 2. There are only a few changes here: they're correcting a behavior of the ON EMPTY clause when it is not present in an SQL/JSON query, they fixed an issue with pg_logical_slot_get_changes, and there are several fixes around the new data structure for vacuum-related data. So if you are testing out Postgres 17 before it's released, you can now check out Beta 2.

[00:02:18] Next piece of content: Looking at the new built-in collation provider in PostgreSQL 17. This is from postgresql.verite.pro, and with version 17 they are now introducing a new type of collation provider, a built-in one, meaning built into Postgres, so you're no longer relying on libc or even a separate ICU library. This new built-in provider offers a C locale and a C.UTF-8 locale, and this post focuses pretty much exclusively on the C.UTF-8 locale, because he believes this gives the best of all worlds. The advantage of the C.UTF-8 locale is that it gives you accurate results with non-English languages. Now, in terms of C.UTF-8 being the best of all worlds, he's looking at this across three different areas. The first is OS independence, which basically means: how portable is it? As you can see here, the libc collations are not that portable, because you get version changes, and going to a different operating system, things will be collated differently, whereas ICU collations, as well as the C or C.UTF-8 built-in collations, are much more portable. Next, he looks at bytewise comparisons. This is basically doing a binary sort, and all the different C collations provide this capability, of which the built-in C.UTF-8 is one. And lastly, supporting the Unicode repertoire, so being able to support non-English languages very well. You can see the basic C locale is not very good at this; it basically only handles English fairly well. But using the built-in C.UTF-8 gives you that capability. And I think you see this mostly in cases where you're trying to do an upper or a lower of something. Those functions are much more accurate with C.UTF-8 compared to a plain C locale.
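To illustrate the kind of difference he's describing, here is a minimal sketch; the test string is arbitrary, and "C" and pg_c_utf8 are the collation names PostgreSQL 17 exposes for the plain C and built-in C.UTF-8 locales:

    -- Case mapping under the plain C collation only folds ASCII letters,
    -- while the built-in C.UTF-8 collation (pg_c_utf8) uses Unicode case mapping.
    SELECT lower('ÉTÉ' COLLATE "C");        -- 'ÉtÉ': only the ASCII 'T' is lowercased
    SELECT lower('ÉTÉ' COLLATE pg_c_utf8);  -- 'été': accented letters are handled too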
Now, if you wanted to use this new built-in collation, how do you do it? When you're initializing your database cluster, you can specify the locale provider, making sure it's set to builtin, and then you specify your locale as C.UTF-8. That will make it the default locale for this entire database cluster and any new databases you create with it. But if you already have an existing database cluster, you can create a new database using this new type of collation. So you create the database, you specify the locale provider to be builtin, and you specify the built-in locale to be C.UTF-8. And he also recommends specifying the template as well and using template0, as opposed to the normal default, which is template1, because template1 could have been altered with some changes that might be incompatible with the new locale of this database. So he recommends just using template0. But you can even use this type of locale for single indexes, or provide collations for certain queries. So in this example, he creates an index where he's specifying the collation to be pg_c_utf8, or you can even use that in your queries so that this type of new index would be considered by the planner.

Now, what about indexing and search performance? Because we looked at this in a previous episode of Scaling Postgres where Dipesh was looking at some of the performance, and he showed that some of these built-in providers were a little bit slower than, say, the libc collations.

[00:05:42] But first he looks at index creation and how fast that happens. And you can see that the slowest is the English UTF-8 locale from the libc provider, and it's a half to maybe five times slower than the other providers and their collations. But interestingly, both C and C.UTF-8 using the built-in collation provider are super fast. They're the fastest ones, so that's awesome. And then looking at the query performance, it's not as big a difference, but you can see the C provider is the fastest. The C.UTF-8 collation from the built-in provider is just in second place by a small amount, even faster than the libc provider's C.UTF-8. So that's great. And he says the reason for this is that when Dipesh was doing his testing, he was using COLLATE clauses. But there is a performance difference between using collation clauses on individual statements and using whatever the database's locale is set to. You'll get greater performance sticking with the locale of the database, because there are cache lookups happening, and here's a link to the discussion about that. So basically you'll get better performance today by sticking with the locale of your database as opposed to using COLLATE clauses, but they say that could potentially change in the future. But of course the big disadvantage of C-type collations is the sorting. They do binary sorting, and it doesn't look that great compared to a linguistic sort. The way you get around that is to specify a COLLATE clause for how you want your results ordered; maybe it's en_US.UTF-8, for example. Or, he said, there's also the Unicode collation that's, quote, designed to work well across all languages.
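Pulling those pieces together, here is a minimal sketch of what that setup could look like on PostgreSQL 17; the database, table and column names are hypothetical, and the last query assumes an ICU-enabled build for the predefined unicode collation:

    -- Cluster-wide default at initialization time:
    --   initdb --locale-provider=builtin --builtin-locale=C.UTF-8 ...
    -- Or per database on an existing cluster:
    CREATE DATABASE app_db
        LOCALE_PROVIDER = builtin
        BUILTIN_LOCALE  = 'C.UTF-8'
        TEMPLATE        = template0;  -- template0 avoids inheriting incompatible settings

    -- Per index or per query, using the built-in collation explicitly:
    CREATE INDEX ON users (name COLLATE pg_c_utf8);
    SELECT * FROM users ORDER BY name COLLATE pg_c_utf8;

    -- Linguistic ordering only where you need it:
    SELECT * FROM users ORDER BY name COLLATE unicode;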
And he even talks about changing the collation based upon the language of the person using the application. He talks about doing it in JavaScript, I guess basically sorting in JavaScript as opposed to the database, but if your app tracks the language of the person using it, you could use that language's collation for the queries sent to the database as well. And lastly, of course, binary-sorted indexes are faster, but this can lead to an even greater improvement, because a lot of times when you are doing a left-anchored search for something, if you're using non-C collations, you generally have to use varchar_pattern_ops or text_pattern_ops to get an index to be used. However, that's not necessary with a C collation. So the built-in C.UTF-8 collation does benefit from this, in that you don't have to create an index with this operator class in order to get the search benefits. My understanding is that no changes were made to the indexes, but with one collation this particular query returned in 4 seconds, whereas the query using a C.UTF-8 collation returned in 31 milliseconds, because it could use the index to satisfy the query. So that's another benefit of a C-type collation. So this was a really great blog post. I definitely encourage you to check it out, and I'm really happy to see the performance is good when you're sticking with whatever the database's locale provider is. And even though the built-in pg_c_utf8 looks great, the only disadvantage is it doesn't do so well at linguistic sorting or what he's calling advanced comparisons, meaning case-insensitive and accent-insensitive comparisons. But that basically means if you have a need for those two, then go ahead and use a language-specific UTF-8 collation or even the Unicode collation. But check out this blog post if you want to learn more.

Next piece of content: Building and scaling Notion's data lake. This is from notion.so, and Notion has a very large multi-terabyte database that they have sharded. So you can see here, prior to 2021 they had a single Postgres instance that they then sharded into 32 Postgres instances, each with 15 logical shards. So a lot of different individual shards. But today they have 96 Postgres instances, with each instance containing five logical shards. So again, close to 500 shards. But this post is talking about their data warehouse architecture, or their data lake architecture. So again, around the 2021 timeframe, they had all of these operational data stores, all of the shards in Postgres. They then had 480 Fivetran connectors running against these shards to replicate the data into Snowflake, which is a data warehousing solution.

[00:10:27] But they were running into challenges with this layout. Basically, monitoring and managing 480 Fivetran connectors was a super challenge, along with trying to resync them during Postgres resharding operations or upgrades. Second, ingesting data into Snowflake became slower and more costly, and that's primarily due to the update-heavy workloads. They say that Fivetran and Snowflake worked really well for predominantly append-only data, but due to their heavy update workload, it was really a challenge. Because if you think about it, in Notion you create, you know, a document of sorts and you keep making changes to that document. Well, that's a very update-heavy operation.
And lastly, their data transformation logic became super complex for what they were trying to do, and they needed greater capabilities in this area. So basically they decided to redesign their data lake of sorts. Now, they said they weren't entirely replacing Snowflake or Fivetran; those continue to work really well for predominantly insert-heavy workloads, but not necessarily update-heavy ones. So for those they went to this different structure. From their Postgres shards, they used Debezium and change data capture to send that data into Kafka. Then, using Hudi and Spark, it gets stored within Amazon S3, which they can then use for their analytics and reporting data warehouse, or send to an Elasticsearch cluster, different key-value stores, or even vector databases. So for their data repository, they basically chose S3 to store all of this data. In terms of their processing engine, they chose Spark, and particularly the PySpark framework. They chose to do incremental ingestion over snapshots, or really a hybrid in certain cases. So they still do snapshots on occasion, but normally they rely on incremental ingestion using change data capture. So they're using the Kafka CDC connector for Postgres to send the data into Kafka, and then using Apache Hudi to get the data from Kafka into S3. And they did evaluate Apache Iceberg and Databricks Delta Lake, but they ultimately settled on Apache Hudi. So this blog post continues to go into a lot of detail on it. If you are building a data lake or data warehouse, perhaps you'd like to review this blog post to understand more about how they're doing it.

[00:12:58] Next piece of content: Making the world my single point of failure with PostgreSQL. This is from gabrielebartolini.it, and this post is about CloudNativePG, specifically the new version that's coming out, 1.24. And this offers the capability to, quote, declaratively control the switchover of Postgres clusters across different regions. So as of today, or with the current release of CloudNativePG, you just have a single cluster running. So a single Kubernetes cluster with CloudNativePG as the operator to run your Postgres cluster within it. But this new version allows you to replicate this particular cluster to a separate region. And in their example here, they're imagining you have a central Europe Kubernetes cluster that is active. So it is running a Postgres primary and a couple of replicas within Kubernetes using the CloudNativePG operator. But it then uses replication to keep a second redundant cluster in a passive state that it could be failed over to. And the operator allows you to do a controlled switchover, where you demote the primary cluster and promote the passive cluster. So here's what it looks like. In a single Kubernetes cluster you have your users talking through a load balancer to your applications, and those applications talk to PostgreSQL. It has three instances: one primary, two replicas. Backups are continuously happening. It looks like it can use a combination of log shipping of the WAL files to the secondary cluster, or do streaming replication to it. So in the example you see down here, this is the passive cluster. It does a continuous backup from the primary, and the primary in this cluster is considered the designated primary. So it's not a primary yet, until there's a controlled failover event to promote this cluster to be the primary Kubernetes cluster.
And the main thing about this is that it's all done declaratively, meaning the CloudNativePG operator takes care of handling all the orchestration of this. So he goes through and talks through all the declarative configuration you do to set up this type of cluster environment. And he says you don't have to stop at two; you could have two or more clusters running in different regions that support this type of behavior. But if you're interested in this, definitely check out this blog post.

[00:15:25] Next piece of content: Using transaction chaining to reduce server round trips. This is from bugfactory.io, and he's talking about a scenario where you have multiple transactions that need to be executed one at a time. So you start a transaction, do some work, then you commit it; you start a transaction, do some work, maybe you have to do a rollback; then you start another transaction, do some work, and then commit that. And presumably this is all one sequence of events that must happen, so they're always done in the same order. Maybe not necessarily a rollback, maybe it's a commit, but basically the same sequence of transactions needs to happen. The disadvantage of this is that each start transaction is a single command sent to the database to put the session in a particular state. So you have to deal with network round trips for this first one, the second one, and the third start transaction. But to make it more efficient, you can chain these transactions together. So you start a transaction, do whatever work you need to do, and then you use COMMIT AND CHAIN. What that does is commit the work, but it also initiates a new transaction without you having to send a start transaction. The AND CHAIN essentially starts a new transaction using the same characteristics, like isolation level, that the previous transaction had. Then it does some work. Then you could do a ROLLBACK AND CHAIN; again, you don't need to specify a start transaction because it has already initiated a new transaction. So this is a great way to reduce network round trips if you have this type of sequence in your application. So if you want to learn more, definitely check out this blog post.
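Here is a minimal sketch of what that chaining looks like in SQL; the tables and values are hypothetical:

    BEGIN;
    INSERT INTO orders (status) VALUES ('new');
    COMMIT AND CHAIN;     -- commits, then immediately starts a new transaction
                          -- with the same characteristics (e.g. isolation level)

    UPDATE orders SET status = 'checked' WHERE status = 'new';
    ROLLBACK AND CHAIN;   -- undoes this step but stays inside a fresh transaction

    INSERT INTO audit_log (note) VALUES ('sequence finished');
    COMMIT;               -- ends the chain without starting another transaction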
[00:17:07] Next piece of content: there was another episode of Postgres FM last week. This one was on soft deletion, and Nikolai and Michael discussed different ways of doing soft deletes. I think the conclusion from the episode is that there are two major ways to do it. One, you do it on an individual table and you add, perhaps, a deleted_at column that stores a timestamp of when that data was deleted. That means your queries always have to look for where deleted_at is null, meaning it hasn't been populated with a date. And personally, that's what I've historically used. Now, Michael did mention he's seen this pattern used in applications he's worked with, and generally what they do is have some sort of scheduled job, maybe a cron job, that goes in and deletes that data periodically. So the use case for this type of soft deletion is to help protect users from themselves, and I have done this exact pattern for that reason. For example, I have a product that is a form builder, and if someone inadvertently deletes a form they're working on, I don't want to have to restore all the data and metadata associated with that. So we implemented a soft delete feature so that we could quickly just update that one value to get the form back for them. Now, exactly as Michael mentioned, this is normally something that happens pretty immediately: oh no, I deleted this. So that means you can eventually delete this data, and if this is just a safeguard to protect users from themselves, then this is a great way to implement it. But if this is more a scenario where you want to do a soft delete because you want to keep the data around or archive it, then that probably calls for a different solution.

Now, they did mention having shadow tables that exactly mirror the primary table. So in my example, if you have a form table, that would mean creating an exact shadow of that table and moving the data from the primary table to the shadow table. This movement is generally done using a trigger, although I have seen it done at the application level as well. But Nikolai mentioned this was a headache, a nightmare, not great, because now you have two tables that always have to stay in sync, and remembering to add a column to both tables at the same time is really hard to do. What he advocates is actually using a single table for all of this type of data you want to archive: basically have a column that stores, say, what operation happened and the table it happened to, and then store all of the different data in a JSONB field. That way schema changes don't really matter. He says you can also archive updates by either having two separate JSONB columns, one for what it was before and one for what it was after, or even organizing it a different way within the same JSONB, to hold both what the data was before and what it was after. So I would say this option is great for archiving when you're not really wanting to bring the data back into the table, because that is much harder to do. Having a single column that defines whether something's deleted or not, or maybe a status column that determines what data is visible, is great for bringing data back if you ever need to. But the archiving solution of a separate table where you send deleted data is great if you don't need to bring it back or restore it, but you do want to retain it for a period of time after it's been deleted. But if you want to learn more about what they discussed, I definitely encourage you to listen to the episode or check out the YouTube video down here.
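For reference, here is a minimal sketch of the deleted_at pattern described above; the table and values are hypothetical:

    ALTER TABLE forms ADD COLUMN deleted_at timestamptz;

    -- "Delete" by stamping the row instead of removing it:
    UPDATE forms SET deleted_at = now() WHERE id = 42;

    -- Live queries always filter out the stamped rows:
    SELECT * FROM forms WHERE deleted_at IS NULL;

    -- Undoing the delete is a single update:
    UPDATE forms SET deleted_at = NULL WHERE id = 42;

    -- A scheduled job (e.g. cron) can eventually remove old soft-deleted rows:
    DELETE FROM forms WHERE deleted_at < now() - interval '30 days';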
Next piece of content: Key PostgreSQL configuration parameters for enhanced performance. This is from stormatics.tech, and they're showing a particular instance, a two-core, eight-gigabyte system, and they said, alright, let's run pgbench against it to get some benchmarks for performance in an untuned state. Then they tuned it, changing approximately twelve or so parameters here, and these are really important ones to change if you're trying to optimize Postgres. As a result of that, it improved performance by approximately 30%. So this is a great example of the bang-for-your-buck configuration options you can change to get better performance out of Postgres. So check that out if you're interested.

Next piece of content: Database troubleshooting for developers at Brighton Ruby. This is from karenjex.blogspot.com, and she is presenting a talk she gave in blog format. She basically discusses different issues that developers can run into, such as: help, I can't connect; [00:21:32] PostgreSQL isn't running, what do I do; I've run out of space; what do I do if my queries are too slow; help, I've lost a table; I've lost data. And a lot of this seems to be answering the questions: are you in the right database? Is your search_path set appropriately so you can find what you're looking for? And do you even have the permissions to find what you're looking for? But check out this blog post if you want to learn more.

The next one is also from karenjex.blogspot.com, and this one talks about how Postgres is misused and abused in the wild. This is a presentation covering a lot of different quotes she's seen that reflect a particular expectation of how Postgres works when it doesn't really work that way. Like people saying, oh, we have to use a very specific version of Postgres or things don't work. Now, that may be true of some applications. Or things like this with regard to backup and restore: what do you mean I can't do point-in-time recovery? I run a pg_dump every day. So just differences between expectations and reality. Or: we need to be able to recover to any point in time, so we take backups every 20 minutes. Or: we don't need a database backup tool, we take daily snapshots of our data volumes. Or: the data directory was full, so we deleted the WAL files to make some space. So definitely an interesting presentation. I encourage you to check it out if you're interested.

And the last piece of content is Mastering PostgreSQL for Rails: an interview with Andy Atkinson. This is from andyatkinson.com, and the main reason why I'm mentioning this one in particular is that Andy's book, High Performance PostgreSQL for Rails, is officially released now. For a number of months it's been available for sale in a pre-order state where you could get an advance beta copy, but now the full book has been released. So if you're interested in that, you can click the link here to find out more and purchase the book.

[00:23:21] I hope you enjoyed this episode. Be sure to check out scalingpostgres.com, where you can find links to all the content mentioned, as well as a way to sign up to receive weekly notifications for each episode. There you can also find an audio version of the show, as well as the full transcript. Thanks, and I'll see you next week.

Other Episodes

Episode 183

September 20, 2021 00:14:50

Broken Indexes, Trademark Issues, Percentile vs. Average, Logical Improvements | Scaling Postgres 183

In this episode of Scaling Postgres, we discuss one cause of broken indexes, Postgres trademark issues, percentiles vs. averages and logical replication improvements. To...

Episode 113

May 11, 2020 00:13:06

arm64 with apt, Contributors, Backup Manifest, Now Functions | Scaling Postgres 113

In this episode of Scaling Postgres, we discuss arm64 package support for apt, annual Postgres contributors, backup manifests & verifications and different now functions....

Episode 297

January 07, 2024 00:20:38

Postgres In 2024 | Scaling Postgres 297

In this episode of Scaling Postgres, we discuss hopes for Postgres in 2024, whether you need foreign keys, incremental backups, and five ways of...
