Episode Transcript
[00:00:00] When I saw the article that I made the lead article this week, I couldn't help but think that making money with open source is hard. A lot of companies try to build moats and do licensing that is open source adjacent, trying to protect the investments they've made, while different initiatives seem to come out of nowhere, potentially challenging you. But I hope that you, your friends, family and coworkers continue to do well. Our first piece of content is Introducing pg_timeseries: open-source time series extension for PostgreSQL. This is from tembo.io. And of course, as soon as I read this, I started thinking about the ramifications of this for TimescaleDB. Basically, Tembo has created an open source extension they are calling pg_timeseries, and it's really interesting how they did it. Now, the reason they did this is because they create Postgres development stacks. Like, if you want an analytical database, or maybe a geographical information services database, they give you a solution stack for that. People were wanting a time series stack, and they say, quote, why not just power your stack using TimescaleDB? The problem is, Timescale's licensing would restrict certain features such as compression, incremental materialized views and bottomless storage. And there may be others, I'm not sure. So basically they set out to create their own extension, and they're mostly using the built-in features of Postgres: native partitioning, indexes, materialized views, and window and analytical functions. Then it brings in a suite of extensions to add functionality on top of that: pg_partman for partition management, pg_cron for scheduling jobs, columnar, which is from Hydra, for compression (and I'm assuming it's basically just column-based storage), pg_ivm for incremental view maintenance of materialized views, and pg_tier for offloading older partitions into archival storage. Now, what their pg_timeseries extension does is put these individual parts together into a time series solution, and it gives you a DSL for engaging with it. So, for example, you can create a table, say measurements, partition it by some timestamp, and then enable it to be a time series table. Presumably it does a lot of things for you, like automatically creating range partitions, and then, as things age out, you can convert them to columnar storage or delete them. So it basically looks like a set of convenience functions, because the base extensions already have all the functionality being used here; pg_timeseries just glues them together and puts a framework and a DSL on top. I definitely see this as a shot across TimescaleDB's bow, of course, but it's interesting to see what will actually become of this extension. I'm assuming the TimescaleDB solution is definitely more performant and can probably compress your data better, because they've invested a lot more time in it. But if you're looking to get started with a time series database solution, does this give you enough to at least get started? Check out this blog post if you want to learn more.
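To give a rough idea of what that DSL looks like, here is a minimal sketch of my own, not taken from the post. The table and column names are made up, and the helper functions (enable_ts_table, set_ts_compression_policy, set_ts_retention_policy) are approximations of the kind of convenience functions described; the real pg_timeseries API may use different names.

    -- Hypothetical example: a range-partitioned table handed over to the extension.
    CREATE TABLE measurements (
        metric_name  text NOT NULL,
        metric_value numeric,
        recorded_at  timestamptz NOT NULL
    ) PARTITION BY RANGE (recorded_at);

    -- Illustrative call: the extension takes over creating new range partitions.
    SELECT enable_ts_table('measurements');

    -- Illustrative calls: as partitions age out, convert them to columnar
    -- storage or eventually drop them (names are assumptions, not the real API).
    SELECT set_ts_compression_policy('measurements', '30 days');
    SELECT set_ts_retention_policy('measurements', '1 year');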
[00:03:25] Next piece of content: That time PostgreSQL said, no thanks, I don't need your index. This is from JY 13 on Hashnode.dev, and they're talking about a query that was running pretty fast, but over time it kept getting slower and slower and slower. So they started investigating what the problem might be. They're just selecting from a single table, and the WHERE clause has three conditions in it: they're checking a business ID, a plan ID, and a status. They looked at the EXPLAIN plan and saw that an index on the plan ID was being used, with the other two columns, the business ID and the status, being applied as filters. The problem is they don't have that many different plan IDs in the data, so the index wasn't very selective; it wasn't a great index to use. They didn't explicitly show it, but it sounded like they decided to create a multicolumn index with all three columns in it. Then they ran the query again and got the exact same plan; it wasn't using the new index. So of course they thought, what the heck is going on? They talked about a few different things, but basically they came to the conclusion that maybe the statistics were wrong. Okay, well, then how do you update those? You would run ANALYZE, and they were really afraid of running ANALYZE in production.
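As a sketch of the situation, not the post's actual schema: the table and column names below are hypothetical, but the shape of the problem and the multicolumn index fix look something like this.

    -- The slow query: three equality conditions, but only a single-column,
    -- low-selectivity index on plan_id existed.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM subscriptions
    WHERE business_id = 42
      AND plan_id     = 7
      AND status      = 'active';

    -- A multicolumn index covering all three conditions (built without
    -- blocking writes).
    CREATE INDEX CONCURRENTLY idx_subscriptions_biz_plan_status
        ON subscriptions (business_id, plan_id, status);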
[00:04:43] We all watched feverishly as my colleague typed the longest two words I've ever encountered: ANALYZE subscription. And of course it ran super fast, and there was actually no need to worry, because autovacuum should be running ANALYZE anyway. But once the command ran, the query started using the index they had created, and if you look at the latency chart, you can see it dropped like a rock. That's why it's highly important to keep Postgres statistics up to date, so the planner makes the best decisions when running your queries. After simply adding the multicolumn index and running an ANALYZE, the query ended up being 70 times faster. Check out this blog post if you want to learn more. And if you want to learn more about Postgres performance, I do have a free mini course called the PostgreSQL Performance Starter Kit. I talk about how to use EXPLAIN and pg_stat_statements to help understand what's going on in your system and start to optimize your queries. You can find a link in the description below.
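Going back to the fix itself, it really is as simple as it sounds. Assuming the table is named subscription as quoted above, the statement and a quick way to check when statistics were last refreshed look like this:

    -- Refresh the planner statistics for this one table so the new
    -- multicolumn index actually gets chosen.
    ANALYZE subscription;

    -- Optionally check when the table was last analyzed, manually or by autovacuum.
    SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'subscription';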
[00:05:45] Next piece of content: Postgres planner quirks: how to fix bad JSONB selectivity estimates. This is from pganalyze.com, and this is Lukas's Five Minutes of Postgres. As you can tell, this is another statistics problem. In this episode, Lukas mostly talks about this particular post, PostgreSQL stories: from slow query to fast via stats, which is from render.com. They were getting poor performance from the query we see here: selecting from an events table, joining to a Postgres DBs table, and matching a value in a JSONB column against the database ID column on that Postgres DBs table. Now, they couldn't figure out what was taking so long about this query. They must have mentioned how slow it was in part one. Presumably they already had a B-tree expression index on this JSONB column for the service ID, but the query was still running really slowly; I think they said it was so slow they didn't even want to run EXPLAIN ANALYZE on it. They tried a number of different things to optimize it, which I'll let you read, and Lukas covers it in his Five Minutes of Postgres, but ultimately, like the last post, it came down to statistics. The statistics that Postgres collects for a JSONB column basically cover the whole value, and the planner can't really use them effectively to estimate conditions on values inside the JSONB. So not only do you need an index on the JSONB expression, you may also need to add some statistics as well. Basically, they created statistics on that JSONB column for the service ID component. Once that was done, they were getting what they expected for the most common values, and running the exact same query again, with nothing else changed other than adding statistics for this column, it now runs in 650 milliseconds.
[00:07:36] And basically this is the advice that Lukas gave in his episode: if you're using the contains operator on a JSONB column and you're not seeing the performance you expect, go ahead and use a B-tree expression index for the particular value you're looking at, so you can use an equality comparison like they do here, and then potentially create statistics on that expression as well. Check out these pieces of content if you want to learn more.
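Here is a rough sketch of that recipe with hypothetical table, column and key names (events, attributes, service_id); the posts' actual schema differs. The idea is to swap the contains operator for an equality on the extracted value, index that expression, and give the planner statistics on it:

    -- Instead of: WHERE attributes @> '{"service_id": "abc"}'
    -- use an equality on the extracted value and index that expression.
    CREATE INDEX idx_events_service_id
        ON events ((attributes ->> 'service_id'));

    -- Optionally add extended statistics on the same expression
    -- (single-expression statistics are supported since Postgres 14),
    -- then refresh statistics.
    CREATE STATISTICS events_service_id_stats
        ON (attributes ->> 'service_id') FROM events;
    ANALYZE events;

    -- The rewritten predicate the planner can now estimate well.
    SELECT *
    FROM events
    WHERE attributes ->> 'service_id' = 'abc';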
[00:08:00] Next piece of content: Geographically distributed Postgres for multi-tenant applications. This is from xata.io. I thought this was an interesting concept. I'm not sure if I would implement it, but basically they have a scenario where your application is distributed to users across the world; maybe you're using Cloudflare and you have app servers out on the edge. You might want to have the databases close to where they are too, because otherwise you're going to have a worker, say, in Sydney, Australia, trying to contact your database that's on the east coast of the United States, and you're going to have 200 milliseconds of latency for each query that worker has to make. That's definitely not good. So the pattern they are thinking about is this: you have your control plane, which is global tables that don't really have tenant-based data or account data in them, although I would argue users could potentially be a part of this. But basically, continuing with the analogy, you would put the tenants' data in particular regions. So you have a database node on the east coast that would have all of the tenants from the US, you would have a node in Europe that would have the European tenants, and a node in the Asia Pacific region for all of those tenants. And they say, well, what if you need to query more than one tenant? You could use the postgres_fdw foreign data wrapper, create foreign tables from each of the regional database servers on a control node, and make them partitions of a partitioned table. So you could query this global pages table from the control node and it would pull the data from the regional database servers, and they have some code showing how you could set this up. They do talk about some of the limitations: there is some performance limitation, it's definitely complicated, and of course write transactions across regions are not ACID compliant, although everything happening within a particular region would be ACID compliant. But I just thought this was a pretty interesting take, and if you have the need for geographically distributed Postgres, you may get some ideas from this blog post.
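To make the foreign-data-wrapper piece more concrete, here is a minimal sketch of my own for the control node, with made-up server names, credentials, and a pages table partitioned by a region column; the post's actual schema and setup will differ.

    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    -- One foreign server per regional database (hostnames are placeholders).
    CREATE SERVER region_us_east
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'us-east.db.internal', dbname 'app');
    CREATE USER MAPPING FOR CURRENT_USER SERVER region_us_east
        OPTIONS (user 'app', password 'secret');

    -- Global partitioned table on the control node, partitioned by region.
    CREATE TABLE pages (
        tenant_id bigint NOT NULL,
        region    text   NOT NULL,
        body      text
    ) PARTITION BY LIST (region);

    -- Each regional server's table becomes a foreign-table partition,
    -- so a query on pages fans out to the regional databases.
    CREATE FOREIGN TABLE pages_us_east
        PARTITION OF pages FOR VALUES IN ('us-east')
        SERVER region_us_east
        OPTIONS (schema_name 'public', table_name 'pages');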
[00:10:09] Next piece of content: there was another episode of Postgres FM last week. This one was on minor releases, and they talked a little bit about how often these releases come out and how it's usually not too hard to upgrade to them. It does take some downtime, because generally you need to shut down the database, update the binaries and then restart the database system again, although there are some providers that can do this more seamlessly for you. They also mentioned that if you do have to do a restart, go ahead and do a checkpoint or two before the restart to minimize the restart time required. And of course, when you're doing a minor release update, you should definitely check the release notes, because there are times when additional work is required, such as reindexing certain indexes after you apply the update, or, in the most recent release, 16.3, recreating some system views to fully remedy a CVE that was discovered. If you want to learn more you can listen to the episode here or watch the YouTube video down here.
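That checkpoint tip amounts to running it manually right before the restart, so the shutdown checkpoint has less dirty data left to flush; a quick sketch:

    -- Issue a manual checkpoint (or two) immediately before restarting,
    -- so the shutdown checkpoint finishes quickly and the restart window stays short.
    CHECKPOINT;
    -- Then restart with your usual tooling, e.g. pg_ctl restart or
    -- systemctl restart postgresql, outside of SQL.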
[00:11:10] Next piece of content: Waiting for Postgres 17: merge and split partitions. This is from depesz.com, and we've talked about this previously, that the ability to merge and split partitions is coming for list and range partitioning in Postgres, but we didn't really look at the commit notes like Depesz shows here, where they admit: this commit comprises quite naive implementation which works in a single process and holds the access exclusive lock on the parent table during all the operations, including tuple routing. This is why this new DDL command can't be recommended for large partitioned tables under high load. So basically this is an early, early version of this functionality for merging and splitting partitions. Hopefully in future releases they will reduce the locking required, to make it possible to do this type of work on tables with more data or more partitions. Check this out if you want to learn more; Depesz does go ahead and test it to see how it works.
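For reference, the commands Depesz tests look roughly like the following; treat the exact syntax as my approximation of the committed patch, with hypothetical table and partition names, and remember the feature is still under development.

    -- Split one list partition into two (names and values are illustrative).
    ALTER TABLE orders
        SPLIT PARTITION orders_eu INTO (
            PARTITION orders_de FOR VALUES IN ('DE'),
            PARTITION orders_fr FOR VALUES IN ('FR')
        );

    -- Merge two partitions back into one.
    ALTER TABLE orders
        MERGE PARTITIONS (orders_de, orders_fr) INTO orders_eu;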
[00:12:10] Next piece of content: In-memory disk for PostgreSQL temporary files. This is from pgstef.github.io, and he noticed temporary files being created during a parallel index creation and wanted to try to speed up that index build. So he said, I wonder if I can use an in-memory file system to do that, specifically tmpfs, and this post shows him going through the process of doing it. Ultimately it only sped up the process by maybe 10 to 15%, so I don't know if it's quite worth investing the time to do this, but if you want to learn more you can definitely check out this blog post.
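The SQL side of that experiment looks something like the sketch below; the mount point, tablespace, table and index names are all hypothetical, and creating the tmpfs mount itself happens at the OS level, not in Postgres. This is my own illustration of the general technique rather than the post's exact steps.

    -- Assumes an OS-level tmpfs mount already exists at /mnt/pg_temp and is
    -- owned by the postgres user; CREATE TABLESPACE needs superuser.
    CREATE TABLESPACE temp_in_memory LOCATION '/mnt/pg_temp';

    -- Direct temporary files (sorts, hashes, index builds that spill to disk)
    -- to the in-memory tablespace for this session.
    SET temp_tablespaces = 'temp_in_memory';

    -- Then build the index; its temporary files now land on tmpfs.
    CREATE INDEX idx_big_table_col ON big_table (some_column);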
[00:12:47] Next piece of content: The life of a bug: from customer escalation to PostgreSQL commit. This is from enterprisedb.com, and it describes an issue a customer was having with subtransactions with regard to the SLRU cache. It talks a little bit about the investigation, the analysis of it, and eventually creating the patch for it. This was covered back in March by Lukas in his Five Minutes of Postgres, talking about the benefits of this patch and some of the performance improvements; for example, he found a 2.5 to 3 times performance improvement after this patch. And specifically, this link here goes to a YouTube video of him describing the implementation of the patch. So if you want to learn more about that, definitely check out this blog post. Next piece of content: What we're excited about in Postgres 17. This is from timescale.com, and it's a summary of the things they're looking forward to in Postgres 17: pg_createsubscriber, which can convert a physical replica into a logical one; support for merging and splitting partitions, although as we discussed, a lot more work needs to be done for real world use cases; support for incremental backups; enabling the failover of logical replication slots, which is definitely huge; allowing EXPLAIN to report optimizer memory usage; and then a few other commits that they worked on as well. You can check out this blog post if you want to learn more.
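One of those Postgres 17 items is easy to picture; as I understand the in-development feature, EXPLAIN gains a MEMORY option to report planner memory usage, roughly like this (the option name and output may still change before release, and the query is just a placeholder):

    -- Report how much memory the planner used for this query
    -- (new EXPLAIN option expected in Postgres 17).
    EXPLAIN (MEMORY) SELECT * FROM events WHERE id = 1;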
[00:14:14] And the last piece of content: Scaling Clubhouse from 10,000 to 10 million users in six months with Postgres. This is from stepchange.work, and this is a podcast. I haven't had a chance to watch the entire thing, but it seemed like it might be an interesting piece of content.
[00:14:31] I hope you enjoyed this episode. Be sure to check out scalingpostgres.com, where you can sign up to receive weekly notifications of each episode. You can also find links to all the content discussed and an audio version of the show, as well as a full transcript. Thanks, and I'll see you next week.