Episode Transcript
[00:00:00] In this episode of Scaling Postgres, we talk about the stats collector being gone, handling latency, Postgres in the browser, and future high availability. I'm Creston Jamison, and this is Scaling Postgres episode 230.
[00:00:20] Alright, I hope you, your friends, family, and coworkers continue to do well. Our first piece of content is PostgreSQL 15: Stats Collector Gone? What's New? This is from Percona.com, and they're talking about how when you look at the processes of a running Postgres 14 instance, you will see a stats collector. However, when you're running Postgres 15, you do not see the stats collector process. Why? Because it's gone. Now, what the stats collector did is not collect statistics on the data; that's what ANALYZE does, tracking a sampling of the data in your database for the planner to make decisions. Instead, the stats collector collected all the different activity statistics of the database: counts of things such as how many commits have happened, how many rollbacks have happened, how many times particular statements have been run, how many times autovacuum has run, how many times analyze has been run. So basically the things located in the pg_stat views, which they refer to here. Those statistical views are what the stats collector collected. Now, they explain how it used to operate: each of the backend processes would send statistics over UDP to the stats collector process, which would then write the information to files. But apparently people had some problems with this approach. At times you could get stale statistics, at times the stats collector wasn't running, or autovacuum might not be working or starting. I haven't necessarily heard about these types of problems, but another big problem they mention here is the I/O, which could sometimes cause tens of megabytes to be written per second, I think they mention somewhere here. And so the question is, what are they doing now that they've gotten rid of it? Well, as they say here, instead of using files and the file system, the statistics now use dynamic shared memory. And there's a commit here by Andres that describes a little of the detail with regard to it. So basically there's one less process running and there should be less file writing; I'm assuming at some point it saves the contents of the shared memory on a periodic basis. Along with using shared memory, they also have a new parameter called stats_fetch_consistency. You can set it to none, which is the most efficient, but you're not going to have any read consistency for repeated reads that expect to get the same value back. You can set it to cache so that repeated reads within a transaction are consistent, and this is the default. Or you can use snapshot, which is slower but takes a consistent snapshot of all the statistics for the transaction. And they cover what happens during a restart: as part of a normal shutdown the statistics are saved to disk, but they will be discarded after a crash, which has always been the case for these stats. They also introduced a few new wait events, because now that shared memory is being used, stats access itself can show up as wait events, so they're adding some new ones here. This is a pretty interesting addition, and there's a ton of people that did work on this feature, so thank you so much for doing that. If you want to learn more, go ahead and check out this blog post.
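As a quick illustration of that new parameter (this is my own minimal sketch, not from the Percona post, and it assumes a PostgreSQL 15 server), here's how you might compare the consistency behavior from psql:

```sql
-- Minimal sketch assuming a PostgreSQL 15 server.
-- stats_fetch_consistency controls how reads of the cumulative statistics
-- behave within a transaction: 'none', 'cache' (the default), or 'snapshot'.
SHOW stats_fetch_consistency;

-- Take a full snapshot of the shared-memory stats for this transaction:
-- slower, but every pg_stat_* read in the transaction sees the same data.
SET stats_fetch_consistency = 'snapshot';

BEGIN;
SELECT datname, xact_commit, xact_rollback
  FROM pg_stat_database
 WHERE datname = current_database();

-- Re-reading inside the same transaction returns the same numbers,
-- because the snapshot was taken at the first stats access.
SELECT datname, xact_commit, xact_rollback
  FROM pg_stat_database
 WHERE datname = current_database();
COMMIT;
```

With none, those two reads could return different values as activity continues; with the default cache setting, a value is cached once it has been read, so repeated reads of the same object stay stable within the transaction.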
[00:03:35] Next piece of content: Five minutes of Postgres episode 32: Benchmarking and improving query network latency using tc, COPY and pipeline mode. This is from pganalyze.com, and they're talking about a post written by Lawrence Jones that described making a transition at his former company, GoCardless, from IBM SoftLayer hosting to Google Cloud hosting. In the process of moving all their servers over, at one point the database was going to be in a separate cloud provider, essentially a separate data center, from their application servers. So this could introduce significant latency, and that latency would vary from about 0.5 milliseconds when colocated in the same location up to as long as ten milliseconds going to an entirely separate data center. So in preparation for this move, they wanted to address things ahead of time. And what they literally did is introduce latency into their running production application to test how far they could push it. They would increase latency using the tc command, which is part of iproute2; it stands for traffic control. So you can actually introduce latency, and they increased it one millisecond at a time to measure the impact and resolve any issues that came up prior to doing their move. Now, as the episode continues, it talks about how you can track timing. You can use EXPLAIN ANALYZE, and that gives you the server-side execution time. But in the psql client there's also the \timing switch, and that gives you the timing essentially from the client. So if you're on the server running psql, this should pretty much match what you see in EXPLAIN ANALYZE. But if you're running psql in another location and reaching across the Internet or across a network with a lot of latency, you're going to see this timing increase, because it includes pulling the data down and sending it to the client. He also talked about one thing that can hinder performance if you have high latency: sending single inserts or single statements one at a time. One way to speed things up on a high-latency network is to use multi-row inserts, so you're sending more than just one row of data per insert; you can put many rows in there. And of course, COPY is a much faster way to transfer data if you need to do it over a high-latency network. And then lastly, he also talked about the new feature that became available in Postgres 14: pipeline mode. This is essentially asynchronous processing of statements sent to the server, so you can send multiple statements without waiting for each result; the data gets returned later. So these are a couple of different methods you could use to mitigate high-latency connections. And if you're interested in learning more, I encourage you to check out this episode.
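To make the multi-row insert and COPY points concrete, here's a small sketch of my own (the events table is just a hypothetical example, not something from the episode) showing how to cut the number of round trips on a high-latency link:

```sql
-- Hypothetical example table.
CREATE TABLE IF NOT EXISTS events (id bigint, payload text);

-- One row per statement: each INSERT pays a full network round trip.
INSERT INTO events (id, payload) VALUES (1, 'a');
INSERT INTO events (id, payload) VALUES (2, 'b');

-- Multi-row INSERT: many rows, a single round trip.
INSERT INTO events (id, payload) VALUES
  (3, 'c'),
  (4, 'd'),
  (5, 'e');

-- COPY streams the rows in bulk and is usually the fastest option
-- for moving a lot of data across a slow network (this form works in psql).
COPY events (id, payload) FROM STDIN WITH (FORMAT csv);
6,f
7,g
\.
```

In psql you could turn on \timing before each variant to compare the client-observed times, which include the network round trips that EXPLAIN ANALYZE alone won't show.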
[00:06:20] Next piece of content: Crazy idea to Postgres in the browser. This is from Crunchydata.com, and this is a follow-up to the post we discussed last week about the Postgres playground. This one goes into detail about how they set that up. Basically, they had seen that someone had taken SQLite and compiled it to WebAssembly to essentially run in the browser, and they said, hey, can we do this with Postgres? Unfortunately, due to some networking issues, that wasn't really possible.
[00:06:47] But what they did do is create a whole virtual machine in the browser. There's a particular emulator that they used called v86. It runs in WebAssembly, but it emulates its own virtual machine. So basically, the first thing they did was set up a VM using QEMU with a very minimal Alpine Linux image, creating a very small image. They set up the VM, installed Postgres on this minimal image, made sure they could connect to Postgres using it, and shut down the virtual machine. Then they took that Alpine Linux image they had created and used it to set up an emulation in v86. With it, they essentially got Postgres running in the browser on WebAssembly within a virtual machine using v86. So if you want to learn more about the details of this, definitely check out this blog post. Next piece of content: PG Friday: Defining High Availability in a Postgres World. This is from Enterprisedb.com, and this is a topic Shaun has talked about multiple times: basically a better path forward for high availability in Postgres. He talks about the current situation and what a lot of people are using now, such as Patroni or pg_auto_failover, but what his ultimate desire is, is to have something that looks like this. Basically, right now a Postgres cluster is considered a cluster of databases on a single machine. Ideally, he'd like to take that concept of a cluster to mean this whole thing, inclusive of tools that perhaps don't even exist yet, but with a common listener that all applications can connect to. This listener is aware of what databases exist and which is the primary to send all the connections to, or it can route read-only connections elsewhere. And you would have some consensus layer to coordinate all the different Postgres nodes that exist, as well as handle the listener services too. So he says if Postgres could build out something like this, we would no longer need tools such as etcd, Consul, or HAProxy, or anything like that. Ideally, Postgres would handle it all. Now, of course, this would take a lot of work, and there's nothing like this currently planned, as he says. But it seems like this is the path he would be very interested in following. And if you want to learn more, you can check out this blog post. Next piece of content: an Ansible benchmark framework for PostgreSQL. This is from Enterprisedb.com, and it's talking about setting up a benchmark solution for testing Postgres or their EDB Advanced Server, where they want to run many tests in parallel and make sure builds work successfully. They set up a way to do this using Ansible, provisioning their entire infrastructure and being able to run multiple tests in parallel. And as a benefit, they released this as open source, which is right here on GitHub. You can download it and start using this framework for doing any Postgres testing. Maybe you're developing an extension and you want to run some tests against it; well, you could perhaps use this framework to assist with that. If you want to learn more, definitely check out this blog post. Next piece of content: What does a PostgreSQL Commitfest Manager do and should you become one? This is from Timescale.com, and last July, it looks like Jacob here became a Commitfest Manager. He talks about what the job entails, as well as asks the question: should you become one? So if you're interested in that, you can definitely check out this blog post.
[00:10:24] Next piece of content: Upgrading PostGIS SFCGAL to 3.1 or higher. This is from postgis.net, and this post is talking about how, as of the PostGIS 3.1 release, the SFCGAL support library is no longer part of the PostGIS core, so basically you are going to have to use this separate library if you want that functionality. So definitely something to keep in mind if you use SFCGAL with PostGIS.
[00:10:54] Next piece of content: Migrate scheduled jobs to pg_timetable from pgAgent. This is from Cybertec-postgresql.com, and if you're using pgAgent and want to migrate to pg_timetable, this blog post describes how to do it.
[00:11:08] Next piece of content: they had another episode of Postgres FM this week. This one was on how to become a DBA. So if you're interested in this 30-minute episode, definitely check out this piece of content. They do have a link to the YouTube video; it's actually the little symbol down here that takes you to their playlist. I don't think it brings you to the exact video, but at least you get the playlist, and you can find this episode within it.
[00:11:34] Next piece of content: the PostgreSQL person of the week is Antonín Houska. If you're interested in learning more about Antonín and his contributions to Postgres, definitely check out this blog post. And the last piece of content: we did have another episode of the Rubber Duck Dev Show this past Wednesday evening. This one was on live streaming code development, specifically with Laravel, but most of it was about how to get started live streaming code development. And this was with Aaron Francis. So if you're interested in that kind of long-form developer discussion, we welcome you to check out our show.
[00:12:08] That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to Scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.