Leveraging Postgres, Tuning Autovacuum, Slow Queries | Scaling Postgres 15

Episode 15 | June 04, 2018 | 00:12:02
Scaling Postgres

Hosted By

Creston Jamison

Show Notes

In this episode of Scaling Postgres, we review articles on better leveraging Postgres, tuning autovacuum, handling slow queries, and using window functions.

To get the show notes as well as get notified of new episodes, visit: 

https://www.scalingpostgres.com/episodes/15-leveraging-postgres-tuning-autovacuum-slow-queries/


Episode Transcript

[00:00:00] In this episode of Scaling Postgres, we talk about leveraging Postgres, tuning autovacuum, slow queries and window functions. I'm Creston Jamison, and this is Scaling Postgres, episode 15.

[00:00:21] All right, our first piece of content is actually a YouTube video called RailsConf 2018: Using Databases to Pull Your Application's Weight by Harisankar P. S. You can find it on the Confreaks YouTube channel. The presentation is by a Ruby on Rails developer, and he describes how he started leveraging the power of his PostgreSQL database to optimize performance and make his job as a developer easier. So if you're a developer, particularly a Ruby or Ruby on Rails developer, this is definitely a presentation I suggest you check out. Here's a quick overview of some of the things he covers. He talks about indexing and what you should think about indexing: for example, the primary key, foreign keys, columns that appear in a WHERE clause, keys used to join tables, and date columns if you query them frequently. He also suggests adding partial indexes for particular scopes so those queries can be retrieved faster. Next he talks about his use of materialized views. Views are essentially virtual tables that give you a representation of data you want to retrieve, usually across more than one table. A materialized view, however, actually creates a physical table that you can apply indexes to, so it can offer very high performance. The downside is that materialized views have to be refreshed periodically, but depending on your use case they can give you some outstanding performance benefits, and he covers how he uses them in his application. The last area he covers is having the database itself, on occasion, generate the JSON he delivers to end users, as opposed to pulling the data from the database and generating the JSON within Ruby. He saw some very significant performance gains by using the database to generate the JSON with functions such as row_to_json. Those were some major points of the presentation, but I definitely suggest you check out this video.

[00:02:45] The next post is Faster JSON Generation Using PostgreSQL JSON Functions, and this is from the bigbinary.com blog. Now, this is related to the previous video. I'm not sure if this individual saw that presentation, but he describes the same scenario the previous presenter did: as the number of records in the database grows, Rails can take a very long time to generate a JSON response, and the bottleneck can be traced back to JSON generation. So basically they did what the previous presenter did: they started using the row_to_json function, and in certain cases array_to_json, and they showed exactly how they used these functions in Rails to generate the JSON. The performance benefits they saw: for short records, response time went from 40 milliseconds down to 15 milliseconds, but for a large number of records it went from 1,200 milliseconds down to 20 milliseconds, so essentially 60 times faster. Definitely some pretty big performance gains realized by having PostgreSQL generate the JSON as opposed to doing it through Rails. But because this approach is a bit more complex, they recommend using the Rails way of generating JSON as long as you can, and only when performance starts to be an issue thinking about switching over to generating the JSON in PostgreSQL. So it's definitely something to keep in mind if you need help scaling your database.
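To make this concrete, here is a minimal sketch of generating JSON directly in Postgres using the functions mentioned above. The users table and its columns are hypothetical, not taken from either piece of content:

    -- Return each row of a hypothetical users(id, name, email) table as JSON:
    SELECT row_to_json(u) FROM users u;

    -- Build one JSON array for an entire response in a single query,
    -- selecting only the columns the response needs:
    SELECT array_to_json(array_agg(row_to_json(t)))
    FROM (
        SELECT id, name, email
        FROM users
        ORDER BY id
    ) t;

The resulting JSON string can be handed to the client as-is, which is what avoids the cost of serializing each record in Ruby.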
[00:04:28] The next post is Tuning Postgres Autovacuum for Scale, and this is from the gojekengineering.com blog. This post discusses a scenario where they had a heavily written-to table that autovacuum was having trouble keeping up with. In particular, they talk about the importance of transaction IDs, the potential for them to run out, and the danger that Postgres could literally shut down if a table gets too close to transaction ID wraparound. They had a heavily written-to table consuming millions of transaction IDs each day, and, quote, with the default settings, the autovacuum process was able to reclaim only 70% of them and took up to eleven days to complete, end quote. So autovacuum definitely needed to be tuned in this case, and here they go over what they did. They describe the database virtual machine they were using, then walk through the different settings they changed. They raised autovacuum_work_mem up to 2GB, with a lot of detail about what the setting does, and also changed vacuum_cost_limit and adjusted maintenance_work_mem and work_mem. But with regard to autovacuum, it was essentially autovacuum_work_mem and vacuum_cost_limit that sped up their autovacuum runs: they went from not keeping up with daily consumption and taking up to eleven days, to processing 1 billion transaction IDs in only two days. Then they talk about what they're planning for the future in terms of increasing the number of workers and increasing the cost limit, because the cost limit is shared across the workers; when you increase the number of workers, you should increase the cost limit as well, otherwise each worker actually works a little bit slower. They also mention autovacuum_vacuum_scale_factor, which basically determines how often vacuum kicks off for a table. And they have some follow-up advice on monitoring transaction IDs, as well as some design considerations that can reduce the risk from these heavily written-to tables, such as partitioning or storing the data in more than one table. So if you're observing the kinds of issues they were, with heavily written tables and autovacuum having trouble keeping up, it's definitely a blog post to check out.
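For a rough sense of what this kind of tuning looks like, here is a sketch using ALTER SYSTEM. The 2GB figure comes from the post; the other values are hypothetical examples, and the right numbers depend entirely on your workload and hardware:

    -- Give each autovacuum worker more memory for tracking dead tuples:
    ALTER SYSTEM SET autovacuum_work_mem = '2GB';
    -- Allow more vacuum work per cost cycle (the default is 200):
    ALTER SYSTEM SET vacuum_cost_limit = 2000;
    -- Trigger vacuum when roughly 5% of a table has changed (default 0.2, i.e. 20%):
    ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.05;
    -- More workers split the cost limit among themselves, so raise both together.
    -- Note that this setting requires a server restart rather than a reload:
    ALTER SYSTEM SET autovacuum_max_workers = 6;
    -- Apply the reloadable settings:
    SELECT pg_reload_conf();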
[00:06:53] The next post is Why is PostgreSQL Running Slow? Tips & Tricks to Get to the Source, and this is from the severalnines.com blog. In it they go over a step-by-step process of what to check when you have slow queries. Their step zero is basically gathering as much information as possible: what the problems are, where in the application they're happening, and which queries are being impacted. Step one is to check pg_stat_activity: look at the active queries running in the database and watch for ones that are waiting or potentially being blocked. If the queries are running okay, why are they taking so long? At that point, basically use EXPLAIN. Another thing to check: are the tables bloated? Has it been a while since autovacuum kicked off to remove the dead tuples in a table? You can check that, along with how often checkpoints are occurring and whether there are fatal or error messages in the PostgreSQL log, and then move on to the health of the system: is it swapping to disk, what is the CPU load, and what do the various system logs show? Finally, they suggest getting some assistance if you're not able to narrow down the potential cause. It's a pretty good series of steps to work through when your database or certain queries are running slow.

[00:08:16] The next post is Fun with SQL: Window Functions in Postgres. Historically, I've typically seen window functions used for running totals: for example, looking at someone's salary over the past twelve months and keeping a running total in a column. However, window functions can do a lot more, and this very short blog post covers some of it. For example, you can rank over whatever you're partitioning by; in this case they sort salary information and rank the top salary by department. In addition to rank, it also covers percent_rank, which gives you a percentage ranking based on your window; lag, which gives you the row value a certain number of rows before your current row; and ntile, for computing which percentile particular values fall into. So again, this is a very brief post, but it shows you some of the capabilities that window functions offer. And a lot of times, for particular queries, it's faster to use window functions in the database than to pull all the data back and do those calculations in your application layer. So definitely a blog post to check out.
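As a quick sketch of what these functions look like together, assuming a hypothetical employees table rather than the post's exact example:

    -- Rank salaries within each department, fetch the previous row's salary,
    -- and bucket everyone into overall salary quartiles:
    SELECT name,
           department,
           salary,
           rank() OVER w AS dept_salary_rank,
           lag(salary) OVER w AS next_higher_salary,
           ntile(4) OVER (ORDER BY salary) AS salary_quartile
    FROM employees
    WINDOW w AS (PARTITION BY department ORDER BY salary DESC);

Because the window is ordered by salary descending, lag() returns the next higher salary within the same department, and NULL for each department's top earner.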
The next piece of content is actually a YouTube video called Scaling a Relational Database for the Cloud Age, presented at DataEngConf San Francisco 2018 and posted on the Hakka Labs YouTube channel. The speaker is the co-founder and VP of Engineering at Citus Data, and in this video he goes over three main areas of how to scale your PostgreSQL database. Of course, because he's from Citus Data, he focuses on the sharding aspect. But I found it very interesting how he discusses scaling a relational database by basically splitting tables into their own databases, similar to the thoughts on scaling I presented in a previous episode of Scaling Postgres, where the PostgreSQL community is thinking about the future of sharding. The second area goes over how to use SQL to target particular partitions and bring the results of queries back together. The third covers transactions that potentially have to hit multiple database partitions, and how that requires things such as two-phase commit to get globally atomic transactions, because when you're sharding you literally have more than one database, with all the coordination that entails. So if you're considering Citus, or looking at sharding and what the future may hold for it, this is definitely a presentation to check out.

[00:10:56] The last post is PostgreSQL Streaming Replication - a Deep Dive. Essentially, if you're not familiar with streaming replication, it allows you to stream all the data coming into a PostgreSQL database to one or more replicas. This post goes over it, gets a little bit into the history of replication, and explains how it is built on the write-ahead log, or WAL files. It then goes over how it works, starting from a basic configuration, with some convenient images. So if you have not yet set up streaming replication and you're interested in reading a bit more about it, it's definitely a blog post to check out.

[00:11:37] That does it for this episode of Scaling Postgres. You can get links to all the content presented in the show notes. Be sure to head over to scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can sign up on YouTube or iTunes. Thanks.

Other Episodes

Episode 60

April 21, 2019 00:14:51

Pluggable Storage, Developer Gotchas, Cursors, PG12 | Scaling Postgres 60

In this episode of Scaling Postgres, we review articles covering pluggable storage, gotchas for developers, using cursors and new PG12 features. To get the...


Episode 243

November 27, 2022 00:10:25

IN vs ANY, Ghost Conditions, Percentage Calculations, Variadic Unnest | Scaling Postgres 243

In this episode of Scaling Postgres, we discuss how PG15 helps reduce replication lag, how to get the parameters used in prepared statements, PostGIS...


Episode 341

November 10, 2024 00:14:15

The Future of Postgres Upgrades | Scaling Postgres 341

In this episode of Scaling Postgres, we discuss the future of Postgres upgrades, the usefulness of pg_dump, partitioning with minimal downtime and limitless Aurora....
