World's Best, Subtransactions, Sharding, Schema Changes | Scaling Postgres 105

Episode 105 March 16, 2020 00:19:22
Hosted By

Creston Jamison

Show Notes

In this episode of Scaling Postgres, we discuss how Postgres is the world's best database, performance ramifications of subtransactions, the current state of sharding and safe schema changes.

To get the show notes as well as get notified of new episodes, visit: 

https://www.scalingpostgres.com/episodes/105-worlds-best-subtransactions-sharding-schema-changes/


Episode Transcript

[00:00:00] In this episode of Scaling Postgres, we talk about the world's best database, subtransactions, sharding and schema changes. I'm Creston Jamison, and this is Scaling Postgres, episode 105.

[00:00:20] Alright, well, we're definitely living in crazy times with the coronavirus around, so I do hope that you and your family, friends and coworkers are safe. And of course I also hope you're having a great week. Our first piece of content is "PostgreSQL is the world's best database", and this is from 2ndQuadrant.com. This just goes through a list of features that he feels make Postgres the best database, and I'll go over some of the highlights. First is transparent security. So unlike a commercial vendor that usually keeps things secret until it has a solution and then resolves it, any security issue is released in the open and everybody works against it to find the solution. So the work on security is very transparent. The next area is multi-version concurrency control is good for you. Now, the thing to keep in mind in terms of performance is that this, as he mentions, is best for high insert and select workloads. Right now, updates and deletes take a bit of a hit, because Postgres marks the old rows as no longer used rather than doing an update in place. So essentially, if you have a lot of updates going on, it's creating more and more dead rows that need to be vacuumed, and there's a performance penalty for that. Some of the new storage solutions they're talking about coming online may speed this up while still maintaining multi-version concurrency control, but it's just something to keep in mind in terms of performance. Next he's proposing that PostgreSQL does everything. All of the features you would see in NoSQL stores such as Riak, Redis and Mongo, it has the ability to provide, although perhaps without some of the richness that exists in those solutions. The next section is hook it to a map. So PostGIS is very significantly used and chose PostgreSQL as its backend database, so there's a lot of support for that. PostgreSQL is growing, leading the way in open source. It is a very stable, well-run project, and I have to agree with that. PostgreSQL builds solutions that are stable forever. You can run them forever because it is totally open source. There's no built-in support, of course, but you can keep running particular versions, although they still come with upgrades. Declarative is better than imperative: he's advocating for SQL versus other types of query languages for a database.

[00:02:48] Extend PostgreSQL the way you like: all of the different extensions that you can add to it, and you can even write your own software to make PostgreSQL your own. Go big and go wide: talking about a lot of performance improvements for significantly large workloads, like partitioning, parallel execution, partial indexes, tablespaces, caching, et cetera.

[00:03:11] Then there's all the variety of indexes that are available: GiST, SP-GiST, KNN GiST, GIN, BRIN, B-tree, et cetera.
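As a quick illustration of that index variety, here's a minimal sketch (the table and columns are invented for this example) of how a few of those index types are created:

    -- Hypothetical table, purely for illustration
    CREATE TABLE events (
        id         bigserial PRIMARY KEY,   -- primary keys get a B-tree index automatically
        payload    jsonb,
        location   point,
        created_at timestamptz NOT NULL
    );

    CREATE INDEX events_payload_gin   ON events USING gin (payload);     -- GIN suits jsonb containment queries
    CREATE INDEX events_location_gist ON events USING gist (location);   -- GiST supports KNN ordering: ORDER BY location <-> point '(0,0)'
    CREATE INDEX events_created_brin  ON events USING brin (created_at); -- BRIN stays tiny on naturally ordered columns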
[00:03:20] Roll it in, roll it out: you can use Postgres as the heart of a data management platform where you can easily roll data in or send it out, using logical replication or foreign data wrappers to manage data. The license is wide open: a very liberal BSD-style license. And there's fantastic documentation. It's developed with test-driven development, which, as he says, quote, "provides for zero known bug releases". Then there's internationalization and localization; cloud operation, so you can operate in the cloud; standards compliance, always working towards the standards; and language features. So I felt this was an interesting post. You may want to take a look at all the different features that Postgres has, particularly if you're having discussions with other developers with regards to its use.

[00:04:10] The next piece of content is "Subtransactions and performance in PostgreSQL". This is a post he says came from troubleshooting a PostgreSQL performance problem. So first it goes into what subtransactions are. In a transaction, basically you do a BEGIN, you do some amount of work and then you eventually end it or commit it. And within that transaction you can choose to roll it back before the commit, so the work is done as a unit. Now, subtransactions, he says, are a way to roll back just part of what you are working on: you start a subtransaction inside a transaction by creating a save point with a name as an identifier, and if you then want to roll back to that point in the transaction, you just do a ROLLBACK TO that save point, giving the name you provided. And here's the use of a subtransaction. You do a BEGIN, you do some work, you do something that results in an error, a division-by-zero error, and then when you continue to try to do more work, the transaction has been aborted, so everything is lost. But if you do the same thing and begin a transaction, do some work (a select statement), then create the save point and then trigger that division-by-zero error, at that point in the transaction you can do a rollback to the save point and now you can do some more work. So it's as if the error never happened, because you're reverting to the save point you created.
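Here's a minimal sketch of that flow; the table name is invented, but the sequence mirrors what he describes:

    BEGIN;
    SELECT count(*) FROM accounts;            -- some initial work (hypothetical table)
    SAVEPOINT before_risky_work;              -- starts a subtransaction
    SELECT 1 / 0;                             -- ERROR: division by zero
    ROLLBACK TO SAVEPOINT before_risky_work;  -- undoes only the failed part
    SELECT count(*) FROM accounts;            -- the transaction is usable again
    COMMIT;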
Now, in terms of subtransactions in PL/pgSQL, anytime you enter a block that has an exception clause, you're essentially creating a subtransaction, even if the handler is never executed; I believe we saw that in a previous post that talked about these, and we'll mention why this is relevant later. Then he talks about compatibility with other databases. Other databases have different behaviors; for example, they may automatically commit statements, whereas PostgreSQL waits for the commit. It doesn't auto-commit something. But there are drivers used with PostgreSQL that do things like this: they mention the JDBC driver has a parameter called autosave, and that could potentially lead to some performance problems.

[00:06:25] So he did a performance test. He created an unlogged table with a fill factor of 50, inserted some data into it, and then ran pgbench with a few queries. He set up save points: each run set a save point, set a random number, and then executed a select and then an update. And he did two tests: test number one had 60 save points, and test number two had 90 save points; we'll see why this is relevant. He had a machine with eight cores, and he set the test to run six concurrent clients for ten minutes. This is the pgbench command he ran, and it gives the output here, and then he used this perf command to be able to track what the database is doing. You can see that in test number one, with 60 subtransactions, the transactions per second were 167, and nothing looks particularly out of the ordinary. But when he goes up to 90 subtransactions, the transactions per second drop to 69, less than half the previous 167, and a lot of time is spent in locking functions such as LWLockAttemptLock, LWLockRelease and LWLockAcquire. So he says, quote, even if we take into account that transactions in test two are longer, there is still a performance regression of 60% compared to test one. Now, here he goes over the implementation: how transactions are stored in the pg_xact directory and how subtransactions are stored in the pg_subtrans subdirectory, and then he goes into visibility and how Postgres determines whether these transactions and subtransactions are visible. But the important point here is that it creates a snapshot to determine what's visible, and that snapshot, quote, has room for at most 64 non-aborted subtransactions per session (and there's one transaction allowed per session). If there are more than 64 subtransactions, the snapshot is marked as suboverflowed. So basically, that's why 60 subtransactions performed well but 90 subtransactions slowed down in this test. As he says, quote, a suboverflowed snapshot does not contain all the data required to determine visibility, so PostgreSQL will occasionally have to resort to pg_subtrans, and that results, essentially, in lightweight locks while looking up this information. Now, he mentions a few different ways you can diagnose the problem of too many subtransactions: one, your workload performs well when run single-threaded but badly when run in several concurrent database sessions; two, you often see the wait event SubtransControlLock in pg_stat_activity; or three, you export a snapshot using the pg_export_snapshot() function, and the resulting file in the pg_snapshots subdirectory of the data directory will contain the line sof:1, indicating that the subtransaction array overflowed. Now, the thing to keep in mind is that we've been talking about using save points, and that's how he ran his test, but this also applies to exception blocks, because exception blocks create subtransactions even when not fired. So you can imagine, in PL/pgSQL code, if you have a loop that executes such a block many times, once you hit that 64 limit you're going to see the performance ramifications, which is what he's signifying in the image at the top of this post: when you're looping a thousand times over an exception block, you're going to start running into performance problems due to subtransactions.
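As a rough sketch of that pitfall (the table name is invented and the loop bound is arbitrary), every entry into a block with an EXCEPTION clause starts a new subtransaction, so a loop like this blows past the 64-entry cache early on:

    DO $$
    BEGIN
        FOR i IN 1..1000 LOOP
            BEGIN  -- entering a block with an EXCEPTION clause creates a subtransaction
                UPDATE counters SET value = value + 1 WHERE id = i;  -- hypothetical table
            EXCEPTION
                WHEN division_by_zero THEN
                    NULL;  -- the handler never firing doesn't avoid the subtransaction
            END;
        END LOOP;
    END;
    $$;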
So definitely a great blog post; I suggest you check it out. The next post is "Horizontal scalability / sharding in PostgreSQL core: missing pieces of the puzzle". This is a follow-on to a series of three posts they did back in August 2019 talking about the future of sharding with PostgreSQL. And basically the intent is to go the foreign data wrapper route: use foreign data wrappers to define separate foreign servers, and essentially those would be the shards. So you have a primary database, and it uses the foreign data wrapper technique to treat those servers as separate shards of a common database. He goes over that process, discussing how you could potentially create it, and a little bit of this is ideally what should be done. But what are the missing pieces of the puzzle? So basically the core PostgreSQL community wants to eventually get to supporting sharding, probably similar to a Citus-type solution, but in terms of what's missing: number one, you would need a global transaction manager, that is, two-phase commit for foreign data wrapper transactions. He mentions this is in process, HighGo Software is contributing to it, and he says it looks promising for Postgres 14.

[00:11:39] The next area is a global snapshot for cluster-wide read consistency, so basically visibility of where transactions are across the cluster of machines. And again he goes over the current status of this, although he doesn't project when it will arrive, so it may still be a ways off. The next area is parallel foreign data wrapper scan, so basically asynchronous query execution. And as he says, while this patch is destined for PG 14, they may get a piece of it into Postgres 13, so that's something that could be potentially beneficial. And lastly, mostly regarding management, he mentions shard management: the partitions on foreign servers do not currently get created automatically, as described above. So basically it's the management of a cluster: how are you going to handle the creation of the different tables on the foreign servers? This is something that needs to be investigated. Now, they also mention they have their own version of PostgreSQL that does something like this, and you can choose to look at that if you'd like. Apart from this, he also says there's a need for general management tools as well. For example, how do you manage the cluster and monitor the machines? How do you do backup and recovery when you have all these different databases sharded around?

[00:13:02] How do you support high availability? If one of the foreign servers goes down, how do you handle that? So there are a lot of areas of maintenance and management you need to consider.
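To make the foreign data wrapper approach a little more concrete, here's a hedged sketch (the server name, credentials, table and partition bounds are all invented) of how partitions can be placed on foreign servers by hand today:

    -- On the primary ("coordinator") database
    CREATE EXTENSION postgres_fdw;

    CREATE SERVER shard1 FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'shard1.example.com', dbname 'app');
    CREATE USER MAPPING FOR CURRENT_USER SERVER shard1
        OPTIONS (user 'app', password 'secret');

    CREATE TABLE measurements (
        id          bigint      NOT NULL,
        recorded_at timestamptz NOT NULL,
        reading     numeric
    ) PARTITION BY RANGE (recorded_at);

    -- A partition that lives on the foreign server, i.e. the shard.
    -- The matching table must be created on shard1 by hand, which is
    -- exactly the shard management gap described above.
    CREATE FOREIGN TABLE measurements_2020_q1
        PARTITION OF measurements
        FOR VALUES FROM ('2020-01-01') TO ('2020-04-01')
        SERVER shard1;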
But if you want to see where the future of sharding is headed, here's an updated post you can check out.

[00:13:21] The next post is "Fear-free PostgreSQL migrations for Rails". Now this is dedicated to Rails, and it's talking about database migrations that alter the schema of your database. This is from GoCardless.com, and they're talking about how schema changes can run into problems. So first he goes over migrations, what they are and how they work, and then mentions how you can run into problems.

[00:13:47] Now, for versions prior to eleven, if you added a column with a default it would lock the table. That doesn't happen since eleven, but there are other areas where you can encounter problems: when a table needs an access exclusive lock, you can get stuck behind a lock queue, because there are other locks in front of it and your lock waits in a queue that locks everything up. So migrations can still cause problems, and essentially downtime for your production environment, if you don't do them carefully. So they describe that, but then they go on to potentially solving the problem: they had developed a gem (a Ruby library) called Active Record Safer Migrations to help prevent this, and now they have a new solution called Nandi. Now, not many people may be using Ruby, but what's interesting is the techniques that they use, so you may want to consult this library or look at what it does. Of course, one of the main things they do is use a lock timeout and a statement timeout: if any statement exceeds a particular amount of time it errors out, or if any lock is held for greater than 5 seconds, it goes ahead and stops the migration. So those are great things to have. But also, when you're creating a foreign key, you're locking two different tables with an access exclusive lock, and there's a particular technique you need to use in order not to bring your database down; and this library, Nandi, does it for you. So for example, the first step it takes is to create the new column that's needed, then it creates the constraint, and finally it validates the constraint. So that's a process to avoid issues, and they're continuing to work on this library.
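That three-step foreign key pattern, plus the timeouts, looks roughly like this in plain SQL; the table, column and constraint names are invented, and the timeout values are illustrative rather than Nandi's actual defaults:

    -- Guard rails: fail fast instead of queueing behind long-held locks
    SET lock_timeout = '5s';
    SET statement_timeout = '10s';

    -- Step 1: add the column on its own
    ALTER TABLE payments ADD COLUMN customer_id bigint;

    -- Step 2: add the foreign key as NOT VALID, which skips scanning
    -- existing rows, so the access exclusive locks are held only briefly
    ALTER TABLE payments
        ADD CONSTRAINT payments_customer_id_fkey
        FOREIGN KEY (customer_id) REFERENCES customers (id) NOT VALID;

    -- Step 3: validate in a separate statement, which takes a weaker lock
    ALTER TABLE payments VALIDATE CONSTRAINT payments_customer_id_fkey;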
So if you want to learn more about ways to migrate your schema safely, definitely check out this blog post, particularly if you use Ruby or Rails.

[00:15:32] The next piece of content is "How to provision and deploy highly available Postgres with Terraform and Ansible automation scripts". This is from EnterpriseDB.com, and essentially, on GitHub they've released a postgres-deployment repository that contains Terraform and Ansible scripts to be able to stand up a replicated environment with management tools as well. Now, they are using the EnterpriseDB management tools, but since they open sourced it, you can definitely look at how they have their Terraform and Ansible set up, to see if you want to use it or even modify it for your own use cases to manage your Postgres deployment. So if you're interested in that, it's definitely a blog post to check out. The next piece of content is actually a PDF from Momjian.us: "Databases, Containers, and the Cloud". So he talks about using Kubernetes and database containers for managing databases, and he goes over what it is and how it can work. I found slide 14 interesting on the benefits for databases: rapid creation and destruction and less overhead than a VM are not huge benefits for databases, and in his estimation there's only a little bit of a benefit for scaling and migration.

[00:16:55] The most powerful benefit is automated deployment, although you could use something like Terraform and Ansible to do essentially the same thing, which is actually what I do: I use Ansible for my deployments. But it's interesting nonetheless if you want to check out a Kubernetes and Postgres presentation. The next piece of content is a webinar, "State of full text search in PostgreSQL 12", a follow-up. This covers a YouTube presentation we featured back in episode 100 about full text search; it's the same individual and presentation in a slightly longer webinar version. I believe I preferred the version presented in episode 100, but you can check out the webinar version just by clicking here to register. This is from 2ndQuadrant.com. Also from 2ndQuadrant.com is "With a little help from my friends". This is basically about Postgres 12, talking about WITH queries (CTEs) and the inlining enhancement that was done, as well as how to use it and its use cases.

[00:18:02] The next post, also from 2ndQuadrant.com, is "How to deploy a multi-master BDR cluster in Kubernetes". So again, BDR is bi-directional replication, and this covers how you would deploy it using Kubernetes. They have a video that runs through the different commands, as well as a text description. So if you're interested in BDR on Kubernetes, definitely check out this post.

[00:18:26] The next piece of content is that the new PostgreSQL Person of the Week is Christophe Pettus. So this is an interview with Christophe about his work and contributions in Postgres. And finally, the last post is "Querying Spatial Features with pg_featureserv". This is the next microservice that Crunchy Data has introduced for the purpose of providing a web interface to PostGIS data, and they tell you how to get started with it right here. So if you're interested in that, go ahead and check out this article.

That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to Scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.

Other Episodes

Episode 268

June 04, 2023 00:11:38

Faster Copy, psql Variables, backup_label, Bad Encoding | Scaling Postgres 268

In this episode of Scaling Postgres, we discuss faster COPY in Postgres 16, how to use psql variables, the importance of backup_label and issues...


Episode 170

June 20, 2021 00:15:08

Redis vs Postgres, Hard Quadrant, Optimizing Joins, Materialized Views | Scaling Postgres 170

In this episode of Scaling Postgres, we discuss Redis vs. Postgres, working in the hard quadrant, how Postgres optimizes joins and working with materialized...


Episode 36

October 29, 2018 00:10:51

Sharding, Giving Back, SCRAM, Partitioning Evolution | Scaling Postgres 36

In this episode of Scaling Postgres, we review articles covering PG11 sharding, giving back, SCRAM authentication and the evolution of partitioning. To get the...
