psql Shortcuts, Debugging Deadlocks, Find & Stop Queries, Using SCRAM | Scaling Postgres 221

Episode 221 | June 27, 2022 | 00:17:35
Hosted By

Creston Jamison

Show Notes

In this episode of Scaling Postgres, we discuss psql shortcuts, how to debug deadlocks, how to find & stop queries and how to understand and use SCRAM authentication.

To get the show notes as well as get notified of new episodes, visit:

https://www.scalingpostgres.com/episodes/221-psql-shortcuts-debugging-deadlocks-find-and-stop-queries-using-scram/

Episode Transcript

[00:00:00] Hey. In this episode of Scaling Postgres, we talk about psql shortcuts, debugging deadlocks, finding and stopping queries, and using SCRAM. I'm Creston Jamison, and this is Scaling Postgres episode 221.

[00:00:23] Alright, I hope you, your friends, family and coworkers continue to do well. Our first piece of content is "psql helper: managing connections and simplifying queries". This is from lfos.io. He's talking about configuration changes you can make to psql, predominantly by altering some of the configuration files it looks at. First, there's the .pgpass file that you can create in the home directory of wherever you're running psql, and psql will consult it. That's an easy way to avoid having to fill out passwords, but you still have to type a long connection string when connecting to particular databases. However, there's another file called pg_service.conf that you can configure to define the different servers you connect to, and it kind of looks like a config file for SSH. Basically, you give a particular database a name in square brackets, and then you can define the host, dbname, port, user, password, application name, all the different parameters that you can pass to psql; you just state them there. To connect to a service, you just start psql and pass in an environment variable called PGSERVICE set to the service you want to connect to. Now, I've used .pgpass, but I haven't used pg_service.conf, probably because each database I tend to do consulting on is on one system and I'm usually just dealing with one database, so I just connect to that one database. The next file they mention is the .psqlrc file. This of course allows you to change different configuration settings of psql, but it also enables you to create something like an alias. For example, they did \set activity and gave it a particular query, and now just by typing :activity you can run that query. So this is a great way to create aliases for frequently run queries for your database. And if you want to find out more, definitely check out this blog post.
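
As a rough sketch of what that setup can look like (the service name, connection details and saved query here are made up for illustration, not taken from the article), a named connection in ~/.pg_service.conf might be:

    # ~/.pg_service.conf -- a named connection (illustrative values)
    [mydb]
    host=db.example.com
    port=5432
    dbname=app_production
    user=app_user

And in ~/.psqlrc, a reusable query "alias" defined as a psql variable:

    \set activity 'SELECT pid, state, query FROM pg_stat_activity;'

Then PGSERVICE=mydb psql connects using the service definition, and typing :activity at the prompt runs the saved query.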

[00:02:26] The next piece of content is "Debugging deadlocks in PostgreSQL". This is from cybertec-postgresql.com. They're talking about deadlocks, where one session is holding a lock of some sort, another session tries to acquire a lock, but each is locking something the other needs and the system can't resolve the lock contention, so it basically results in a deadlock. Now, the example they've shown here is actually a type of deadlock situation I haven't necessarily encountered. When I see deadlocks, it's normally because there's a long-running process that acquires locks on potentially multiple tables while doing a body of work, and that work hasn't been ordered appropriately. So maybe one worker starts from the beginning and goes through its locking process, another worker takes the reverse order, and eventually one process ends up locking something the other needs at the same time that it's holding a lock for something the other process needs. So 95% of the time in my consulting, I've been able to resolve deadlocks just by telling the clients to order the work that they're doing, and that essentially resolves the deadlocks.

Now, there is ordering involved in this example too, but it's a little different. Basically, they had a parent table and a child table. They started a transaction and inserted a particular value into the child table. This actually takes a lock on the referenced row of the parent table, because there is a foreign key constraint and the database doesn't want that row to be deleted. Then a second session starts and inserts another child value. This creates another shared lock on that parent table row, but it doesn't block. But then it tries to lock that same row with a SELECT ... FOR UPDATE, and when session one then tries to do the same thing, you get a deadlock; the locks can't resolve themselves. Now, you'll see this kind of information in the log, and it tells you the exact statements that caused the deadlock. But normally these statements would work fine on their own; FOR UPDATE locks on a particular row can simply wait for one another. The problem is that each session already holds an existing lock and is trying to lock the same row. And that information, the fact that a session is holding a lock because of an insert that has a foreign key constraint on the parent, is not listed in the log output. So it becomes hard to debug what's causing this type of deadlock. Now, they say that for this particular scenario, doing a SELECT ... FOR NO KEY UPDATE instead will actually allow you to avoid the deadlock.
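
To make that sequence easier to picture, here's a minimal sketch of the scenario (the table and column names are made up for illustration, not taken from the article):

    -- Setup: a parent table and a child table with a foreign key
    CREATE TABLE parent (id bigint PRIMARY KEY);
    CREATE TABLE child  (id bigint PRIMARY KEY,
                         parent_id bigint REFERENCES parent (id));
    INSERT INTO parent VALUES (1);

    -- Session 1
    BEGIN;
    INSERT INTO child VALUES (100, 1);  -- takes a shared (FOR KEY SHARE) lock on parent row 1

    -- Session 2
    BEGIN;
    INSERT INTO child VALUES (101, 1);  -- also takes a shared lock on parent row 1
    SELECT * FROM parent WHERE id = 1 FOR UPDATE;  -- blocks behind session 1's shared lock

    -- Session 1 again
    SELECT * FROM parent WHERE id = 1 FOR UPDATE;  -- each now waits on the other: deadlock

    -- Per the article, SELECT ... FOR NO KEY UPDATE instead of FOR UPDATE
    -- avoids the deadlock in this particular case.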

But of course, this post is about how to debug deadlocks. One solution is to do a lot of logging on the application side to determine what transactions led up to the deadlock, because it's really the history of locking that eventually results in a deadlock. Another way to try to figure out what's going on is annotating the queries on the application side, so putting comments into the queries about where the code came from in the application, to help understand how the row got locked and how the query is being run. Another is doing more logging on the database side by altering log_line_prefix and logging all statements, but that can be a performance drag and definitely make things very slow. Another interesting solution I hadn't explored before is actually increasing deadlock_timeout. I tend to keep it at 1 second, but they're talking about extending it to 15 minutes. Maybe that's a bit long, but you set it long enough that you can actually do some diagnosis and figure out what's locked; you could look at the pg_locks view and try to figure out what locks exist that are causing the issue. But even with these techniques, it can be hard to determine what's causing a deadlock. Again, like I said, in 95% of my cases it's been a matter of something needing to be processed in an ordered fashion, and once that's done, it usually avoids the deadlock situation. But check out this blog post if you want to learn more.

The next piece of content is "How to find and stop running queries on PostgreSQL". This is from adamj.eu, and basically he talks about how you can stop queries in two steps. First, you find the PID, and typically you would consult pg_stat_activity to find the right process ID. Then you call one of two functions: either pg_terminate_backend, which closes that client's whole database connection to the server, or pg_cancel_backend, which just cancels the running statement on that client. It still keeps the session open, so the database session doesn't get closed, only the running query, whereas terminate ends the whole session. Then he looks at the different ways you can cancel sets of queries that may potentially be blocking: basically, consulting pg_stat_activity and then rendering out, say, the pg_terminate_backend or pg_cancel_backend calls. The first one he looks at is queries blocking a particular process. In his example, maybe you want to alter a table, but there are things blocking it. Well, you can find the process ID of the statement you want to run, use the pg_blocking_pids function with that PID, and it will find all the processes that are causing it to be blocked, and then you terminate those. Then he looks at queries against a particular table and a way to cancel those, or connections open longer than n seconds. Now, he also mentions that you can stop queries through the operating system: you can send a SIGTERM with kill, which essentially terminates the database connection, or you can send a SIGINT, which cancels the actively running statement, and he mentions the Windows equivalents as well. He does also cover killing query processes with kill -9, and that is definitely not advisable. He doesn't mention this in the post, but it basically causes downtime, because the database system has not had an opportunity to get to a consistent state and you potentially may have shared memory inconsistencies, so it has to restart the whole system. That could be seconds, that could be minutes, depending on the state of your system when you run it, even against just a single database connection. So you definitely don't want to do this, or I would advise against it. But if you want to learn more, you can definitely check out this blog post.
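
As a rough sketch of that two-step process (the pid value 12345 and the filters are illustrative, not lifted from the post):

    -- Step 1: find the pid of the query you care about
    SELECT pid, state, query_start, query
    FROM pg_stat_activity
    WHERE state <> 'idle';

    -- Step 2: cancel just the running statement, keeping the session...
    SELECT pg_cancel_backend(12345);
    -- ...or terminate the whole connection
    SELECT pg_terminate_backend(12345);

    -- Variation: terminate everything blocking a given pid
    SELECT pg_terminate_backend(blocker)
    FROM unnest(pg_blocking_pids(12345)) AS blocker;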

The next piece of content is actually a YouTube video, "Get Your Insecure PostgreSQL Passwords to SCRAM", on the San Francisco Bay Area PostgreSQL Users Group channel. This talk goes into a lot of detail about SCRAM, how it's implemented within Postgres and the different steps it goes through to do password authentication. Basically, SCRAM allows a client and a server to each maintain a secret, but then do authentication where that secret is never passed across the wire; they use cryptographic techniques to authenticate one another without actually sharing the secret. He goes into a lot of depth explaining that; frankly, the first 35 minutes of the talk are about that, and then there's about 1 minute on, okay, how do you actually start using it? So it's very easy to get started with SCRAM, or to convert from using MD5 to SCRAM, as long as you don't have too many users. This is a great piece of content if you want to learn more about how SCRAM works.
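
The "start using it" portion boils down to a handful of settings; here's a rough sketch of a typical MD5-to-SCRAM conversion (the role name and address range are illustrative, and this isn't lifted from the talk):

    # postgresql.conf: store newly set passwords as SCRAM verifiers
    password_encryption = scram-sha-256

Then have each user re-set their password (for example with \password in psql) so it's stored as a SCRAM verifier, and finally require SCRAM authentication in pg_hba.conf:

    # pg_hba.conf
    host  all  app_user  10.0.0.0/8  scram-sha-256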

[00:09:31] The next piece of content is "PostgreSQL upgrades are hard". This is from andreas.scherbaum.la. He's talking about a presentation that was given at Postgres Vision 2022, and there's a link to the YouTube video here, but this is a text-based review of that talk. With regard to minor version upgrades, they basically say those are pretty easy to do. They do require a restart of the database because you're changing the binaries, but there are no new features in minor version upgrades, usually just bug fixes and security fixes, so it's pretty easy to do that type of upgrade. The next topic is major version upgrades, and those take a lot of work to validate that everything works as expected. In particular, you can run into issues with extensions, of course, because extensions are essentially third-party tools, so you need to validate that all of those work on the new version, and you need to choose the particular upgrade process to use. Now, the default way to do upgrades of PostgreSQL is pg_upgrade, and that's what I use to do all of my upgrades, with the link method so that it converts things pretty quickly. It looks like they also talked a little bit about the support policy: right now the project supports five major versions, so essentially five years of versions, and apparently there was a discussion on whether they should adopt a model like, I guess, Ubuntu's, with long-term support releases. But I think five years is fine, and people should upgrade their database every couple of years anyway, because you get better performance and a bunch of new features. The next area they talk about is cluster awareness, and that is a downside of Postgres: you may have one primary and three replicas, but when you're doing an upgrade, the replicas have knowledge of who the primary is, but not of the other replicas. So for upgrading a whole cluster of servers, there's no easy way to do it; there's a particular sequence you should adopt and follow, but there's not really such a thing as a cluster-aware upgrade. Now, with regard to this, a thing to keep an eye on is Kubernetes, because different vendors' Postgres operators are offering techniques to do coordinated upgrades, using the operator in Kubernetes to coordinate version changes. I think the Crunchy Data Postgres operator mentioned that in a previous article they published. Another consideration they talked about is the downtime required for an upgrade, so that's definitely always a consideration. Then they talked about different upgrade strategies. There's running pg_upgrade in link mode, which is what I tend to do; you can run pg_upgrade in copy mode, but that takes forever. They talked about logical replication as a solution, so basically logically replicating across to the new version; this takes a lot of time and testing to do correctly, but it is another viable route. They talked about Slony, which uses a trigger-based mechanism; frankly, I'd probably choose logical replication over Slony at this point, and they say here that development of Slony stopped a while ago, so that's another reason to probably avoid it. And then, of course, there's pg_dump and pg_restore. If you have a small database, that's definitely a viable path because it's super easy to do, but if you have a larger database, then you're probably looking at pg_upgrade. So you can definitely check out this blog post, or use the link to the YouTube video, if you want to get all the details.

[00:12:47] The next piece of content is "Ordinality in function queries". This is from fluca1978.github.io, and he's talking about the WITH ORDINALITY clause. When you develop a function that returns a table, in this case a three-column table from the function he developed, and you then call that function with a SELECT * FROM it, it outputs that data as a table with three columns. If you add WITH ORDINALITY, it appends an additional column called ordinality, a bigint that counts up by one for each row. Now, this happens to be identical to the PK column here, you'll see, but he just fabricated that PK column and it happens to match the ordinality column. Then he shows that the ordinality operates on the output of the function, so you can see that even when you ORDER BY random(), the ordinality values still stay in alignment with what the primary key field says. You can also give the ordinality column a name with an alias, and you can actually rename all the columns, and you can still do WHERE and ORDER BY against them to pull out the exact data that you want; in this case, he's only looking at the even rows. So if you want to learn more about this clause, definitely check out this blog post.
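
As a quick sketch of how the clause behaves (the function here is made up for illustration, not the one from the post):

    -- A simple function returning a table
    CREATE FUNCTION list_colors()
    RETURNS TABLE (pk int, color text)
    LANGUAGE sql
    AS $$ VALUES (1, 'red'::text), (2, 'green'), (3, 'blue') $$;

    -- WITH ORDINALITY appends a bigint row counter to the function's output
    SELECT * FROM list_colors() WITH ORDINALITY;
    --  pk | color | ordinality
    --   1 | red   |          1
    --   2 | green |          2
    --   3 | blue  |          3

    -- The added column can be renamed with an alias and used in WHERE or ORDER BY,
    -- for example to keep only the even rows:
    SELECT *
    FROM list_colors() WITH ORDINALITY AS t(pk, color, row_num)
    WHERE row_num % 2 = 0;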

[00:14:09] The next piece of content is "H3 indexes for performance with PostGIS data". This is from rustprooflabs.com, and they're talking about the H3 hex grid extension, which helps map out data using hexagons. This particular post looks at specific columns, like an H3 index column offered by the H3 extension, as well as an index on that column on a particular table, and it actually helped accelerate nearest-neighbor-style searches as well as regional analysis. Looking at the conclusions below, they saw nearest-neighbor searches perform 73% to 77% faster with these H3 indexes, and their regional analysis was 99% faster. So there's a definite benefit to using this extension and these particular types of indexes. But if you want to learn more, definitely check out this blog post.

[00:15:00] The next piece of content is "Five minutes of Postgres, episode 23: Fuzzy text search and case-insensitive ICU collations in Postgres". This is from pganalyze.com, and he goes over in depth the fuzzy text search article and the case-insensitive ICU collations that we mentioned last week on Scaling Postgres. So if you want Lukas's perspective on them, definitely check out this piece of content.

[00:15:22] The next piece of content is "How we made data aggregation better and faster on PostgreSQL with TimescaleDB 2.7". This is from timescale.com, and primarily what this post is talking about is the Timescale feature called continuous aggregates. Now, my interpretation of continuous aggregates is that you have a time-series database, maybe with three years of data in it, and you want to run a query frequently during the day over those three years of data, including the most recent data. With a continuous aggregate, I'm assuming you're maintaining an aggregate of that three years of data, maybe minus one day or minus two days, or it's bucketed in such a way that historical data is pre-aggregated, so you just have to throw buckets of data together quickly, or it's all in one bucket and you just add the most recent data. I'm not sure about the implementation, but it's a way to maintain these aggregates so you don't have to run over the whole three years of data every time. They've added a new enhancement to this feature that has led to some pretty impressive speed improvements. On the low end it looks like about a 2.6x improvement with a particular workload, but it goes as high as thousands of times: there's an over 1,000 times faster aggregation result here, and a 44,000 times faster aggregation result as well, along with some smaller ones. But if you want to learn more about this new feature and some of its benefits, definitely check out this blog post.

The next piece of content: the PostgreSQL Person of the Week is Hervé Schweitzer. If you're interested in Hervé and his contributions to Postgres, definitely check out this blog post.

And the last piece of content: we did have another episode of the Rubber Duck Dev Show this past Wednesday evening. This one was on Ruby 3 keyword arguments, so if you're interested in the changes made to keyword arguments in Ruby 3, we definitely welcome you to check out our show.

[00:17:11] That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.

Other Episodes

Episode 121

July 06, 2020 00:16:33

Fantastic Data Types, Domain Data Types, Insert Performance, Row Differences | Scaling Postgres 121

In this episode of Scaling Postgres, we discuss fantastic data types, custom domain data types, improving insert performance and how to query row differences....

Episode 48

January 27, 2019 00:14:41

Microsoft Acquires Citus Data, Split WAL, Maintenance Work Mem | Scaling Postgres 48

In this episode of Scaling Postgres, we review articles covering Microsoft acquiring Citus Data, split WAL files, maintenance_work_mem and logical replication. To get the...

Episode 98

January 26, 2020 00:13:00

Non-Relational Data, Contributors, Security Features, High Availability | Scaling Postgres 98

In this episode of Scaling Postgres, we discuss storing non-relational data, recognizing contributors, Postgres security features and implementing high availability. To get the show...
