PGlite: Embeddable Postgres & postgres.new | Scaling Postgres 329

Episode 329 August 18, 2024 00:19:01
Scaling Postgres



Hosted By

Creston Jamison

Show Notes

In this episode of Scaling Postgres, we discuss PGlite, an embeddable Postgres, postgres.new which adds AI features, new Postgres releases and the performance of synchronous replication.

To get the show notes as well as get notified of new episodes, visit: 

https://www.scalingpostgres.com/episodes/329-pglite-embeddable-postgres/

Want to learn more about Postgres performance? Join my FREE training called Postgres Performance Demystified here: https://www.scalingpostgres.com/courses/postgres-performance-demystified/


Episode Transcript

[00:00:00] In this show we typically talk about performance and super large databases, multiple terabytes, maybe petabytes if you're lucky. But the first two articles I'm going to cover this week are about super small databases that are basically embeddable. But I hope you, your friends, family and coworkers continue to do well. Our first piece of content is "PGlite: Embeddable Postgres". This is from pglite.dev, and it isn't necessarily a blog post, but it's something I didn't know existed, and it was linked from a post I'm going to get to after this. Basically, this is a full Postgres database that runs locally under WebAssembly, and it can run in your web browser. So, for example, there is a running database right on the page, and you can create tables, insert data, select data, basically whatever you want. From my understanding this is not using any sort of Linux VM; it is running essentially under WebAssembly within JavaScript, and the database is three megabytes gzipped. It has support for some extensions such as pgvector and PostGIS, and it says it can do some data loading and synchronization. Looking at the about section, this is basically an ephemeral in-memory database, but you can persist it either to the file system using Node or Bun, or to IndexedDB, which is a storage system within the browser. This is being developed by ElectricSQL. What's super interesting about it is actually the next blog post, "postgres.new: In-browser Postgres with an AI interface". This is from supabase.com, and they basically took PGlite and merged it with an LLM, GPT-4o. Then they added features such as drag-and-drop CSV import, the ability to generate and export reports, to generate charts, or to build database diagrams. And they actually give the LLM control, so it can run database commands within this local database. They said they wanted to do this because this is essentially a sandboxed environment, so you don't have to worry about your data when running it within this PGlite system. They're also thinking of scenarios where, because this is a small sandboxed environment, you could basically spin up as many of these databases as you want for a particular type of work. Another name they're calling them is extremely cheap databases. So basically they go into more depth about what I just discussed. They set up an AI assistant using GPT-4o for you to interact with; you can ask it to create tables for you or to generate data. They have the ability to import and export CSVs, which will just create a table for you on the fly to import the data into. It can generate charts, as shown here, and even generate ER diagrams or migrations for schemas you may want to set up, and it can actually build those schemas within the database to validate that it's what you want. And because it supports the pgvector extension, like I mentioned with the previous post, they've built some additional tools where you can ask the AI assistant to, say, generate embeddings for five famous movie quotes, and it actually stores those in a dedicated table (meta.embeddings) in this local database and gives you back a reference or ID to each embedding that you can use in the future.
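To make the embeddings feature a bit more concrete, here is a minimal SQL sketch of the pgvector pattern involved. The table name, the tiny three-dimensional vectors, and the sample row are purely illustrative and not postgres.new's actual schema:

  -- enable the pgvector extension (one of the extensions PGlite supports)
  CREATE EXTENSION IF NOT EXISTS vector;

  -- a quote plus its embedding; vector(3) is only for brevity, real embedding
  -- models produce hundreds or thousands of dimensions
  CREATE TABLE movie_quotes (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    quote text NOT NULL,
    embedding vector(3)
  );

  INSERT INTO movie_quotes (quote, embedding)
  VALUES ('May the Force be with you.', '[0.1, 0.9, 0.2]');

  -- nearest-neighbor search by Euclidean distance
  SELECT id, quote
  FROM movie_quotes
  ORDER BY embedding <-> '[0.1, 0.8, 0.3]'
  LIMIT 3;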
They also built another tool called pg-gateway that allows you to connect to the database from an external client tool, say psql, because currently PGlite doesn't support that directly. The post then talks a little bit about PGlite and what it is. Now, postgres.new is a service that is hosted by Supabase right now, because it's interfacing with GPT-4o through OpenAI's API and everything. However, everything they used to put it together is open source, either libraries others have built or ones they built themselves: you have the PGlite embeddable database, which is open source; pg-gateway for server-side access to it; the postgres.new front end itself; and Transformers.js, which allows embeddings to be generated right in the browser alongside the Postgres running there. In terms of the future, they want to be able to connect the database up to S3 as a data storage solution. They want to support more file types for consuming data, such as Word docs or even images. They want to do database sharing, where you can just give someone a URL and they would have access to this small local database. They want to support OPFS, which gives more robust file system support for working with a host file system, and of course support database exporting as well. So I thought all of this was super fascinating. I don't have an idea of how I would use something like this today, but I'll have to give some thought to what it may mean in the future. Check it out if you're interested.

Next piece of content: "PostgreSQL 16.4, 15.8, 14.13, 13.16, 12.20, and 17 Beta 3 Released!". This is from postgresql.org and, as you can tell, these are new releases. There is one security bug that was fixed, as well as over 55 other bugs that have been reported over the last several months. The security issue is that when a privileged user is running pg_dump, it's possible for someone with object-creation permissions to replace a sequence or a similar object with a view or foreign table that executes malicious code. So therein lies the vulnerability: pg_dump has to be run by a privileged user while someone else has the privileges to change objects. And the last thing mentioned here, of course, is that Postgres 12 will be end of life as of November 14, 2024, so go ahead and upgrade if you haven't done so already.

[00:06:20] Next piece of content: "The Cost Implications of PostgreSQL Synchronous Replication". This is from enterprisedb.com. When you set up a normal replica for Postgres, it is set up in an asynchronous manner. What that means is that when an insert, update or delete is done on the primary database system, as soon as the change is written and committed to the WAL files locally, control returns immediately and the next SQL statement can be executed for that session. But that's only on the primary; it doesn't matter what happens on the replica, and you don't know whether that WAL change will eventually make it to the replica or not. So you could have a scenario where you write a change to the primary database, it's committed to the disk there, and that WAL change would eventually make it to the replica, but what if the primary immediately crashes? That data may not have been sent down the WAL stream and accepted by the replica, so it could be lost if you then chose to promote that replica. What synchronous replication does is try to reduce or fully eliminate data loss from this kind of scenario.
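Incidentally, you can see from the primary whether each standby is asynchronous or synchronous, and how far behind it is, using the standard pg_stat_replication view. A minimal sketch:

  -- run on the primary: one row per connected standby
  SELECT application_name,
         sync_state,   -- 'async', 'potential', 'sync' or 'quorum'
         write_lag,    -- time until the standby has written the WAL
         flush_lag,    -- time until the standby has flushed it to disk
         replay_lag    -- time until the change is applied and visible there
  FROM pg_stat_replication;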
In the post's test scenario, they actually set up three nodes: a primary node and two replicas that are synchronous replicas. So they set synchronous_standby_names to pgnode2 and pgnode3, and when you set this up there are several different synchronous_commit settings you can use. If it's set to local, then when a statement is executed on the primary, as soon as it is written and flushed to the disk locally, control immediately returns so the next statement can be executed; as you can tell, there's no remote guarantee at all. remote_write means the transaction has been written to the remote nodes' disks, so the synchronous replicas have written it, but it may not necessarily have been flushed yet. That flush is what on adds: when it's on, the replicas have both written and flushed that data to disk, so you can pretty much be assured that you're not going to lose data. The final option is remote_apply, which means the change has been written and flushed to disk on all the synchronous replicas and has also actually been applied to the database files themselves, so if you were to query a replica you would get exactly the same data as appears on the primary. When you're just going with on, it's possible that the change has been committed to the WAL on disk on the replica but not applied to the database files yet, so you could get different answers between the primary and the replicas; remote_apply eliminates that and makes sure the databases are fully in sync. But as you can imagine, as you go from local to remote_write to on to remote_apply, each of these adds more latency, because you have to wait longer and longer before a given session can return control to the client and say, okay, I'm ready for your next command. And this gets worse the greater the distance, or the greater the network latency, between the servers. So what they did in this blog post is fabricate a ten millisecond delay between each of the instances, and they used pgbench at different client counts to measure transactions per second and query latency. With the ten millisecond delay, you can see that at a client count of only 40 the synchronous options other than local were about half as fast, so that's a significant price to pay. However, as the number of clients increases, that advantage quickly diminishes. Their thought as to why this happens is that there's probably more work for the CPUs to do with a higher number of clients, whereas they mostly just wait around at the low client count. But really you can see that the ceiling of the system is around 4,500 transactions per second: local hits that rather quickly, whereas it just takes a bit longer for the other synchronous methods to reach the same ceiling. And consistently, as you can tell, the option that has to wait all the way until the changes are written to the actual database files, remote_apply, is the worst performing, whereas between remote_write and on there really wasn't much of a difference. The same goes for query latency: the lowest latency is with local, the highest with remote_apply, and the other options are basically in the middle.
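To recap those commit levels in configuration form, here's a rough sketch of a primary with two synchronous standbys; the standby names follow the post's example, and the exact values are illustrative:

  -- on the primary; both settings take effect on a configuration reload
  ALTER SYSTEM SET synchronous_standby_names = 'pgnode2, pgnode3';
  -- a bare list waits for the first available standby; 'FIRST 2 (pgnode2, pgnode3)'
  -- or 'ANY 2 (pgnode2, pgnode3)' would require acknowledgement from both
  ALTER SYSTEM SET synchronous_commit = 'on';  -- or 'local', 'remote_write', 'remote_apply'
  SELECT pg_reload_conf();

  -- synchronous_commit can also be relaxed per session or per transaction,
  -- for example for a bulk load that can tolerate a small loss window
  SET synchronous_commit = 'local';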
Now, when they drop the network delay to three milliseconds, you can see things get a lot tighter. There's hardly any difference between remote_apply and on now, and even local is only higher at the 40 client count; at the 80 client count there's hardly any difference between it and remote_apply, and it's probably statistically insignificant. The query latencies are super tight as well. So basically what I gather from this is that if you're going to use synchronous replication, you need to have the replicas as close together as possible, with the least amount of network latency between them. They should be in the same data center, or even in the same availability zone, to minimize the performance impact. And what does this mean for implementation? Maybe you have a primary and one synchronous replica in the same data center or availability zone, and then asynchronous replicas in other availability zones. Or maybe you accept the performance hit in order to have synchronicity across all the availability zones. But if you want to learn more, definitely check out this blog post.

[00:12:06] Next piece of content: "Postgres Troubleshooting: DiskFull ERROR could not resize shared memory segment". This is from crunchydata.com, and they're talking about the error message "could not resize shared memory segment". He says if you ever see this message, first, don't panic: it's not that your disk is full, that's a red herring and not accurate. What it means is that one of your database sessions is not getting enough memory. You're not really out of disk; it's basically a memory error. If you take a look at the logs, you can see this error message and find its process ID, and then you can look at the other log entries for that process ID, or at the log lines below the error message, to see the queries that caused it. For example, here it's a massive join of seven or eight tables, and this query caused that particular session to run out of memory. So next he asks: what can you do to resolve this issue? The first thing is to decrease your reliance on hash joins by adding indexes. If you're joining a bunch of larger tables together, you may be using hashes to do the joins; ideally you would want primary and foreign keys, with indexes, on those tables to get more efficient joins. The next option is to decrease the amount of work_mem being used for these operations. Another option is to decrease the number of max parallel workers you have; again, you're just trying to reduce the memory pressure you're putting on the system. Then dig into the queries themselves: is there a way to make them more efficient? Are you pulling too much data? And lastly, of course, you could always add more memory to your system, so rather than making things more efficient, just give the system more memory. But if you want to learn more, definitely check out this blog post.

[00:13:55] Next piece of content: there was another episode of Postgres FM last week. This one was on index-only scans. Basically, the way a query typically works is that you query a particular table, Postgres looks to see if there's a relevant index and, if there is, scans that index and then, based on that, goes and finds the rows in the heap table to answer the query you've given it. But if everything you've asked for in the query resides within the index, meaning you select the ID where the ID equals five and there's an index on that column, all the system has to do is read the index and it can give you the answer. In that case it does an index-only scan and doesn't have to go to the heap at all.
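Here's a minimal sketch of what that looks like; the table and query are illustrative, and the plan lines shown as comments are the kind of output you'd expect on a populated, recently vacuumed table:

  CREATE TABLE orders (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    customer_id bigint NOT NULL,
    total numeric NOT NULL
  );

  -- a query that only needs indexed columns can be answered from the index alone
  EXPLAIN (ANALYZE)
  SELECT id FROM orders WHERE id = 5;
  --   Index Only Scan using orders_pkey on orders ...
  --     Heap Fetches: 0   (stays at zero only while the visibility map is up to date)

  -- a covering index that carries an extra payload column, as discussed next
  CREATE INDEX orders_customer_idx ON orders (customer_id) INCLUDE (total);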
Now there are ways you can increase the probability of getting an index-only scan. You can use a multicolumn index, so all the columns you need for your query are within the index. Or you can use the INCLUDE clause, which adds data from the table as an additional payload on the index; that's another possibility. But the thing you also need to be aware of is that all the visibility information in Postgres resides in the heap. What that means is that when Postgres does an index-only scan, it checks another structure, the visibility map, to see whether a whole page is visible or not, and if it's not all-visible, it still needs to go to the heap. So even though it's trying to do an index-only scan, it may still have to visit the heap for certain pages. To have efficient index-only scans you want your visibility map to be as up to date as possible, and what updates the visibility map is vacuum, so basically you need to make sure your tables are well vacuumed. Nikolay and Michael also discussed one disadvantage of chasing index-only scans: you may add so many columns to the index that you hurt your chances of getting a heap-only tuple (HOT) update, which makes your updates less efficient, and you end up creating a large index in general. So beware the trade-offs: the more columns you add to an index trying to get an index-only scan, the more you may actually be hurting your insert, update and delete operations. They also advised adjusting random_page_cost down to 1.1 instead of the default of 4, to better optimize for index scans in general. But if you want to learn more, you can definitely listen to the episode or watch the YouTube video.

Next piece of content: "PG Phriday: My Postgres is Rusty". This is from bonesmoses.org, and he wanted to try Rust and use it with Postgres. So he talks about setting up a basic Rust program and interacting with Postgres from it. If you're interested in getting into Rust and using it with Postgres, you might find this piece of content interesting.

Next piece of content: "CloudNativePG Recipe 10: Simulating Production PostgreSQL on Kubernetes with Kind". This is from gabrielebartolini.it, and it's the next recipe, where he's using kind, which stands for Kubernetes in Docker. It lets you create a Kubernetes cluster on your local system to test out the CloudNativePG operator for Postgres. So check this out if you're interested in that.

Next piece of content: "Will Postgres development rely on mailing lists forever?". This is from vondra.me, and he's talking about how Postgres development currently runs on the mailing lists: they're the main channel for communication, patches and everything else. The other option a lot of open source projects use is something like GitHub or GitLab, with pull requests. He was saying he doesn't necessarily think that would work for really large, complex features; it's great for small features, but not necessarily large ones. But the question is how Postgres development should eventually evolve: should it stick with the mailing lists or change to something else as it moves forward? If you're interested in that discussion, you can check out this blog post.

And the last piece of content: "The new PostgreSQL 17 make dist".
This is from peter.eisentraut.org, and it talks about the generation of the source tarball that distributors then use to create Postgres packages for all the different distributions, and about some of the challenges of generating that tarball prior to 17. Basically, work went into 17 to make this much more reproducible and easier to do. So if you want to learn more about that, definitely check out this blog post.

[00:18:36] I hope you enjoyed this episode. Be sure to check out scalingpostgres.com, where you can find links for all the content mentioned, as well as sign up to receive weekly notifications of each episode. There you can also find an audio version of the show as well as a full transcript. Thanks, and I'll see you next week.

Other Episodes

Episode 18

June 25, 2018 00:17:31

Impact of Virtualization, Compressing Data, Parallelism | Scaling Postgres 18

In this episode of Scaling Postgres, we review articles covering the impact of virtualization on databases, techniques to compress your data and parallelism. To...


Episode 31

September 24, 2018 00:15:45

CTE Warning, PG 11 Features, Death by DB, Correlation | Scaling Postgres 31

In this episode of Scaling Postgres, we review articles covering a CTE warning, Postgres 11 new features, death by database and column correlation. To...


Episode 269

June 11, 2023 00:18:41

Upgrade Struggles, Hyperloglog, PgBouncer Usage, Postgres Scaling | Scaling Postgres 269

  In this episode of Scaling Postgres, we discuss Postgres upgrade struggles, how and why to use hyperloglog, using pgbouncer for session vs. transaction pooling,...
