[00:00:00] It's crazy. In the last year I've heard a ton of different things about DuckDB, about it being an analytical database and giving incredible performance for analytical workloads. But in the last few months I've heard of people trying to integrate it with Postgres to basically shore up the areas where Postgres isn't that great at analytical-type workloads. Well, this week we're going to talk about a potentially significant integration that's happening, but I hope you, your friends, family and coworkers continue to do well. Our first piece of content is "Splicing duck and elephant DNA". This is from motherduck.com. That's quite a name for a company.
[00:00:45] And basically this is the announcement of pg_duckdb. This is an open source Postgres extension that embeds DuckDB's analytical engine within Postgres, allowing you to do super fast analytical queries. Now, back in June I did cover a post by paradedb.com where they were integrating DuckDB as well. But this extension is going to be fully open source with a permissive MIT license, the IP is owned by the DuckDB Foundation, and it's actually even being hosted under the DuckDB GitHub organization. So this is definitely going to be a first-class citizen in the DuckDB ecosystem. And what's even more impressive is the number of organizations that are behind it. DuckDB Labs, which created and stewards DuckDB, is of course participating. There's MotherDuck, which apparently has a lot of experience running DuckDB, and they host DuckDB data; that's who this website is from. Also Hydra, which is a company that started working on analytical performance, I think maybe a year or two ago. The serverless Postgres company Neon is participating, as well as Microsoft. Now, I did mention that others have integrated DuckDB into their analytics solutions, and they mention here that Crunchy Data does have a commercial version; it's their Crunchy Data analytics offering, I think. And like I mentioned, ParadeDB built what they were calling pg_analytics, although in their most recent post back in June they actually changed the name, I think, to pg_lakehouse. So I don't know if pg_analytics is still a thing or not, or if it's all just pg_lakehouse now. Now, this announcement is super early, because they're still building the extension, so they have a lot of objectives they want to achieve with it. Someone from Hydra did show off some of its capabilities at DuckCon 5, which happened this week, but in the future they want to do things like support Postgres-native storage so that DuckDB can integrate with features such as backup and replication. They want full type compatibility with Postgres so that all the data types can interoperate. They want full function compatibility so that functions work within DuckDB as well, and also semantic compatibility, because they said there are some differences with regard to rounding or precision, and they basically want to make those identical as well. So this is super interesting. I encourage you to check out this blog post as well as the repository if you want to learn more about it.

Now, as a follow-on to that, there was a post on crunchydata.com talking about "Postgres powered by DuckDB: the modern data stack in a box". So again, this is with regard to their Crunchy Data analytics offering, and it's a deep-thinking post about where Postgres fits in with regard to analytics. For example, Postgres is very good at online transaction processing, but maybe not so much at online analytical processing, and how could those differences potentially be rectified? Maybe there's some sort of hybrid system, or you follow the scenario of embedding an analytical engine within Postgres, like they're talking about doing with DuckDB. It even brings into question situations where people are reading Parquet files on S3 for analytical processing: could you get better performance having the data on local NVMe drives? So feel free to review this post if you want to dig into some of those topics.
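Since pg_duckdb is so new, there isn't much in the way of usage examples yet, but just to make the idea concrete, here is a minimal hypothetical sketch of what working with it might look like once it's installed. The extension name matches the project; the orders table and the query are made up, and exactly how queries get routed to the embedded DuckDB engine is governed by the extension's own settings, so check the pg_duckdb README for those.

```sql
-- Minimal sketch, assuming pg_duckdb has been built and installed on the server.
-- The orders table is illustrative; routing of queries to the embedded DuckDB
-- engine is controlled by the extension's settings (see its README), which are
-- not reproduced here.
CREATE EXTENSION IF NOT EXISTS pg_duckdb;

-- An aggregate-heavy scan like this is the kind of workload the embedded
-- analytical engine is meant to accelerate.
SELECT date_trunc('day', created_at) AS day,
       count(*)    AS orders,
       sum(amount) AS revenue
FROM orders
GROUP BY 1
ORDER BY 1;
```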
There's also another post related to these, covering a TimescaleDB analytics trick. This is from kmoppel.github.io, and he mentions the challenge some people have had with processing analytics in Postgres, as well as the announcement by MotherDuck about the new DuckDB integration in the pg_duckdb extension. But he says if you want to do analytics with Postgres today, there are a few ways you could go about it. One, you could do what he calls careful data modeling. So basically, how can you pre-sort or pre-aggregate data in a way that allows you to answer analytical queries very quickly? Essentially you're pre-processing the data in some way. Another way is using TimescaleDB with its compression capabilities. They offer a column store solution that lets you significantly compress the data size, he's saying up to ten times, for example. Another option is using the ZFS file system with compression turned on, which might be able to give you better performance. Although, I've said this before, definitely be careful with your replicas, because I have seen cases where replicas have struggled to keep up when using compressed ZFS volumes. And lastly he mentions using a foreign data wrapper to access compressed static data. I don't know how performant this would necessarily be, but it is another option. He closes out the post with an example of a smallish server trying to do analytics on a particular data set, first using pure Postgres and then using TimescaleDB with the column-based store and compression, and it basically shows a tenfold improvement in runtime. But if you want to learn more, definitely check out this blog post.

Next piece of content: "Postgres as a search engine". This is from anyblockers.com, and this is a pretty comprehensive post about different ways you can do text search within Postgres. Right now Postgres basically offers three ways to do what I'll call text-based searching, meaning not using a B-tree index: you can do full text search using tsvectors, you can do semantic search using pgvector, or you can do fuzzy matching using the pg_trgm extension. What this post does is take all of these methods and put them together in one query, trying to get you the best of all worlds. Full text search gives you very good lexical search, basically identifying words or the roots of words. Semantic search shows you things with similar meanings, like pasta being similar to pizza, not because the words are similar, but because maybe they both have tomato sauce in them, for example. And then fuzzy matching takes into account potential misspellings or a missing letter; with trigram search you can find those matches as well. So he has a documents table with a title, a tsvector column for the full text search, and a vector column for the semantic search. What he's doing is a CTE where he covers the full text query here and the semantic query here; he doesn't do the fuzzy search yet. Then he does a join technique using reciprocal rank fusion, basically a way to take these result sets and merge them together appropriately to give you the best search results. With that base he then adds the fuzzy search on top, and it's essentially just another part of the CTE that gets combined with the other two. Then he goes into a section about how to debug and understand how things are being ranked and why, as well as tuning different things, like the full text search, to give you the best results.
I found this a super interesting blog post for getting the most comprehensive search capability possible using just Postgres. Now, given that you are doing three different types of searches at the same time, performance isn't mentioned; I imagine it's not super great, so you may have to lean on other columns. But he didn't mention any performance benchmarks in the post that I could see. Basically this is a solution for doing as good a Postgres search as you can, and if something like this doesn't fit your needs, then you're probably going to have to look at another search solution like Elasticsearch or something else. Or, you know, we mentioned ParadeDB earlier in this episode; they have a Postgres extension that actually embeds a text searching engine with similar capabilities to Elasticsearch, so maybe that's another option you would want to explore. But check out this blog post if you want to learn more.
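To make the technique a bit more concrete, here's a minimal sketch of the reciprocal rank fusion approach the post describes, assuming a documents table with a search_vector tsvector column and a pgvector embedding column. The column names, the search terms, the placeholder embedding, and the conventional smoothing constant of 60 are all illustrative rather than the author's exact code, and the fuzzy pg_trgm arm would just be a third CTE folded in the same way.

```sql
WITH full_text AS (
    -- lexical arm: rank matches by full text relevance
    SELECT id,
           row_number() OVER (
               ORDER BY ts_rank(search_vector,
                                websearch_to_tsquery('english', 'tomato pasta')) DESC
           ) AS rnk
    FROM documents
    WHERE search_vector @@ websearch_to_tsquery('english', 'tomato pasta')
    ORDER BY rnk
    LIMIT 50
),
semantic AS (
    -- semantic arm: rank by distance to the query embedding (placeholder vector)
    SELECT id,
           row_number() OVER (
               ORDER BY embedding <=> '[0.1,0.2,0.3]'::vector
           ) AS rnk
    FROM documents
    ORDER BY embedding <=> '[0.1,0.2,0.3]'::vector
    LIMIT 50
)
-- reciprocal rank fusion: sum 1 / (60 + rank) across the result lists
SELECT id,
       coalesce(1.0 / (60 + f.rnk), 0)
     + coalesce(1.0 / (60 + s.rnk), 0) AS rrf_score
FROM full_text f
FULL OUTER JOIN semantic s USING (id)
ORDER BY rrf_score DESC
LIMIT 10;
```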
[00:09:27] Next piece of content: there was another episode of Postgres FM last week. This one was on getting started with benchmarking, and here Michael and Nikolai were joined by Melanie Plageman, who is a database internals engineer at Microsoft, to talk all about benchmarking, because a lot of her work on Postgres involves working on patches and then benchmarking them, hopefully to see performance improvements, but definitely to avoid performance regressions. So they basically felt the main reason for doing benchmarking is during development, to make sure you see performance gains, but also, pretty importantly, that you don't make things worse. They also discussed whether benchmarking is useful during upgrades. For example, a particular company may want to know, hey, if we go from Postgres 14 to Postgres 16, is anything going to get worse? Could we do benchmarking to find that out? I think everyone on the panel universally agreed that that's really hard to do using something like pgbench to replicate a production workload, because there's always so much going on. What Nikolai volunteered is that one way he's helped people make this assessment is to look at, say, the top hundred queries on a particular system, run them with a production-like data set on the current version to see what the costs and buffers look like with regard to performance, then upgrade the database, ask the same question for those hundred queries, and see whether any regressions have happened. He felt that was a much better way to make that type of assessment. The other way I've seen benchmarking being used is for a proof of concept: if you have two or three ideas for how you could structure data for a given feature, you could use pgbench testing to try the different variations and see which performs better. They also talked a lot about the observability tools Melanie uses in her benchmarking work: a lot of the pg_stat views, because those are basically statistics on what the system is doing, as well as various OS metrics captured at the same time, and even some additional extensions that give you greater insight into the running system. And because she had a major hand in getting the new pg_stat_io view into Postgres, they talked about that as well. But this is a great episode. You can definitely listen to it here or watch the YouTube video down here.
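As a rough sketch of the per-query comparison Nikolai describes, rather than trying to replicate a whole workload with pgbench, you can pull your most expensive statements from pg_stat_statements and run them under EXPLAIN (ANALYZE, BUFFERS) on both versions. The orders query below is purely illustrative, and on versions before Postgres 13 the column is total_time rather than total_exec_time.

```sql
-- Find candidate statements to compare, ordered by where the time is going.
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 100;

-- Run each candidate on the old and new major version against a
-- production-like data set and compare the timings and buffer counts.
EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, sum(amount)
FROM orders
WHERE created_at >= now() - interval '30 days'
GROUP BY customer_id;
```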
[00:11:55] Next piece of content is "Good benchmark engineers and Postgres benchmark week". This is from ardentperf.com, and they definitely saw the episode of Postgres FM, because this blog post is kind of a review of it and talks about benchmarking in general from his perspective. So if you want to learn more about that, you can check out this blog post.
[00:12:18] Next piece of content: "How Postgres stores data on disk". This one's a page turner, pun intended. This is from drew.silcock.dev, and he goes through the process of showing you how Postgres actually physically stores your data on disk. Normally when you interact with Postgres, you just say insert this data into a table; well, that's primarily a logical representation. What is actually physically happening on the disk? Where are things being stored in the directories and files, and how is the data laid out within those files? This blog post answers some of those questions, and there's a quick sketch below of how to peek at some of this yourself.

Next piece of content: "CloudNativePG: connecting external applications". This is from dbi-services.com. So in this post, again, CloudNativePG is a Kubernetes operator, and Daniel has had many posts recently on setting it up. Well, in this one he shows you how you can connect an external application to Postgres running within a Kubernetes cluster. So if you're interested in that, you can check out this blog post.
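Going back to the storage post for a moment, here's the quick sketch I mentioned: a few built-in functions you can use to see the physical side of a table for yourself. These are standard Postgres functions; my_table is just an illustrative name.

```sql
SHOW data_directory;                       -- where the cluster lives on disk
SELECT pg_relation_filepath('my_table');   -- file backing the table, relative to the data directory
SELECT pg_relation_size('my_table');       -- size of the table's main fork in bytes
```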
[00:13:21] Next piece of content: "CloudNativePG recipe 11: isolating PostgreSQL workloads in Kubernetes with kind". This is from gabrielebartolini.it, and here he's discussing how to run your Postgres workloads in Kubernetes on their own nodes, keep application containers from running on those nodes, and further prevent the Postgres containers from running on the same node as each other. Because if you set up three Postgres containers, you don't want them running on the same node within the cluster; you want them on separate nodes. So he talks about all the different configuration you can do to ensure that doesn't happen. But if you want to learn more, check out this blog post.

The last piece of content is "PostgreSQL hacking workshop September 2024". This is from rhaas.blogspot.com, and the upcoming Postgres hacking workshop is happening in September; it's a walkthrough of implementing a simple Postgres patch, from sources to CI. He talks about how you can sign up to join, as well as reminders of what's expected when you join and participate. So check this out if you're interested.
[00:14:31] I hope you enjoyed this episode. Be sure to check out scalingpostgres.com, where you can find links to all the content mentioned, as well as sign up to receive weekly notifications of each episode there. You can also find an audio version of the show, as well as a full transcript. Thanks, and I'll see you next week.