TimescaleDB 2.7 vs. PostgreSQL 14



boomskats 6d
Whenever I see these posts from TimescaleDB, I always want to ask them how it compares in performance to alternative extensions that implement the same features, rather than just comparing TimescaleDB to vanilla PostgreSQL.

For example, they mention that their automated data retention is achieved with one SQL command, that DELETEing records is a very costly operation, and that "even if you were using Postgres declarative partitioning you’d still need to automate the process yourself, wasting precious developer time, adding additional requirements, and implementing bespoke code that needs to be supported moving forward".

There's zero mention anywhere of pg_partman, which does all of these things for you just as simply, and is a fully OSS, free alternative [0].
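As a sketch of the comparison (table name and interval are made up for illustration): TimescaleDB sets up retention with one call, and pg_partman does roughly the same thing via its config table plus its maintenance routine, which detaches and drops expired partitions.

```sql
-- TimescaleDB: one command enables automatic retention on a hypertable
SELECT add_retention_policy('conditions', INTERVAL '1 month');

-- pg_partman: set retention on an existing partition set; the scheduled
-- run_maintenance() call (or the background worker) then drops partitions
-- older than the interval
UPDATE partman.part_config
   SET retention = '1 month',
       retention_keep_table = false
 WHERE parent_table = 'public.conditions';
```

Either way, expiry becomes a cheap partition/chunk drop instead of a costly bulk DELETE.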

I get that it's a PG extension that competes with their product. I know that TimescaleDB does a few other things that pg_partman does not. But I can't help but find its (seemingly) purposeful omission from these otherwise very thorough blog posts misleading.

[0] https://github.com/pgpartman/pg_partman/blob/master/doc/pg_p...

krn 6d
If integration with PostgreSQL is not a requirement, I would suggest looking at QuestDB[1], the best-performing open-source time-series database currently on the market.

[1] https://questdb.io/blog/2021/07/05/comparing-questdb-timesca...

chrisco255 6d
It should be noted that the article is titled "Timescale + Postgres vs. Postgres alone". Timescale is built on Postgres, so it's not really a competition per se.
didip 6d
What about TimescaleDB vs a lot of other OLAP databases like ClickHouse, Druid, Pinot, Databend, etc.?

Or hybrid databases like StarRocks or TiDB?

srcreigh 6d
> For example, one recent query planner improvement excludes data more efficiently for relative now()-based queries (e.g., WHERE time >= now()-’1 week’::interval).

This could also be done by just calculating the start date in application code.
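To illustrate the idea (table name assumed): a boundary computed in the application arrives as a plan-time constant, which partition/chunk exclusion can use directly, whereas now() is a stable function the planner cannot fold into a constant when planning.

```sql
-- Relative predicate: now() is evaluated at execution time, so without
-- the planner improvement, chunk exclusion happens late (or not at all)
SELECT * FROM conditions
WHERE time >= now() - INTERVAL '1 week';

-- Application-computed constant: the planner sees a literal boundary
-- and can exclude non-matching chunks/partitions at plan time
SELECT * FROM conditions
WHERE time >= '2022-06-06 00:00:00+00';
```

The trade-off is that the application has to recompute and resend the boundary on each query rather than relying on the database clock.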

> When hypertables are compressed the amount of data that queries need to read is reduced, leading to dramatic increases in performance of 1000x or more.

At my workplace we recently experimented with storing time series data in an array in a single Postgres row. The array gets compressed via TOAST and can store thousands of ints in just a few DB pages (i.e., loading it costs about the same as an index scan). We also use Timescale for a different service, mind you. I'm sure this format is more efficient than the Timescale format too: in Timescale you would need rows containing (for example) (date, user_id, time_on_site), one row per day, while the Postgres array format (start_date, user_id, time_on_site_by_date), indexed so that 0 = start_date, 1 = start_date + 1, and so on, is about 1/3rd the size uncompressed. And yes, even if something is compressed, you still have to put the uncompressed version in memory somewhere.
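A sketch of the array layout described above (table and column names are assumed, not the actual schema; note that Postgres arrays are 1-indexed by default, so the conceptual 0-offset becomes element 1):

```sql
-- One row per user per period instead of one row per user per day;
-- once the int[] exceeds ~2 kB it is moved to TOAST and compressed
CREATE TABLE time_on_site_daily (
    user_id              bigint NOT NULL,
    start_date           date   NOT NULL,
    -- element 1 = start_date, element 2 = start_date + 1 day, ...
    time_on_site_by_date int[]  NOT NULL,
    PRIMARY KEY (user_id, start_date)
);

-- Look up the value for a given day by offsetting into the array
SELECT time_on_site_by_date[('2022-06-15'::date - start_date) + 1]
FROM time_on_site_daily
WHERE user_id = 42 AND start_date = '2022-06-01';
```

The per-row overhead (tuple header, repeated user_id, per-day index entries) is what the dense array amortizes away.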

ithrow 6d
Is Timescale suited for keeping the full history of the data and then querying it as of any point in time?
znpy 6d
No mention of HA in the documentation.

I can’t understand whether HA relies on the standard PostgreSQL tooling or whether you have to pay for some kind of enterprise license to get it.

alecco 6d
How is this on HN frontpage?
PeterZaitsev 6d
I wonder whether in those benchmarks we are speaking about the TimescaleDB Apache edition or the non-OSS TimescaleDB Community edition?

Looks like these are quite different in features https://docs.timescale.com/timescaledb/latest/timescaledb-ed...

maierru 6d
tl;dr: a "read-only" database :(

Update/delete of current chunk data blocked during compression of old data https://github.com/timescale/timescaledb/issues/4432