For example, they mention that their automated data retention is achieved with a single SQL command, that DELETEing records is a very costly operation, and that "even if you were using Postgres declarative partitioning you’d still need to automate the process yourself, wasting precious developer time, adding additional requirements, and implementing bespoke code that needs to be supported moving forward".
There's zero mention anywhere of pg_partman, which does all of these things for you just as simply, and is a free, fully open-source alternative.
I get that it's a PG extension that competes with their product, and I know that TimescaleDB does a few things pg_partman does not. But I can't help but find its (seemingly) deliberate omission from these otherwise very thorough blog posts misleading.
Or hybrid databases like StarRocks or TiDB?
This could be done by just calculating the start date in code too.
> When hypertables are compressed the amount of data that queries need to read is reduced, leading to dramatic increases in performance of 1000x or more.
At my workplace we recently experimented with storing time series data in an array inside a Postgres row. The array gets compressed via TOAST and can store thousands of ints in just a few DB pages (i.e. loading it costs about the same as an index scan). We also use Timescale for a different service, mind you, but I'm fairly sure this format is more efficient than the Timescale one. In Timescale you would need rows of, for example, (date, user_id, time_on_site), one row per day. The Postgres array format (start_date, user_id, time_on_site_by_date), where index 0 corresponds to start_date, index 1 to start_date + 1 day, and so on, is about 1/3rd the size uncompressed. And yeah, even if something is compressed, you still gotta put the uncompressed version in memory somewhere.
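A minimal Python sketch of that offset-indexed layout (the names and sample values are made up for illustration): one "row" per user holds a start date plus a packed array, and a given day's value is recovered by computing its offset from the start date rather than storing a date per value.

```python
from datetime import date

# Hypothetical array-per-user layout: instead of one (day, user_id, value)
# row per day, store one row as (start_date, user_id, [v0, v1, ...]) where
# index i holds the value for start_date + i days.

def value_for_day(start_date: date, values: list[int], day: date) -> int:
    """Look up the value for `day` by computing its offset from start_date."""
    offset = (day - start_date).days
    if not 0 <= offset < len(values):
        raise KeyError(f"{day} is outside the stored range")
    return values[offset]

# One user's five days of data, packed into a single "row"
start = date(2023, 1, 1)
time_on_site = [120, 95, 0, 301, 44]

print(value_for_day(start, time_on_site, date(2023, 1, 4)))  # prints 301
```

The trade-off is the one described above: the per-value date column disappears entirely (index position encodes it), which is where the roughly 1/3 size saving comes from, at the cost of needing the whole array in memory once decompressed.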
I can’t tell whether HA relies on the standard PostgreSQL tooling or whether you have to pay for some kind of enterprise license to get it.
Looks like these are quite different in features: https://docs.timescale.com/timescaledb/latest/timescaledb-ed...
Update/delete of current chunk data is blocked during compression of old data: https://github.com/timescale/timescaledb/issues/4432