Forgetting the specific queries for a moment (basically all queries in this table are relatively slow):
How would you handle such a scenario? Shard the database?
https://pgtune.leopard.in.ua/#/
Are you monitoring the machine to check whether it's starving on CPU?
Re the queries: we have been using pgbadger to collect metrics about DB usage and the slowest queries by type. This is helpful because it shows where to focus your efforts.
https://github.com/darold/pgbadger/blob/master/README.md
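For pgbadger to have anything useful to report on, Postgres needs to log slow statements with a log_line_prefix it can parse. A rough sketch of the settings involved (the threshold and prefix here are illustrative, not recommendations for your workload):

    -- Illustrative logging settings for pgbadger; tune to your workload.
    ALTER SYSTEM SET log_min_duration_statement = '250ms';  -- log statements slower than 250 ms
    ALTER SYSTEM SET log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h ';
    ALTER SYSTEM SET log_checkpoints = on;
    ALTER SYSTEM SET log_lock_waits = on;
    ALTER SYSTEM SET log_temp_files = 0;                     -- log every temp file created
    SELECT pg_reload_conf();                                 -- apply without a restart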
This is a very good reference about scaling Postgres:
https://pyvideo.org/pycon-ca-2017/postgres-at-any-scale.html
Sharding by month or some other time bucket could help.
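If "shard by month" means within a single instance, declarative partitioning (Postgres 10+) is the usual way to get that; a minimal sketch with made-up table and column names:

    -- Minimal sketch of range partitioning by month (hypothetical table).
    CREATE TABLE events (
        id         bigserial,
        created_at timestamptz NOT NULL,
        payload    text
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE events_2024_01 PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
    CREATE TABLE events_2024_02 PARTITION OF events
        FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

    -- Queries that filter on created_at only scan the matching partitions,
    -- and old months can be detached or dropped cheaply.
    SELECT count(*) FROM events
    WHERE created_at >= '2024-02-01' AND created_at < '2024-03-01';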
We have a very similar situation, except it's billions of rows. One thing that helps is that it's a bit denormalized: we store the meat of our data in an hstore field.
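Roughly what that pattern looks like, for anyone curious (the table and key names here are made up, not our actual schema):

    -- Sketch of keeping the denormalized payload in an hstore column.
    CREATE EXTENSION IF NOT EXISTS hstore;

    CREATE TABLE measurements (
        id          bigserial PRIMARY KEY,
        recorded_at timestamptz NOT NULL,
        attrs       hstore                -- the "meat" of the row as key/value pairs
    );

    INSERT INTO measurements (recorded_at, attrs)
    VALUES (now(), 'sensor=>"t-17", temp_c=>"21.4", status=>"ok"');

    -- Pull single keys back out; a GIN index speeds up key/containment lookups.
    SELECT attrs -> 'temp_c' FROM measurements WHERE attrs ? 'sensor';
    CREATE INDEX measurements_attrs_idx ON measurements USING gin (attrs);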