Commit Graph

11 Commits (7620912d84f6a8b24143f1340dd653f44b13bf30)

Author SHA1 Message Date
Amber Brown 32e7c9e7f2 Run Black. (#5482) 2019-06-20 19:32:02 +10:00
Amber Brown 49af402019 run isort 2018-07-09 16:09:20 +10:00
Richard van der Hoff 43e02c409d Disable partial state group caching for wildcard lookups
When _get_state_for_groups is given a wildcard filter, just do a complete
lookup. Hopefully this will give us the best of both worlds by not filling up
RAM if we only need one or two keys, but also making the cache still work
for the federation reader use case.
2018-06-22 11:52:07 +01:00
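
To illustrate the behaviour this commit describes, here is a minimal sketch of the dispatch between a wildcard (complete) lookup and a filtered, cached lookup. The helper names (per_group_cache, fetch_full_state, fetch_filtered_state) are hypothetical stand-ins, not Synapse's real internals.

```python
# A minimal sketch, assuming hypothetical helpers; not Synapse's real code.
WILDCARD = None  # stand-in for "no filter supplied"


def get_state_for_group(group_id, key_filter, per_group_cache,
                        fetch_full_state, fetch_filtered_state):
    """Return the requested state keys for one state group.

    With a wildcard filter we skip the partial cache entirely and do a
    complete lookup, so the cache is not flooded when a caller wants
    everything.  With a narrow filter we consult the cache and only fetch
    the misses, which keeps memory usage low for one-or-two-key lookups.
    """
    if key_filter is WILDCARD:
        # Wildcard: complete lookup, no partial caching of the result.
        return fetch_full_state(group_id)

    cached = per_group_cache.setdefault(group_id, {})
    missing = [k for k in key_filter if k not in cached]
    if missing:
        # Only fetch the keys we do not already have cached.
        cached.update(fetch_filtered_state(group_id, missing))
    return {k: cached[k] for k in key_filter if k in cached}
```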
Amber Brown df9f72d9e5 replacing portions 2018-05-21 19:47:37 -05:00
Erik Johnston 7c7706f42b Fix bug where state cache used lots of memory
The state cache bases its size on the sum of the sizes of its entries. The
size of an entry is calculated once on insertion, so it is important
that the size of entries does not change afterwards.

The DictionaryCache modified the entries' size, which caused the state
cache to incorrectly think it was smaller than it actually was.
2018-03-15 15:46:54 +00:00
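
The bug above comes down to mutating a value after its size has been recorded. A toy illustration (not Synapse code) of why that breaks size-based accounting:

```python
class SizedCache:
    """Records each entry's size once, at insertion time."""

    def __init__(self):
        self._data = {}
        self.total_size = 0  # sum of sizes as measured at insertion

    def set(self, key, value):
        self.total_size += len(value)
        self._data[key] = value


cache = SizedCache()
entry = {"a": 1}
cache.set("group1", entry)
print(cache.total_size)  # 1

# Mutating the entry after insertion grows the real memory footprint,
# but the recorded total never changes, so size-based eviction kicks in
# far too late.
entry["b"] = 2
entry["c"] = 3
print(cache.total_size)  # still 1, although the entry now holds 3 keys
```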
Erik Johnston bbfe4e996c Make get_state_groups_from_groups faster.
Most of the time was spent copying a dict to filter out sentinel values
that indicated that keys did not exist in the dict. The sentinel values
were added to ensure that we cached the non-existence of keys.

By updating DictionaryCache to itself keep track of which keys are known
not to exist, we can remove a dictionary copy.
2017-05-17 15:12:15 +01:00
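
A rough sketch of the two approaches this commit contrasts: storing sentinel values that must be filtered out on every read, versus keeping a separate record of known-absent keys. The class and method names here are illustrative only, not the real DictionaryCache API.

```python
_SENTINEL = object()


def lookup_with_sentinels(cache_entry, keys):
    # Old approach: absent keys are stored as a sentinel value, so every
    # read has to copy/filter the dict to strip the sentinels back out.
    return {k: v for k, v in cache_entry.items()
            if k in keys and v is not _SENTINEL}


class EntryWithKnownAbsent:
    """New approach: real values and known-absent keys are tracked
    separately, so the value dict never needs a filtering copy."""

    def __init__(self):
        self.values = {}           # only keys that actually exist
        self.known_absent = set()  # keys confirmed not to exist

    def lookup(self, keys):
        hits = {k: self.values[k] for k in keys if k in self.values}
        # A key is a genuine miss only if it is neither cached nor known
        # to be absent; those are the keys that still need a DB lookup.
        misses = [k for k in keys
                  if k not in self.values and k not in self.known_absent]
        return hits, misses
```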
Erik Johnston f85b6ca494 Speed up cache size calculation
Instead of calculating the size of the cache repeatedly, which can take
a long time now that it can use a callback, cache the size and update it
on insertion and deletion.

This requires changing the cache descriptors to have two caches, one for
pending deferreds and the other for the actual values. There's no reason
to evict from the pending deferreds as they won't take up any more
memory.
2017-01-17 11:18:13 +00:00
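
A simplified sketch of the bookkeeping this commit describes: keep a running size total that is adjusted on insertion and eviction, rather than re-summing a per-entry size callback over the whole cache on every read. Names and the eviction policy are illustrative, not Synapse's actual implementation, and the separate pending-deferreds cache mentioned above is not shown.

```python
class RunningSizeCache:
    """Tracks a running total of entry sizes instead of re-summing them."""

    def __init__(self, size_callback=len, max_size=1000):
        self._data = {}
        self._size_callback = size_callback
        self._max_size = max_size
        self._current_size = 0  # adjusted incrementally, never recomputed

    def set(self, key, value):
        if key in self._data:
            # Replacing an entry: drop its old contribution first.
            self._current_size -= self._size_callback(self._data.pop(key))
        self._data[key] = value
        self._current_size += self._size_callback(value)
        while self._current_size > self._max_size and self._data:
            # Evict the oldest entry (dicts preserve insertion order).
            old_key = next(iter(self._data))
            self._current_size -= self._size_callback(self._data.pop(old_key))

    def size(self):
        return self._current_size  # O(1), no callback re-evaluation
```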
Erik Johnston 73c7112433 Change CacheMetrics to be quicker
We change it so that each cache has an individual CacheMetric, instead
of having one global CacheMetric. This means that when a cache tries to
increment a counter it does not need to go through as many layers of
indirection.
2016-06-03 11:26:52 +01:00
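
A minimal sketch of the per-cache metrics layout this commit describes: each cache owns its own counter object, so recording a hit is a direct attribute increment rather than a lookup through a shared global registry. This is an illustration, not Synapse's actual CacheMetric class.

```python
class CacheMetric:
    """Hit/miss counters belonging to a single named cache."""

    def __init__(self, name):
        self.name = name
        self.hits = 0
        self.misses = 0

    def inc_hits(self):
        self.hits += 1

    def inc_misses(self):
        self.misses += 1


class MetricedCache:
    def __init__(self, name):
        self._data = {}
        self.metrics = CacheMetric(name)  # one metric object per cache

    def get(self, key, default=None):
        if key in self._data:
            self.metrics.inc_hits()
            return self._data[key]
        self.metrics.inc_misses()
        return default
```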
Matthew Hodgson 6c28ac260c copyrights 2016-01-07 04:26:29 +00:00
Erik Johnston 4807616e16 Wire up the dictionarycache to the metrics 2015-08-12 10:13:35 +01:00
Erik Johnston 2df8dd9b37 Move all the caches into their own package, synapse.util.caches 2015-08-11 18:00:59 +01:00